# New bootstrap tests for categorical time series. A comparative study

Ángel López-Oriona, José Antonio Vilar Fernández, Pierpaolo D'Urso

2023-04-30 · http://arxiv.org/abs/2305.00465v1
###### Abstract
The problem of testing the equality of the generating processes of two categorical time series is addressed in this work. To this aim, we propose three tests relying on a dissimilarity measure between categorical processes. Particular versions of these tests are constructed by considering three specific distances evaluating discrepancy between the marginal distributions and the serial dependence patterns of both processes. Proper estimates of these dissimilarities are an essential element of the constructed tests, which are based on the bootstrap. Specifically, a parametric bootstrap method assuming the true generating models, as well as extensions of the moving blocks bootstrap and the stationary bootstrap, are considered. The approaches are assessed in a broad simulation study including several types of categorical models with different degrees of complexity. The advantages and disadvantages of each method are discussed according to its behavior under the null and the alternative hypotheses. The impact that some important input parameters have on the results of the tests is also analyzed. An application involving biological sequences highlights the usefulness of the proposed techniques.
keywords: categorical time series, hypothesis tests, distance measures, bootstrap.
## 1 Introduction
The problem of comparing two time series arises in a natural way in multiple fields, including artificial intelligence, economics, computer science, biology,
medicine or chemistry, among others. For instance, an investor often has to determine whether two particular assets show the same behavior over time based on historical data. In medicine, it is often of interest to find out to what extent ECG signals from different subjects exhibit similar patterns. A broad variety of data mining and statistical methods have been proposed to address this kind of problem, including clustering [1], classification [2], outlier detection [3], and comparisons through hypothesis tests [4]. It is worth highlighting that these approaches have mainly focused on real-valued time series [5, 6, 7, 8, 9], while the study of time series with alternative ranges, for instance, categorical time series (CTS), has received much less attention [10, 11]. This is surprising, since CTS arise frequently in important tasks. Some illustrative examples are the stochastic modeling of DNA sequence data [12, 13], the analysis of EEG sleep state scores [14], and the use of hidden Markov models (HMM) to analyze protein sequences [15].
Frequently, these techniques require evaluating the dissimilarity between time series, which is not a simple task due to the dynamic character of these objects. In fact, the problem of determining a proper distance measure between time series has become an important research topic. In the real-valued setting, [7] provided a clustering algorithm for time series based on an innovative distance comparing the so-called quantile autocovariance functions. Other dissimilarity criteria recently proposed to construct clustering procedures include distances between estimated GARCH coefficients [16], B-spline representations [17], and estimated conditional moments [18], among many others. Several methods employing distance measures have also been proposed for classifying real-valued series [19, 20]. The definition of a suitable dissimilarity becomes even more complex in the categorical context, since most of the standard tools used to deal with real-valued time series (e.g., the autocorrelation function) are no longer valid when analyzing CTS. [21] introduced a dissimilarity between CTS which evaluates both closeness between raw categorical values and proximity between dynamic patterns. [11] proposed two novel feature-based distances between categorical series measuring discrepancy between their marginal distributions and their underlying serial dependence patterns. In both works, the corresponding metrics are applied to perform CTS clustering.
The aim of the present work is to introduce procedures to test that two categorical processes are equal in terms of marginal distributions and serial dependence structures. Specifically, let \(\{X_{t}^{(1)},t\in\mathbb{Z}\}\) and \(\{X_{t}^{(2)},t\in\mathbb{Z}\}\) be two independent stationary categorical processes with range \(\mathcal{V}=\{1,\ldots,r\}\) and denote by \(\boldsymbol{\pi}^{(1)}=(\pi_{1}^{(1)},\ldots,\pi_{r}^{(1)})\) and \(\boldsymbol{\pi}^{(2)}=(\pi_{1}^{(2)},\ldots,\pi_{r}^{(2)})\), respectively, the corresponding vectors of marginal probabilities, that is, \(\pi_{h}^{(i)}=P\big{(}X_{t}^{(i)}=h\big{)}\), \(i=1,2\), \(h=1,\ldots,r\). In addition, given a lag \(l\in\mathbb{Z}\) and \(j,k\in\mathcal{V}\), let \(p_{jk}^{(i)}(l)\) be the corresponding lagged joint probability for process \(\{X_{t}^{(i)},t\in\mathbb{Z}\}\), that is, \(p_{jk}^{(i)}(l)=P(X_{t}^{(i)}=j,X_{t-l}^{(i)}=k)\), \(i=1,2\). The null hypothesis we consider can be stated as
\[H_{0}:\boldsymbol{\pi}^{(1)}=\boldsymbol{\pi}^{(2)}\ \ \text{and}\ \ p_{jk}^{(1)}(l)=p_{jk}^{(2)}(l)\ \ \forall(j,k,l)\in\mathcal{V}^{2}\times\mathbb{Z}. \tag{1}\]
In order to perform the hypothesis test in (1), we consider three distance measures between categorical stochastic processes, whose estimates were employed by [11] to perform clustering of CTS. Two of these dissimilarities are based on extracted features describing the marginal properties and the serial dependence structures of both stochastic processes. The remaining metric relies on the coefficients defining a given categorical model. In the first two cases, the distances take the value of \(0\) when the null hypothesis is true, which makes the estimates of these metrics a reasonable tool to carry out the test in (1). It is worth highlighting that the computation of the asymptotic distribution of these estimates under the null hypothesis is a very challenging problem if a specific generating structure is not assumed, so resampling techniques can be considered to perform the test.
Based on previous considerations, three bootstrap methods are proposed in this work to approximate the null distribution of the considered estimates. The first technique is a parametric test which assumes a specific class of categorical model for both stochastic processes. The crucial step of this procedure is based on the generation of time series from a process which contains information about both original series in equal measure. The remaining tests are extensions of two bootstrap approaches specifically designed to deal with dependent data, namely the moving blocks bootstrap (MBB) [22, 23] and the stationary bootstrap (SB) [24]. In both cases, the key principle is to generate pseudo-series with the aim of mimicking the distribution under the null hypothesis of the corresponding estimates without assuming specific parametric models for the generating processes. The bootstrap approaches based on the three metrics are compared in terms of size and power by means of a broad simulation study. Several types of generating processes are considered under the null and alternative hypotheses. Finally, an interesting application involving biological sequences highlights the usefulness of the proposed methods. It is worth remarking that, although there exist many statistical tests for assessing dissimilarity between the generating processes of two time series [25, 26, 27], most of them focus on the real-valued setting. In fact, to the best of our knowledge, there exist no works in the literature dealing with the comparison of the generating structures of two categorical series.
The rest of the article is organized as follows. The three considered distances between categorical processes are defined in Section 2 after introducing some features measuring serial dependence within these type of processes. The three bootstrap techniques to carry out the test in (1) are presented in Section 3. A description of the simulation experiments performed to compare the proposed tests is provided in Section 4 along with the corresponding results and discussion. Section 5 contains the application of the bootstrap tests and Section 6 concludes.
## 2 Background on three dissimilarity measures for categorical series
Hereafter, \(\{X_{t},t\in\mathbb{Z}\}\) (or just \(X_{t}\)) denotes a categorical stochastic process taking values on a number \(r\) of unordered qualitative categories, which are coded from \(1\) to \(r\) so that the range of the process can be seen as \(\mathcal{V}=\{1,\ldots,r\}\). It is
assumed that \(X_{t}\) is bivariate stationary, that is, the pairwise joint distribution of \((X_{t},X_{t-l})\) is invariant in \(t\) for arbitrary \(l\) (see [13]). The marginal distribution of \(X_{t}\) is denoted by \(\boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{r})\), with \(\pi_{j}=P(X_{t}=j)\), \(j=1,\ldots,r\). For a fixed \(l\in\mathbb{N}\), we use the notation \(p_{ij}(l)=P(X_{t}=i,X_{t-l}=j)\), with \(i,j\in\mathcal{V}\), for the lagged joint probability and the notation \(p_{i|j}(l)=P(X_{t}=i|X_{t-l}=j)=p_{ij}(l)/\pi_{j}\) for the conditional lagged probability.
Next, we introduce different sets of features that can be used to describe the process \(X_{t}\) and, afterwards, we present the dissimilarity measures based on the corresponding sets of features.
### Structural features for categorical processes
In order to extract suitable features characterizing the serial dependence of a given categorical process, we begin by defining the concepts of perfect serial independence and dependence for a categorical process. Following [13], we have perfect serial independence at lag \(l\in\mathbb{N}\) if and only if \(p_{ij}(l)=\pi_{i}\pi_{j}\) for any \(i,j\in\mathcal{V}\). On the other hand, we have perfect serial dependence at lag \(l\in\mathbb{N}\) if and only if the conditional distribution \(p_{.|j}(l)\) is a one-point distribution for any \(j\in\mathcal{V}\). This way, in a perfectly serially independent process, knowledge about \(X_{t-l}\) does not help at all in predicting the value of \(X_{t}\). Conversely, in a perfectly serially dependent process, the value of \(X_{t}\) is completely determined by \(X_{t-l}\).
There are several association measures that describe the serial dependence structure of a categorical process at lag \(l\). One such measure is the so-called Cramer's \(v\), which is defined as
\[v(l)=\sqrt{\frac{1}{r-1}\sum_{i,j=1}^{r}\frac{(p_{ij}(l)-\pi_{i}\pi_{j})^{2}}{ \pi_{i}\pi_{j}}}. \tag{2}\]
The quantity \(v(l)\) has range \([0,1]\), with the values \(0\) and \(1\) associated with the cases of perfect serial independence and perfect serial dependence at lag \(l\), respectively. Note that the numerator appearing in the summation of (2) measures the deviation of \(p_{ij}(l)\) from the case of serial independence between \(i\) and \(j\) at lag \(l\).
Cramer's \(v\) summarizes, for each lag \(l\in\mathbb{N}\), the serial dependence of a categorical process over all pairs \((i,j)\). However, this quantity is not appropriate for characterizing a given stochastic process, since different processes can exhibit the same value of \(v(l)\). A better way to characterize the process \(X_{t}\) is by considering the matrix \(\boldsymbol{V}(l)=\big{(}V_{ij}(l)\big{)}_{1\leq i,j\leq r}\), where
\[V_{ij}(l)=\frac{(p_{ij}(l)-\pi_{i}\pi_{j})^{2}}{\pi_{i}\pi_{j}}. \tag{3}\]
In this way, the \(r^{2}\) elements in the summation of (2) are separately considered, and a much richer picture of the underlying dependence structure of \(X_{t}\) is available.
The elements of the matrix \(\boldsymbol{V}(l)\) give information about the so-called _unsigned_ dependence of the process. However, it is often useful to know whether a process tends to stay in the state it has reached or whether, on the contrary, the repetition of the same state after \(l\) steps is infrequent. This motivates the concept of _signed_ dependence, which arises as an analogue of the autocorrelation function of a real-valued process, since that quantity can take either positive or negative values. The reader is referred to [13, 11] for more details about the concepts of unsigned and signed serial dependence.
Since \(\boldsymbol{V}(l)\) does not shed light on the signed dependence patterns, it is valuable to complement the information contained in this matrix with features describing signed dependence. In this regard, a common measure of signed serial dependence at lag \(l\) is Cohen's \(\kappa\), which takes the form
\[\kappa(l)=\frac{\sum_{j=1}^{r}(p_{jj}(l)-\pi_{j}^{2})}{1-\sum_{j=1}^{r}\pi_{j} ^{2}}. \tag{4}\]
Proceeding as with \(v(l)\), the quantity \(\kappa(l)\) can be decomposed in order to obtain a more detailed representation of the signed dependence pattern of the process. In this way, we consider the vector \(\boldsymbol{\mathcal{K}}(l)=\big{(}\mathcal{K}_{1}(l),\ldots,\mathcal{K}_{r}( l)\big{)}\), where each \(\mathcal{K}_{i}(l)\), for \(i=1,\ldots,r\), is defined as
\[\mathcal{K}_{i}(l)=\frac{p_{ii}(l)-\pi_{i}^{2}}{1-\sum_{j=1}^{r}\pi_{j}^{2}}. \tag{5}\]
In practice, the matrix \(\boldsymbol{V}(l)\) and the vector \(\boldsymbol{\mathcal{K}}(l)\) must be estimated from a \(T\)-length realization of the process, denoted by \((x_{1},\ldots,x_{T})\). To this aim, we consider estimators of \(\pi_{i}\) and \(p_{ij}(l)\), denoted by \(\widehat{\pi}_{i}\) and \(\widehat{p}_{ij}(l)\), respectively, defined as
\[\widehat{\pi}_{i}=\frac{N_{i}}{T}\ \ \text{and}\ \ \widehat{p}_{ij}(l)=\frac{N_{ ij}(l)}{T-l}, \tag{6}\]
where \(N_{i}\) is the number of elements \(x_{t}\) equal to \(i\) in the realization \((x_{1},\ldots,x_{T})\), and \(N_{ij}(l)\) is the number of pairs \((x_{t},x_{t-l})=(i,j)\) in the realization \((x_{1},\ldots,x_{T})\). Hence, estimates of \(\boldsymbol{V}(l)\) and \(\boldsymbol{\mathcal{K}}(l)\), denoted by \(\widehat{\boldsymbol{V}}(l)\) and \(\widehat{\boldsymbol{\mathcal{K}}}(l)\), respectively, can be obtained by considering the estimates \(\widehat{\pi}_{i}\) and \(\widehat{p}_{ij}(l)\) in (3) and (5). This leads directly to estimates of \(v(l)\) and \(\kappa(l)\), denoted by \(\widehat{v}(l)\) and \(\widehat{\kappa}(l)\), respectively, whose asymptotic distributions have been studied for the i.i.d. case by [28] and [29], respectively. Note that, by considering \(\widehat{\boldsymbol{V}}(l)\) and \(\widehat{\boldsymbol{\mathcal{K}}}(l)\), a complete picture of the serial dependence patterns of a CTS is provided.
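To fix ideas, the following minimal Python sketch (our own illustration, not part of the original paper) computes the estimates in (6) together with \(\widehat{\boldsymbol{V}}(l)\), \(\widehat{\boldsymbol{\mathcal{K}}}(l)\), \(\widehat{v}(l)\) and \(\widehat{\kappa}(l)\). It assumes the realization is a NumPy array of integer codes in \(\{1,\ldots,r\}\) and that every category has positive estimated marginal probability; all function names are ours.

```python
import numpy as np

def marginal_probs(x, r):
    """Estimate pi_i = N_i / T for the categories 1, ..., r (eq. (6))."""
    T = len(x)
    return np.array([np.sum(x == i) / T for i in range(1, r + 1)])

def lagged_joint_probs(x, r, l):
    """Estimate p_ij(l) = N_ij(l) / (T - l), counting pairs (x_t, x_{t-l}) = (i, j)."""
    T = len(x)
    p = np.zeros((r, r))
    for t in range(l, T):
        p[x[t] - 1, x[t - l] - 1] += 1.0
    return p / (T - l)

def dependence_features(x, r, l):
    """Return (V_hat, K_hat, v_hat, kappa_hat) at lag l, following (2)-(5)."""
    pi = marginal_probs(x, r)
    p = lagged_joint_probs(x, r, l)
    indep = np.outer(pi, pi)              # pi_i * pi_j, the independence benchmark
    V = (p - indep) ** 2 / indep          # matrix V(l), eq. (3)
    v = np.sqrt(V.sum() / (r - 1))        # Cramer's v, eq. (2)
    denom = 1.0 - np.sum(pi ** 2)
    K = (np.diag(p) - pi ** 2) / denom    # vector K(l), eq. (5)
    kappa = K.sum()                       # Cohen's kappa, eq. (4)
    return V, K, v, kappa
```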
An alternative way of describing the dependence structure of the process \(\{X_{t},t\in\mathbb{Z}\}\) is by taking into consideration its equivalent representation as a multivariate binary process. The so-called _binarization_ of \(\{X_{t},t\in\mathbb{Z}\}\) is obtained as follows. Let \(\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{r}\in\{0,1\}^{r}\) be unit vectors such that \(\boldsymbol{e}_{k}\) has all its entries equal to zero except for a one in the \(k\)th position, \(k=1,\ldots,r\). Then, the binarization of \(\{X_{t},t\in\mathbb{Z}\}\) is given by the process \(\{\boldsymbol{Y}_{t}=(Y_{t,1},\ldots,Y_{t,r}),t\in\mathbb{Z}\}\) such that \(\boldsymbol{Y}_{t}=\boldsymbol{e}_{j}\) if \(X_{t}=j\). For fixed \(l\in\mathbb{Z}\) and \(i,j\in\mathcal{V}\), consider the correlation
\[\phi_{ij}(l)=Corr(Y_{t,i},Y_{t-l,j}), \tag{7}\]
which measures linear dependence between the \(i\)th and \(j\)th categories with respect to the lag \(l\). According to Theorem 1 in [11], the quantity \(\phi_{ij}(l)\) describes both the signed and unsigned dependence patterns of a categorical process. Moreover, this quantity can be written as
\[\phi_{ij}(l)=\frac{p_{ij}(l)-\pi_{i}\pi_{j}}{\sqrt{\pi_{i}(1-\pi_{i})\pi_{j}(1- \pi_{j})}}. \tag{8}\]
Based on previous comments, a complete description of process \(X_{t}\) can be obtained by considering the matrix \(\boldsymbol{\Phi}(l)=\left(\phi_{ij}(l)\right)_{1\leq i,j\leq r}\). This matrix can be directly estimated by means of \(\widehat{\boldsymbol{\Phi}}(l)=\left(\widehat{\phi}_{ij}(l)\right)_{1\leq i,j \leq r}\), where the estimates \(\widehat{\phi}_{ij}(l)\) are computed as
\[\widehat{\phi}_{ij}(l)=\frac{\widehat{p}_{ij}(l)-\widehat{\pi}_{i}\widehat{ \pi}_{j}}{\sqrt{\widehat{\pi}_{i}(1-\widehat{\pi}_{i})\widehat{\pi}_{j}(1- \widehat{\pi}_{j})}}, \tag{9}\]
with \(\widehat{\pi}_{i}\) (\(\widehat{\pi}_{j}\)) and \(\widehat{p}_{ij}(l)\) given in (6).
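Reusing the helpers from the sketch above, \(\widehat{\boldsymbol{\Phi}}(l)\) can then be computed entrywise via (9), without explicitly building the binarized process (again a minimal illustration under the same assumptions):

```python
def phi_matrix(x, r, l):
    """Estimate Phi(l) via eq. (9); the binarization Y_t never needs to be formed."""
    pi = marginal_probs(x, r)
    p = lagged_joint_probs(x, r, l)
    scale = np.sqrt(np.outer(pi * (1 - pi), pi * (1 - pi)))
    return (p - np.outer(pi, pi)) / scale
```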
Note that all the previously introduced features are well-defined for any stationary process. However, when assuming a specific type of parametric model, we can describe the process \(X_{t}\) by means of the corresponding vector of parameters, denoted by \(\boldsymbol{\theta}\). For instance, if \(X_{t}\) is a Markov chain (MC), then \(\boldsymbol{\theta}\) is given by the vectorized version of the transition probability matrix. When dealing with a realization of the process, the vector \(\boldsymbol{\theta}\) must be estimated in a specific way, e.g., via maximum likelihood estimation (MLE), giving rise to the vector of estimated parameters \(\widehat{\boldsymbol{\theta}}\).
### Three distances between categorical processes
Hereafter, \(\{X_{t}^{(1)},t\in\mathbb{Z}\}\) and \(\{X_{t}^{(2)},t\in\mathbb{Z}\}\) (or just \(X_{t}^{(1)}\) and \(X_{t}^{(2)}\)) denote two independent categorical stochastic processes with the same properties as the process \(\{X_{t},t\in\mathbb{Z}\}\) introduced above. Similarly, \(\boldsymbol{X}_{T}{}^{(1)}=\left(x_{1}^{(1)},\ldots,x_{T}^{(1)}\right)\) and \(\boldsymbol{X}_{T}{}^{(2)}=\left(x_{1}^{(2)},\ldots,x_{T}^{(2)}\right)\) denote two realizations of length \(T\) from processes \(X_{t}^{(1)}\) and \(X_{t}^{(2)}\), respectively. In addition, the superscript \((i)\) is employed to indicate that a specific feature (estimate) is associated with process \(X_{t}^{(i)}\) (realization \(\boldsymbol{X}_{T}{}^{(i)}\)), \(i=1,2\). For instance, \(\pi_{j}^{(1)}\) denotes the marginal probability for the \(j\)th category in process \(X_{t}^{(1)}\), and \(\widehat{\pi}_{j}^{(1)}\) denotes the estimate of such probability according to the realization \(\boldsymbol{X}_{T}{}^{(1)}\).
According to the model-free features introduced in Section 2.1 (see (3), (5) and (7)), and following [11], one can define two distance measures between categorical stochastic processes. The first metric, denoted by \(d_{CC}\), is based on Cramer's \(v\) and Cohen's \(\kappa\), while the second distance, denoted by \(d_{B}\), relies on the binarization of the processes. Specifically, the dissimilarities \(d_{CC}\) and \(d_{B}\) are defined as follows:
\[d_{CC}(X_{t}^{(1)},X_{t}^{(2)})=\sum_{k=1}^{L}\Big[\big\|vec\big(\boldsymbol{V}(l_{k})^{(1)}-\boldsymbol{V}(l_{k})^{(2)}\big)\big\|^{2}+\big\|\boldsymbol{\mathcal{K}}(l_{k})^{(1)}-\boldsymbol{\mathcal{K}}(l_{k})^{(2)}\big\|^{2}\Big]+\big\|\boldsymbol{\pi}^{(1)}-\boldsymbol{\pi}^{(2)}\big\|^{2}, \tag{10}\]

\[d_{B}(X_{t}^{(1)},X_{t}^{(2)})=\sum_{k=1}^{L}\big\|vec\big(\boldsymbol{\Phi}(l_{k})^{(1)}-\boldsymbol{\Phi}(l_{k})^{(2)}\big)\big\|^{2}+\big\|\boldsymbol{\pi}^{(1)}-\boldsymbol{\pi}^{(2)}\big\|^{2}, \tag{11}\]
where the operator \(vec(\cdot)\) transforms a matrix into a vector by stacking its columns and \(\mathcal{L}=\{l_{1},\ldots,l_{L}\}\) is a set of lags determined by the user. The metric \(d_{CC}\) combines the features \(V_{ij}(l)\) in (3), which capture unsigned dependence, with the quantities \(\mathcal{K}_{i}(l)\) in (5), which capture signed dependence, thus taking both types of dependence into account through separate feature sets. The distance \(d_{B}\), in turn, relies on the features \(\phi_{ij}(l)\), which jointly describe both types of dependence, thus evaluating discrepancy between the whole serial dependence patterns of the series. Note that a term measuring discrepancy between the marginal distributions appears in the definition of both metrics. It is worth highlighting that this term improves the discriminative ability of both dissimilarities (see Remark 4 in Section 2 of [11]).
Both metrics \(d_{CC}\) and \(d_{B}\) are defined under the general assumption of stationarity. An alternative way of assessing discrepancy between both processes is by assuming a common parametric model and evaluating dissimilarity between the vectors of model parameters. The corresponding metric, denoted by \(d_{MLE}\), is defined as
\[d_{MLE}(X_{t}^{(1)},X_{t}^{(2)})=\left\|\mathbf{\theta}^{(1)}-\mathbf{\theta}^{(2)} \right\|^{2}. \tag{12}\]
Note that, in practice, the three dissimilarities previously introduced must be properly estimated from realizations \(\mathbf{X}_{T}^{(1)}\) and \(\mathbf{X}_{T}^{(2)}\), which leads to estimates of \(d_{CC}\), \(d_{B}\) and \(d_{MLE}\) given by
\[\widehat{d}_{CC}(X_{t}^{(1)},X_{t}^{(2)})=\sum_{k=1}^{L}\Big[\big\|vec\big(\widehat{\boldsymbol{V}}(l_{k})^{(1)}-\widehat{\boldsymbol{V}}(l_{k})^{(2)}\big)\big\|^{2}+\big\|\widehat{\boldsymbol{\mathcal{K}}}(l_{k})^{(1)}-\widehat{\boldsymbol{\mathcal{K}}}(l_{k})^{(2)}\big\|^{2}\Big]+\big\|\widehat{\boldsymbol{\pi}}^{(1)}-\widehat{\boldsymbol{\pi}}^{(2)}\big\|^{2}, \tag{13}\]

\[\widehat{d}_{B}(X_{t}^{(1)},X_{t}^{(2)})=\sum_{k=1}^{L}\big\|vec\big(\widehat{\boldsymbol{\Phi}}(l_{k})^{(1)}-\widehat{\boldsymbol{\Phi}}(l_{k})^{(2)}\big)\big\|^{2}+\big\|\widehat{\boldsymbol{\pi}}^{(1)}-\widehat{\boldsymbol{\pi}}^{(2)}\big\|^{2}, \tag{14}\]

\[\widehat{d}_{MLE}(X_{t}^{(1)},X_{t}^{(2)})=\big\|\widehat{\boldsymbol{\theta}}^{(1)}-\widehat{\boldsymbol{\theta}}^{(2)}\big\|^{2}, \tag{15}\]
respectively, where \(\widehat{\mathbf{\pi}}^{(i)}=(\widehat{\pi}_{1}^{(i)},\ldots,\widehat{\pi}_{r}^{(i )})\), \(i=1,2\). Distances \(\widehat{d}_{CC}\), \(\widehat{d}_{B}\) and \(\widehat{d}_{MLE}\) have been used in [11] to perform clustering of CTS. Specifically, their behavior was analyzed in a broad simulation study involving several types of
categorical models, and the advantages and disadvantages of each metric were discussed. In short, the metrics \(\widehat{d}_{CC}\) and \(\widehat{d}_{B}\) showed better clustering effectiveness than the distance \(\widehat{d}_{MLE}\), even though the latter takes advantage of assuming the true generating mechanism, which is not realistic in practice.
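As an illustration, \(\widehat{d}_{CC}\) in (13) and \(\widehat{d}_{B}\) in (14) can be coded on top of the sketches of Section 2.1 (our own function names; we use the fact that the squared Euclidean norm of \(vec(\cdot)\) of a matrix difference is simply the sum of its squared entries):

```python
def d_cc(x1, x2, r, lags):
    """Estimated d_CC, eq. (13): V- and K-feature discrepancies plus marginals."""
    d = 0.0
    for l in lags:
        V1, K1, _, _ = dependence_features(x1, r, l)
        V2, K2, _, _ = dependence_features(x2, r, l)
        d += np.sum((V1 - V2) ** 2) + np.sum((K1 - K2) ** 2)
    pi1, pi2 = marginal_probs(x1, r), marginal_probs(x2, r)
    return d + np.sum((pi1 - pi2) ** 2)

def d_b(x1, x2, r, lags):
    """Estimated d_B, eq. (14): Phi-feature discrepancies plus marginals."""
    d = sum(np.sum((phi_matrix(x1, r, l) - phi_matrix(x2, r, l)) ** 2)
            for l in lags)
    pi1, pi2 = marginal_probs(x1, r), marginal_probs(x2, r)
    return d + np.sum((pi1 - pi2) ** 2)
```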
According to the form of metrics \(\widehat{d}_{CC}\), \(\widehat{d}_{B}\) and \(\widehat{d}_{MLE}\) and the null hypothesis in (1), a reasonable decision rule would rely on rejecting this hypothesis for large values of the considered distances. To that aim, a proper approximation of the null distribution of these metrics is needed.
## 3 Bootstrap tests for categorical series
Bootstrap methods provide a powerful way of approximating the null distribution of the distances \(\widehat{d}_{CC}\), \(\widehat{d}_{B}\) and \(\widehat{d}_{MLE}\). In this section, three resampling procedures based on bootstrapping these metrics are proposed. The first test is a parametric method based on generating bootstrap replicates by considering the average vector of model coefficients estimated via maximum likelihood. The remaining two approaches rely on well-known resampling methods for dependent data. The key principle is to draw pseudo-series capturing the dependence structure without assuming any parametric model. It is worth highlighting that the proposed bootstrap approaches have already been considered by [30] in the context of multivariate time series.
### A test based on estimated model coefficients
The first test we propose is a parametric procedure. Specifically, for the \(T\)-length realizations \(\boldsymbol{X}_{T}{}^{(1)}=\left(x_{1}^{(1)},\ldots,x_{T}^{(1)}\right)\) and \(\boldsymbol{X}_{T}{}^{(2)}=\left(x_{1}^{(2)},\ldots,x_{T}^{(2)}\right)\), and a distance measure between CTS, \(\widehat{d}\in\{\widehat{d}_{CC},\widehat{d}_{B},\widehat{d}_{MLE}\}\), the method is based on the following steps:
Step 1. Select a specific class of categorical model (e.g., an MC of order 1).
Step 2. For each one of the realizations \(\boldsymbol{X}_{T}{}^{(1)}\) and \(\boldsymbol{X}_{T}{}^{(2)}\), estimate via maximum likelihood the vector of parameters for the categorical model selected in the previous step, which results in the vectors \(\widehat{\boldsymbol{\theta}}^{(1)}\) and \(\widehat{\boldsymbol{\theta}}^{(2)}\), respectively. Compute the vector of average estimates as \(\widetilde{\boldsymbol{\theta}}=\frac{\widehat{\boldsymbol{\theta}}^{(1)}+ \widehat{\boldsymbol{\theta}}^{(2)}}{2}\).
Step 3. Simulate two independent time series of length \(T\), \(\boldsymbol{X}_{T}{}^{(1)*}\) and \(\boldsymbol{X}_{T}{}^{(2)*}\), by considering the categorical model selected in the first step with parameters given by \(\widetilde{\boldsymbol{\theta}}\). Then, obtain the bootstrap version \(\widehat{d}^{*}\) of \(\widehat{d}\) based on the series \(\boldsymbol{X}_{T}{}^{(1)*}\) and \(\boldsymbol{X}_{T}{}^{(2)*}\).
Step 4. Repeat Step 3 a large number \(B\) of times to obtain the bootstrap replicates \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\).
Step 5. Given a significance level \(\alpha\), compute the quantile of order \(1-\alpha\) based on the sample \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\), denoted by \(q_{1-\alpha}^{*}\). Then, the decision rule consists of rejecting \(H_{0}\) if \(\widehat{d}(X_{t}^{(1)},X_{t}^{(2)})>q_{1-\alpha}^{*}\).
Note that the consideration of the average vector \(\widetilde{\boldsymbol{\theta}}\) in the previous procedure allows for a proper approximation of the distribution of \(\widehat{d}\) under the null hypothesis regardless of whether this hypothesis is true.
From now on, we will refer to the test presented in this section as bootstrap averaging (BA).
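For concreteness, the following sketch implements the BA procedure for an order-1 MC (our own illustration under simplifying assumptions: the initial state of each simulated series is drawn uniformly, and every state is assumed to be visited in both realizations so that the MLE, the matrix of empirical transition frequencies, is well defined). The argument `d_hat` can be any of the distance functions sketched in Section 2.2.

```python
def mc_mle(x, r):
    """MLE of an order-1 MC: the row-stochastic matrix of transition frequencies."""
    counts = np.zeros((r, r))
    for t in range(1, len(x)):
        counts[x[t - 1] - 1, x[t] - 1] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)  # assumes all states visited

def mc_simulate(P, T, rng):
    """Simulate T observations from a row-stochastic transition matrix P."""
    r = P.shape[0]
    states = np.arange(1, r + 1)
    x = np.empty(T, dtype=int)
    x[0] = rng.integers(1, r + 1)            # simplification: uniform initial state
    for t in range(1, T):
        x[t] = rng.choice(states, p=P[x[t - 1] - 1])
    return x

def ba_test(x1, x2, r, lags, d_hat, B=500, alpha=0.05, seed=0):
    """BA test: average the MLEs (Step 2), resample (Steps 3-4), decide (Step 5)."""
    rng = np.random.default_rng(seed)
    T = len(x1)
    P_avg = 0.5 * (mc_mle(x1, r) + mc_mle(x2, r))
    d_obs = d_hat(x1, x2, r, lags)
    d_star = np.array([d_hat(mc_simulate(P_avg, T, rng),
                             mc_simulate(P_avg, T, rng), r, lags)
                       for _ in range(B)])
    return d_obs > np.quantile(d_star, 1 - alpha)  # reject H0 if above q*_{1-alpha}
```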
### A test based on the moving blocks bootstrap
In this section, we introduce an alternative bootstrap test based on a modification of the classical MBB method proposed by [22] and [23]. MBB generates replicates of the time series by joining blocks of fixed length drawn randomly with replacement from among the blocks of the original realizations. This approach makes it possible to mimic the underlying dependence structure without assuming specific parametric models for the generating processes.
Given the realizations \(\mathbf{X}_{T}{}^{(1)}=\big{(}x_{1}^{(1)},\ldots,x_{T}^{(1)}\big{)}\) and \(\mathbf{X}_{T}{}^{(2)}=\big{(}x_{1}^{(2)},\ldots,x_{T}^{(2)}\big{)}\), and a distance measure between CTS, \(\widehat{d}\in\{\widehat{d}_{CC},\widehat{d}_{B},\widehat{d}_{MLE}\}\), the procedure proceeds as follows:
Step 1. Fix a positive integer, \(b\), representing the block size, and take \(k\) equal to the smallest integer greater than or equal to \(T/b\).
Step 2. For each realization \(\mathbf{X}_{T}{}^{(i)}\), define the block \(\mathbf{B}_{j}^{(i)}=\big{(}x_{j}^{(i)},\ldots,x_{j+b-1}^{(i)}\big{)}\), for \(j=1,\ldots,q\), with \(q=T-b+1\). Let \(\overline{\mathbf{B}}=\{\mathbf{B}_{1}^{(1)},\ldots,\mathbf{B}_{q}^{(1)},\mathbf{B}_{1}^{(2)},\ldots,\mathbf{B}_{q}^{(2)}\}\) be the set of all blocks, both those coming from \(\mathbf{X}_{T}{}^{(1)}\) and those coming from \(\mathbf{X}_{T}{}^{(2)}\).
Step 3. Draw two sets of \(k\) blocks, \(\mathbf{\xi}^{(i)}=\big{(}\mathbf{\xi}_{1}^{(i)},\ldots,\mathbf{\xi}_{k}^{(i)}\big{)}\), \(i=1,2\), with equiprobable distribution from \(\overline{\mathbf{B}}\). Note that each \(\mathbf{\xi}_{j}^{(i)}\), \(j=1,\ldots,k\), \(i=1,2\), is a \(b\)-length CTS, let us say \((\xi_{1j}^{(i)},\xi_{2j}^{(i)},\ldots,\xi_{bj}^{(i)})\).
Step 4. For each \(i=1,2\), construct the pseudo-series \(\mathbf{X}_{T}{}^{(i)*}\) by taking the first \(T\) elements of:
\[\mathbf{\xi}^{(i)}=(\xi_{11}^{(i)},\xi_{21}^{(i)},\ldots,\xi_{b1}^{(i)},\xi_{12}^ {(i)},\xi_{22}^{(i)},\ldots,\xi_{b2}^{(i)},\ldots,\xi_{1k}^{(i)},\xi_{2k}^{(i )},\ldots,\xi_{bk}^{(i)}).\]
Then, obtain the bootstrap version \(\widehat{d}^{*}\) of \(\widehat{d}\) based on the pseudo-series \(\mathbf{X}_{T}{}^{(1)*}\) and \(\mathbf{X}_{T}{}^{(2)*}\).
Step 5. Repeat Steps 3 and 4 a large number \(B\) of times to obtain the bootstrap replicates \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\).
Step 6. Given a significance level \(\alpha\), compute the quantile of order \(1-\alpha\) based on the sample \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\), denoted by \(q_{1-\alpha}^{*}\). Then, the decision rule consists of rejecting \(H_{0}\) if \(\widehat{d}(X_{t}^{(1)},X_{t}^{(2)})>q_{1-\alpha}^{*}\).
Note that, by considering the whole set of blocks \(\overline{\mathbf{B}}\) in Step 2, both pseudo-time series \(\mathbf{X}_{T}{}^{(1)*}\) and \(\mathbf{X}_{T}{}^{(2)*}\) are expected to contain information about the original series \(\mathbf{X}_{T}{}^{(1)}\) and \(\mathbf{X}_{T}{}^{(2)}\) in equal measure. This way, the bootstrap procedure is able to correctly approximate the distribution of the test statistic \(\widehat{d}\) under the null hypothesis even if this hypothesis is not true.
From now on, we will refer to the test presented in this section as MBB.
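The pseudo-series generation (Steps 2-4) can be sketched as follows for two equal-length NumPy arrays, reusing the import from the earlier sketches; the test itself then proceeds exactly as in Steps 5-6, with repeated calls to `mbb_pair` (a name of our own) producing the bootstrap replicates:

```python
def mbb_pair(x1, x2, b, rng):
    """One MBB draw: two pseudo-series built from the pooled set of b-blocks."""
    T = len(x1)
    k = int(np.ceil(T / b))
    # Step 2: all overlapping blocks of length b from both realizations
    blocks = [x[j:j + b] for x in (x1, x2) for j in range(T - b + 1)]
    def one_series():
        idx = rng.integers(0, len(blocks), size=k)   # Step 3: k equiprobable blocks
        return np.concatenate([blocks[i] for i in idx])[:T]  # Step 4: first T values
    return one_series(), one_series()
```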
### A test based on the stationary bootstrap
The third bootstrap mechanism to approximate the distribution of \(\widehat{d}\in\{\widehat{d}_{CC},\widehat{d}_{B},\widehat{d}_{MLE}\}\) adapts the classical SB [24]. This resampling method is aimed at overcoming the lack of stationarity of the MBB procedure. Note that \(d_{CC}\) and \(d_{B}\) are well-defined only for stationary processes, so it is desirable that a bootstrap technique based on estimates of these metrics generates stationary pseudo-series.
For realizations \(\boldsymbol{X}_{T}{}^{(1)}=\left(x_{1}^{(1)},\ldots,x_{T}^{(1)}\right)\) and \(\boldsymbol{X}_{T}{}^{(2)}=\left(x_{1}^{(2)},\ldots,x_{T}^{(2)}\right)\), the resampling method proceeds as follows:
Step 1. Fix a real number \(p\in[0,1]\).
Step 2. For \(i=1,2\), draw randomly one observation from the pooled series \(\widetilde{\boldsymbol{X}}=\left(\boldsymbol{X}_{T}{}^{(1)},\boldsymbol{X}_{T}{}^{(2)}\right)\). The drawn observations are of the form \(x_{j^{i}}^{(k^{i})}\) for some \(k^{i}\in\{1,2\}\) and \(j^{i}\in\{1,\ldots,T\}\), \(i=1,2\). Then, \(x_{j^{i}}^{(k^{i})}\) is taken as the first element of the pseudo-series \(\boldsymbol{X}_{T}{}^{(i)*}\), denoted by \(x_{1}{}^{(i)*}\).
Step 3. Once \(x_{l}{}^{(i)*}=x_{j^{i}}^{(k^{i})}\) has been obtained, for \(l<T\) and \(i=1,2\), the next bootstrap replication \(x_{l+1}^{(i)*}\) is defined as \(x_{j^{i}+1}^{(k^{i})}\) with probability \(1-p\), and is randomly drawn from \(\widetilde{\boldsymbol{X}}\) with probability \(p\). When \(j^{i}=T\), the selected observation is \(x_{1}^{(2)}\) if \(k^{i}=1\) and \(x_{1}^{(1)}\) if \(k^{i}=2\).
Step 4. Repeat Step 3 until the pseudo-series \(\boldsymbol{X}_{T}{}^{(1)*}\) and \(\boldsymbol{X}_{T}{}^{(2)*}\) contain \(T\) observations. Based on these pseudo-series, compute the bootstrap version \(\widehat{d}^{*}\) of \(\widehat{d}\).
Step 5. Repeat Steps 2-4 a large number \(B\) of times to obtain the bootstrap replicates \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\).
Step 6. Given a significance level \(\alpha\), compute the quantile of order \(1-\alpha\) based on the sample \(\widehat{d}^{(1)*},\ldots,\widehat{d}^{(B)*}\), denoted by \(q_{1-\alpha}^{*}\). Then, the decision rule consists of rejecting \(H_{0}\) if \(\widehat{d}(X_{t}^{(1)},X_{t}^{(2)})>q_{1-\alpha}^{*}\).
It is worth remarking that, as with the MBB procedure, a proper approximation of the null distribution of \(\widehat{d}\) is also expected here due to the consideration of the pooled time series \(\widetilde{\boldsymbol{X}}\) in the generating mechanism.
From now on, we will refer to the test presented in this section as SB.
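A sketch of the pseudo-series generation (Steps 2-3) follows; concatenating the two realizations makes the wrap-around rule of Step 3 automatic, since the element after \(x_{T}^{(1)}\) in the pooled array is \(x_{1}^{(2)}\), while the modulo operation sends the element after \(x_{T}^{(2)}\) back to \(x_{1}^{(1)}\):

```python
def sb_series(x_pooled, T, p, rng):
    """One SB pseudo-series of length T from the pooled series (Steps 2-3)."""
    n = len(x_pooled)
    out = np.empty(T, dtype=x_pooled.dtype)
    j = rng.integers(0, n)               # Step 2: random starting observation
    out[0] = x_pooled[j]
    for t in range(1, T):
        if rng.random() < p:             # with probability p, restart the block
            j = rng.integers(0, n)
        else:                            # otherwise continue it, wrapping around
            j = (j + 1) % n
        out[t] = x_pooled[j]
    return out

# one bootstrap draw of the pair, with x_pooled = np.concatenate([x1, x2]):
# x1_star, x2_star = sb_series(x_pooled, T, p, rng), sb_series(x_pooled, T, p, rng)
```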
## 4 Simulation study
In this section, we describe a simulation study conducted to assess the finite-sample performance of the testing procedures presented in Section 3. Note that we are considering three dissimilarities and three resampling schemes, which gives rise to 9 hypothesis tests to be evaluated. First we describe the simulation mechanism and then we discuss the main results. Finally, some additional analyses are performed to examine the procedures in greater detail.
### Experimental design
The behavior of the methods was examined with pairs of CTS realizations, \(\boldsymbol{X}_{T}{}^{(1)}=\left(x_{1}^{(1)},\ldots,x_{T}^{(1)}\right)\) and \(\boldsymbol{X}_{T}{}^{(2)}=\left(x_{1}^{(2)},\ldots,x_{T}^{(2)}\right)\), simulated from categorical processes selected to cover different dependence structures. Specifically, three types of generating models were considered, namely MC, HMM, and new discrete ARMA (NDARMA) processes. In all cases, the deviation from the null hypothesis in (1) was established in accordance with differences in the coefficients of the generating processes. Specifically, the degree of deviation between the simulated realizations was regulated by a specific parameter (\(\delta\)) included in the formulation of the models. The specific scenarios and generating processes are given below.
**Scenario 1**. Hypothesis testing for MC. Consider three-state MC models of order 1 given by the matrix of transition probabilities
\[\begin{pmatrix}0.1+\delta&0.1+\delta&0.1+\delta\\ 0.3+\delta&0.3+\delta&0.3+\delta\\ 0.6-2\delta&0.6-2\delta&0.6-2\delta\end{pmatrix}.\]
**Scenario 2**. Hypothesis testing for HMM. Consider three-state HMM models of order 1 defined by the same transition and emission probability matrix, which is given by
\[\begin{pmatrix}0.3+\delta&0.3+\delta&0.3+\delta\\ 0.3+\delta&0.3+\delta&0.3+\delta\\ 0.4-2\delta&0.4-2\delta&0.4-2\delta\end{pmatrix}.\]
**Scenario 3**. Hypothesis testing for NDARMA models. Let \(\{X_{t},t\in\mathbb{Z}\}\) and \(\{\epsilon_{t},t\in\mathbb{Z}\}\) be two count processes with range \(\{1,\ldots,r\}\) and following the equation
\[X_{t}=\alpha_{t,1}X_{t-1}+\ldots+\alpha_{t,p}X_{t-p}+\beta_{t,0}\epsilon_{t}+ \ldots+\beta_{t,q}\epsilon_{t-q}, \tag{16}\]
where \(\{\epsilon_{t},t\in\mathbb{Z}\}\) is i.i.d. with \(P(\epsilon_{t}=i)=\pi_{i}\), independent of \((X_{s})_{s<t}\), and the i.i.d. multinomial random vectors
\[(\alpha_{t,1},\ldots,\alpha_{t,p},\beta_{t,0},\ldots,\beta_{t,q})\sim\text{ MULT}(1;\phi_{1},\ldots,\phi_{p},\varphi_{0},\ldots,\varphi_{q}), \tag{17}\]
are independent of \(\{\epsilon_{t},t\in\mathbb{Z}\}\) and \((X_{s})_{s<t}\). The considered models are three-state NDARMA(1,0) processes with marginal probabilities given by the vector \((\pi_{1},\pi_{2},\pi_{3})=(0.2,0.3-\delta,0.5+\delta)\) and multinomial probabilities given by the vector \((\phi_{1},\varphi_{0})=(0.6-\delta,0.4+\delta)\).
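In an NDARMA(1,0) process the multinomial vector \((\alpha_{t,1},\beta_{t,0})\) selects, at each \(t\), either the previous value (with probability \(\phi_{1}\)) or a fresh innovation (with probability \(\varphi_{0}=1-\phi_{1}\)), which suggests the following simulation sketch (ours; drawing the initial value from \(\boldsymbol{\pi}\) keeps the marginal distribution equal to \(\boldsymbol{\pi}\) at every \(t\)):

```python
def ndarma10_simulate(pi, phi1, T, rng):
    """Simulate an NDARMA(1,0) series: X_t = X_{t-1} w.p. phi1, else a new eps_t."""
    states = np.arange(1, len(pi) + 1)
    x = np.empty(T, dtype=int)
    x[0] = rng.choice(states, p=pi)          # eps_0 already has the marginal law pi
    for t in range(1, T):
        if rng.random() < phi1:              # alpha_{t,1} = 1: copy the past value
            x[t] = x[t - 1]
        else:                                # beta_{t,0} = 1: draw a fresh innovation
            x[t] = rng.choice(states, p=pi)
    return x
```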
In the previous scenarios, \(\boldsymbol{X}_{T}{}^{(1)}\) is always generated by taking \(\delta=0\), while \(\boldsymbol{X}_{T}{}^{(2)}\) is generated using different values of \(\delta\), thus yielding simulation schemes under the null hypothesis (when \(\delta=0\) also for \(\boldsymbol{X}_{T}{}^{(2)}\)) and under the alternative otherwise. To empirically assess the size and power behavior of the different tests, a number of \(N=1000\) replications of pairs of realizations
\({\mathbf{X}_{T}}^{(1)}\) and \({\mathbf{X}_{T}}^{(2)}\) coming from the processes at each scenario were obtained. Realizations \({\mathbf{X}_{T}}^{(2)}\) were generated by considering \(\delta\in\{0,0.05,0.075,0.10\}\), \(\{0,0.025,0.05,0.075\}\) and \(\{0,0.10,0.15,0.20\}\) in Scenarios 1, 2 and 3, respectively.
For a pair of realizations associated with a specific value of \(\delta\), \(B=500\) bootstrap replicates were considered to approximate the distribution of the different test statistics under the null hypothesis. Simulations were carried out for different values of \(T\), namely \(T\in\{100,200,500\}\). For the methods MBB and SB, we chose the corresponding input parameters as \(b=\lceil T^{1/3}\rceil\) and \(p=T^{-1/3}\), respectively, where \(\lceil\cdot\rceil\) denotes the ceiling function. These choices were motivated by the related literature. For instance, [31] addressed the issue of selecting \(b\) in the context of bias and variance bootstrap estimation, concluding that the optimal block size is of order \(T^{1/3}\). On the other hand, since the mean block size in SB corresponds to \(1/p\), it is reasonable to select \(p\) of order \(T^{-1/3}\). Computation of the dissimilarities \(\widehat{d}_{CC}\) and \(\widehat{d}_{B}\) was carried out by considering only one lag, i.e., \(\mathcal{L}=\{1\}\), since the first lag is enough to characterize the dependence structures of the processes in the three scenarios. Computation of the distance \(\widehat{d}_{MLE}\) was performed by considering the true class of models existing in each scenario. Note that, for each combination of metric and resampling scheme, each one of the \(N\) replications leads to a particular outcome of the decision rule for the test in (1). In all cases, the results were obtained for a significance level \(\alpha=0.05\).
### Results and discussion
Tables 1, 2 and 3 contain the rejection rates for Scenarios 1, 2 and 3, respectively. In Scenario 1, under the null hypothesis (\(\delta=0\)), all methods display rejection rates slightly above the significance level (0.05) when \(T=100\). However, when increasing the series length, the rates get close to this level. In fact, for \(T=500\), all approaches approximate the nominal size pretty well, with the tests based on the distance \(\widehat{d}_{MLE}\) being slightly conservative. On the other hand, when the null hypothesis is not true (\(\delta>0\)), there are dramatic differences in the rejection rates of the considered approaches. Specifically, the metric \(\widehat{d}_{CC}\) achieves the highest power by a wide margin, while the distance \(\widehat{d}_{B}\) gets very poor results. The metric \(\widehat{d}_{MLE}\) lies somewhere in the middle. For a given dissimilarity, there are no substantial differences among the three bootstrap techniques, although the MBB approach produces slightly higher rejection rates in most of the settings with \(\delta>0\). As expected, all methods improve their power when increasing the value of the parameter \(\delta\) and the series length.
In Scenario 2 (see Table 2), the metric \(\widehat{d}_{CC}\) exhibits again the largest power, but the differences between the considered approaches are less substantial. As in Scenario 1, the method based on moving blocks (MBB) moderately outperforms the remaining resampling techniques. Finally, in Scenario 3 (see Table 3), the results are quite similar to the ones in Scenario 1, with the metric \(\widehat{d}_{CC}\) clearly outperforming the remaining dissimilarities in most cases.
In short, the above simulation results showed that, under the null hypothesis, most methods respect the significance level rather properly when sufficiently
large values of the series length are considered. On the other hand, when the null hypothesis is not true, the test based on the metric \(\widehat{d}_{CC}\) and the bootstrap approach MBB exhibits the highest power in most settings. This fact is quite advantageous for practitioners, since neither the dissimilarity \(\widehat{d}_{CC}\) nor the resampling mechanism based on moving blocks assumes a specific class of categorical models. Moreover, as stated in Section 4.1, the rejection rates provided in Tables 1, 2 and 3 were obtained by considering the default value \(b=\lceil T^{1/3}\rceil\) for the block size in all cases, which means that no hyperparameter selection was performed for MBB.
### Further analysis
In order to provide a more comprehensive evaluation of the proposed testing methods, we extended the previous simulations by: (i) increasing the complexity of the original Scenarios 1, 2 and 3, and (ii) analyzing the impact that the parameters \(b\) and \(p\) have on the behavior of MBB and SB, respectively. Each of these points is discussed below.
#### 4.3.1 Additional scenarios
Two additional setups were constructed by increasing the complexity of Scenarios 1, 2 and 3. First, note that the series range in these scenarios was fixed to \(\mathcal{V}=\{1,2,3\}\). However, it is interesting to assess the performance of the different methods when the set \(\mathcal{V}\) contains a different number of categories. To this aim, we constructed a new simulation scenario, referred to as Scenario 4, in which the size of \(\mathcal{V}\) is randomly determined. Specifically, let \(R\) be a random variable following
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(T=100\)} & \multicolumn{3}{c|}{\(T=200\)} & \multicolumn{3}{c}{\(T=500\)} \\ & BA & MBB & SB & BA & MBB & SB & BA & MBB & SB \\ \hline \(\delta=0.000\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.058 & 0.067 & 0.069 & 0.056 & 0.066 & 0.064 & 0.053 & 0.054 & 0.052 \\ \(\widehat{d}_{B}\) & 0.067 & 0.084 & 0.086 & 0.057 & 0.072 & 0.060 & 0.054 & 0.060 & 0.053 \\ \(\widehat{d}_{MLE}\) & 0.054 & 0.059 & 0.067 & 0.050 & 0.053 & 0.063 & 0.036 & 0.044 & 0.043 \\ \hline \(\delta=0.050\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.242 & 0.313 & 0.307 & 0.411 & 0.454 & 0.410 & 0.823 & 0.854 & 0.820 \\ \(\widehat{d}_{B}\) & 0.062 & 0.078 & 0.079 & 0.059 & 0.082 & 0.083 & 0.102 & 0.121 & 0.114 \\ \(\widehat{d}_{MLE}\) & 0.125 & 0.135 & 0.101 & 0.179 & 0.202 & 0.154 & 0.461 & 0.496 & 0.353 \\ \hline \(\delta=0.075\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.474 & 0.523 & 0.501 & 0.745 & 0.799 & 0.752 & 0.995 & 0.993 & 0.996 \\ \(\widehat{d}_{B}\) & 0.089 & 0.112 & 0.121 & 0.095 & 0.127 & 0.123 & 0.275 & 0.282 & 0.265 \\ \(\widehat{d}_{MLE}\) & 0.227 & 0.249 & 0.166 & 0.406 & 0.436 & 0.283 & 0.886 & 0.899 & 0.749 \\ \hline \(\delta=0.100\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.720 & 0.763 & 0.715 & 0.963 & 0.972 & 0.961 & 1.000 & 1.000 & 1.000 \\ \(\widehat{d}_{B}\) & 0.196 & 0.208 & 0.222 & 0.508 & 0.546 & 0.513 & 0.996 & 0.997 & 0.996 \\ \(\widehat{d}_{MLE}\) & 0.425 & 0.459 & 0.263 & 0.744 & 0.783 & 0.555 & 1.000 & 1.000 & 0.991 \\ \hline \end{tabular}
\end{table}
Table 1: Simulated rejection rates in Scenario 1.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(T=100\)} & \multicolumn{3}{c|}{\(T=200\)} & \multicolumn{3}{c}{\(T=500\)} \\ & BA & MBB & SB & BA & MBB & SB & BA & MBB & SB \\ \hline \(\delta=0.000\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.061 & 0.057 & 0.045 & 0.050 & 0.056 & 0.068 & 0.051 & 0.057 & 0.053 \\ \(\widehat{d}_{B}\) & 0.055 & 0.084 & 0.071 & 0.055 & 0.072 & 0.062 & 0.054 & 0.073 & 0.055 \\ \(\widehat{d}_{MLE}\) & 0.063 & 0.055 & 0.071 & 0.050 & 0.055 & 0.052 & 0.056 & 0.054 & 0.047 \\ \hline \(\delta=0.025\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.161 & 0.203 & 0.188 & 0.255 & 0.315 & 0.262 & 0.373 & 0.452 & 0.340 \\ \(\widehat{d}_{B}\) & 0.071 & 0.089 & 0.081 & 0.145 & 0.176 & 0.147 & 0.201 & 0.252 & 0.234 \\ \(\widehat{d}_{MLE}\) & 0.103 & 0.182 & 0.150 & 0.216 & 0.293 & 0.223 & 0.313 & 0.312 & 0.326 \\ \hline \(\delta=0.050\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.267 & 0.294 & 0.280 & 0.357 & 0.389 & 0.342 & 0.593 & 0.701 & 0.650 \\ \(\widehat{d}_{B}\) & 0.145 & 0.176 & 0.134 & 0.287 & 0.298 & 0.284 & 0.465 & 0.523 & 0.434 \\ \(\widehat{d}_{MLE}\) & 0.228 & 0.244 & 0.256 & 0.327 & 0.346 & 0.312 & 0.591 & 0.673 & 0.595 \\ \hline \(\delta=0.075\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.404 & 0.452 & 0.431 & 0.661 & 0.705 & 0.654 & 0.843 & 0.964 & 0.875 \\ \(\widehat{d}_{B}\) & 0.268 & 0.297 & 0.259 & 0.476 & 0.513 & 0.487 & 0.712 & 0.779 & 0.734 \\ \(\widehat{d}_{MLE}\) & 0.358 & 0.417 & 0.401 & 0.585 & 0.685 & 0.624 & 0.813 & 0.924 & 0.825 \\ \hline \end{tabular}
\end{table}
Table 2: Simulated rejection rates in Scenario 2.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(T=100\)} & \multicolumn{3}{c|}{\(T=200\)} & \multicolumn{3}{c}{\(T=500\)} \\ & BA & MBB & SB & BA & MBB & SB & BA & MBB & SB \\ \hline \(\delta=0.00\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.081 & 0.077 & 0.067 & 0.042 & 0.039 & 0.042 & 0.045 & 0.046 & 0.042 \\ \(\widehat{d}_{B}\) & 0.071 & 0.091 & 0.081 & 0.054 & 0.058 & 0.062 & 0.048 & 0.048 & 0.052 \\ \(\widehat{d}_{MLE}\) & 0.060 & 0.074 & 0.070 & 0.045 & 0.062 & 0.051 & 0.057 & 0.060 & 0.051 \\ \hline \(\delta=0.10\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.341 & 0.407 & 0.335 & 0.640 & 0.667 & 0.623 & 0.998 & 0.997 & 0.997 \\ \(\widehat{d}_{B}\) & 0.053 & 0.076 & 0.072 & 0.067 & 0.093 & 0.089 & 0.143 & 0.163 & 0.148 \\ \(\widehat{d}_{MLE}\) & 0.179 & 0.206 & 0.196 & 0.331 & 0.366 & 0.345 & 0.760 & 0.793 & 0.771 \\ \hline \(\delta=0.15\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.665 & 0.700 & 0.625 & 0.915 & 0.920 & 0.905 & 1.000 & 1.000 & 1.000 \\ \(\widehat{d}_{B}\) & 0.078 & 0.093 & 0.080 & 0.126 & 0.140 & 0.123 & 0.390 & 0.410 & 0.405 \\ \(\widehat{d}_{MLE}\) & 0.370 & 0.424 & 0.404 & 0.673 & 0.701 & 0.678 & 0.991 & 0.996 & 0.996 \\ \hline \(\delta=0.20\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.915 & 0.925 & 0.925 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 \\ \(\widehat{d}_{B}\) & 0.124 & 0.126 & 0.143 & 0.260 & 0.292 & 0.267 & 0.800 & 0.822 & 0.820 \\ \(\widehat{d}_{MLE}\) & 0.619 & 0.659 & 0.665 & 0.940 & 0.945 & 0.945 & 1.000 & 1.000 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 3: Simulated rejection rates in Scenario 3.
a discrete uniform distribution in the set \(\{2,3,4,5\}\) and consider \(R\)-state MC models given by the following transition probability matrix of order \(R\):
\[\begin{pmatrix}\frac{1}{R}-\delta&\frac{1}{R}-\delta&\dots&\frac{1}{R}-\delta& \frac{1}{R}-\delta\\ \frac{1}{R}&\frac{1}{R}&\dots&\frac{1}{R}&\frac{1}{R}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \frac{1}{R}&\frac{1}{R}&\dots&\frac{1}{R}&\frac{1}{R}\\ \frac{1}{R}+\delta&\frac{1}{R}+\delta&\dots&\frac{1}{R}+\delta&\frac{1}{R}+ \delta\end{pmatrix}. \tag{18}\]
The simulations concerning Scenario 4 were carried out analogously to the ones described in Section 4.1 but considering \(\delta\in\{0,0.05,0.10,0.15\}\). The corresponding rejection rates are displayed in Table 4. Under the null hypothesis (\(\delta=0\)), all methods respect the significance level quite properly for \(T=500\), while the methods based on \(\widehat{d}_{CC}\) and \(\widehat{d}_{B}\) show slight overrejection for \(T\in\{100,200\}\). On the other hand, the results for \(\delta>0\) are rather different from the ones in Tables 1, 2 and 3. The dissimilarity \(\widehat{d}_{B}\) still reaches the worst results by far but, this time, there seem to be no significant differences between the distances \(\widehat{d}_{CC}\) and \(\widehat{d}_{MLE}\) in most settings. In fact, a more detailed analysis indicates that these distances get similar rejection rates for all values of \(R\).
A second additional scenario was constructed to examine the behavior of the methods when higher-order dependencies exist. In particular, the so-called Scenario 5 considers three-state NDARMA(2,0) models with marginal probabilities \((\pi_{1},\pi_{2},\pi_{3})=(0.3,0.3-\delta,0.4+\delta)\) and multinomial probabilities \((\phi_{1},\phi_{2},\varphi_{0})=(0.4-\delta,0.4-\delta,0.2+2\delta)\). Simulations were carried out this time in the same way as in the previous analyses but setting \(\delta\in\{0,0.025,0.050,0.075\}\)
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(T=100\)} & \multicolumn{3}{c|}{\(T=200\)} & \multicolumn{3}{c}{\(T=500\)} \\ & BA & MBB & SB & BA & MBB & SB & BA & MBB & SB \\ \hline \(\delta=0.00\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.091 & 0.067 & 0.057 & 0.074 & 0.063 & 0.054 & 0.053 & 0.057 & 0.052 \\ \(\widehat{d}_{B}\) & 0.062 & 0.064 & 0.067 & 0.060 & 0.066 & 0.070 & 0.055 & 0.053 & 0.062 \\ \(\widehat{d}_{MLE}\) & 0.048 & 0.047 & 0.058 & 0.048 & 0.049 & 0.053 & 0.051 & 0.048 & 0.052 \\ \hline \(\delta=0.05\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.101 & 0.095 & 0.112 & 0.203 & 0.214 & 0.176 & 0.334 & 0.375 & 0.331 \\ \(\widehat{d}_{B}\) & 0.065 & 0.056 & 0.065 & 0.056 & 0.098 & 0.060 & 0.074 & 0.092 & 0.073 \\ \(\widehat{d}_{MLE}\) & 0.134 & 0.137 & 0.132 & 0.178 & 0.175 & 0.194 & 0.346 & 0.320 & 0.317 \\ \hline \(\delta=0.10\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.194 & 0.276 & 0.234 & 0.443 & 0.470 & 0.487 & 0.843 & 0.827 & 0.804 \\ \(\widehat{d}_{B}\) & 0.104 & 0.093 & 0.091 & 0.125 & 0.122 & 0.112 & 0.151 & 0.149 & 0.152 \\ \(\widehat{d}_{MLE}\) & 0.187 & 0.273 & 0.225 & 0.364 & 0.465 & 0.451 & 0.801 & 0.824 & 0.793 \\ \hline \(\delta=0.15\) & & & & & & & & & \\ \(\widehat{d}_{CC}\) & 0.443 & 0.437 & 0.478 & 0.836 & 0.889 & 0.892 & 1.000 & 1.000 & 1.000 \\ \(\widehat{d}_{B}\) & 0.153 & 0.163 & 0.139 & 0.165 & 0.223 & 0.193 & 0.324 & 0.342 & 0.320 \\ \(\widehat{d}_{MLE}\) & 0.463 & 0.454 & 0.471 & 0.825 & 0.876 & 0.864 & 0.998 & 0.997 & 0.999 \\ \hline \end{tabular}
\end{table}
Table 4: Simulated rejection rates in Scenario 4.
and \(\mathcal{L}=\{1,2\}\) for the computation of dissimilarities \(\widehat{d}_{CC}\) and \(\widehat{d}_{B}\), since the serial dependence structures of the processes in Scenario 5 are characterized by means of the first two lags. The corresponding rejection rates are provided in Table 5. Once again, all methods respect the nominal size rather properly when \(T=500\). Dissimilarity \(\widehat{d}_{CC}\) exhibits by far the best power, and the bootstrap method MBB slightly outperforms the remaining ones in most cases.
#### 4.3.2 Analyzing the impact of \(b\) and \(p\) on MBB and SB
In order to analyze the influence of the parameters \(b\) and \(p\) on the tests based on MBB and SB, respectively, we ran some additional simulations. In particular, we considered Scenario 1 in Section 4.1 for two different values of \(\delta\), namely \(\delta=0\) (null hypothesis) and \(\delta=0.075\) (alternative hypothesis). The series length was set to \(T=200\). In addition, we fixed different values for both \(b\) and \(p\). Specifically, we set \(b\in\{4,6,\ldots,20\}\) and \(p\in\{1/4,1/6,\ldots,1/20\}\). Note that the values employed in Section 4.1 for \(T=200\) were \(b=6\) and \(p=1/6\). For each resampling procedure (MBB and SB), dissimilarity measure (\(\widehat{d}_{CC}\), \(\widehat{d}_{B}\) and \(\widehat{d}_{MLE}\)), value of \(\delta\), and value of the corresponding input parameter in the selected grid, we repeated the simulation mechanism described in Section 4.1 by considering again \(N=1000\), \(B=500\) and \(\alpha=0.05\).
Curves of rejection rates as a function of \(b\) (MBB) and \(p\) (SB) are displayed in the left and right panels of Figure 1, respectively, where each color corresponds to a different dissimilarity measure. In all cases, there are no dramatic differences among the rejection rates associated with different values of the corresponding input parameters. Under the null hypothesis (top panels), the curves oscillate around the nominal level of \(0.05\) with moderate deviations, which can
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(T=100\)} & \multicolumn{3}{c|}{\(T=200\)} & \multicolumn{3}{c}{\(T=500\)} \\ & BA & MBB & SB & BA & MBB & SB & BA & MBB & SB \\ \hline \(\delta=0.000\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & \(0.047\) & \(0.068\) & \(0.062\) & \(0.049\) & \(0.068\) & \(0.065\) & \(0.061\) & \(0.051\) & \(0.054\) \\ \(\widehat{d}_{B}\) & \(0.070\) & \(0.085\) & \(0.084\) & \(0.068\) & \(0.064\) & \(0.067\) & \(0.049\) & \(0.053\) & \(0.054\) \\ \(\widehat{d}_{MLE}\) & \(0.048\) & \(0.050\) & \(0.072\) & \(0.049\) & \(0.060\) & \(0.058\) & \(0.038\) & \(0.048\) & \(0.043\) \\ \hline \(\delta=0.025\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & \(0.356\) & \(0.397\) & \(0.365\) & \(0.644\) & \(0.698\) & \(0.653\) & \(0.923\) & \(0.947\) & \(0.910\) \\ \(\widehat{d}_{B}\) & \(0.058\) & \(0.071\) & \(0.068\) & \(0.087\) & \(0.103\) & \(0.091\) & \(0.175\) & \(0.193\) & \(0.131\) \\ \(\widehat{d}_{MLE}\) & \(0.174\) & \(0.147\) & \(0.126\) & \(0.254\) & \(0.307\) & \(0.259\) & \(0.373\) & \(0.389\) & \(0.293\) \\ \hline \(\delta=0.050\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & \(0.687\) & \(0.723\) & \(0.617\) & \(0.845\) & \(0.893\) & \(0.865\) & \(0.943\) & \(0.965\) & \(0.957\) \\ \(\widehat{d}_{B}\) & \(0.107\) & \(0.113\) & \(0.089\) & \(0.146\) & \(0.155\) & \(0.139\) & \(0.460\) & \(0.478\) & \(0.437\) \\ \(\widehat{d}_{MLE}\) & \(0.378\) & \(0.494\) & \(0.454\) & \(0.705\) & \(0.747\) & \(0.699\) & \(0.901\) & \(0.909\) & \(0.885\) \\ \hline \(\delta=0.075\) & & & & & & & & \\ \(\widehat{d}_{CC}\) & \(0.896\) & \(0.915\) & \(0.901\) & \(1.000\) & \(1.000\) & \(1.000\) & \(1.000\) & \(1.000\) & \(1.000\) \\ \(\widehat{d}_{B}\) & \(0.145\) & \(0.159\) & \(0.153\) & \(0.227\) & \(0.302\) & \(0.259\) & \(0.805\) & \(0.846\) & \(0.819\) \\ \(\widehat{d}_{MLE}\) & \(0.627\) & \(0.661\) & \(0.654\) & \(0.876\) & \(0.907\) & \(0.883\) & \(1.000\) & \(1.000\) & \(1.000\) \\ \hline \end{tabular}
\end{table}
Table 5: Simulated rejection rates in Scenario 5.
be due to the noise inherent to the simulation experiments. Analogously, the rejection rates under the alternative hypothesis show a steady behavior for the three metrics and both resampling procedures. Based on previous considerations, one can state that parameters \(b\) and \(p\) do not have a substantial impact on the behavior of the tests based on MBB and SB when the dependence structures of the underlying processes can be characterized by the first few lags. Note that this is a good property of these procedures, since it frees the user from having to perform hyperparameter selection to obtain suitable values of both parameters, which is usually computationally intensive.
In sum, the results presented in Section 4.3 corroborate the strong performance of the test based on \(\widehat{d}_{CC}\) and indicate that a careful selection of the parameters \(b\) and \(p\) is not essential for an appropriate behavior of the resampling procedures MBB and SB.
## 5 Application
This section presents an application of the proposed tests. To that aim, we consider a collection of series that was employed in Section 6.2 of [11] for clustering purposes. Specifically, the dataset contains 40 protein sequences.
Figure 1: Rejection rates as a function of \(b\) and \(p\) for procedures MBB (left panels) and SB (right panels). Scenario 1 with \(T=200\).
Proteins are large molecules composed of one or more chains of simple organic compounds called amino acids. There are 20 different amino acids making up the proteins of any living organism. Therefore, each protein sequence in the considered database can be seen as a CTS with 20 categories. In [11], the number of categories in each CTS was reduced to 3 by using the so-called protein sequence encoding. Specifically, the amino acids were categorized into three classes according to their hydrophobicity, which is a common transformation [32, 33]. It is worth highlighting that the application of categorical processes to protein data has been considered in several works [15, 34]. Half of the proteins in the database are found in different parts of human beings, while the other half are present in several variants of the COVID-19 virus. The maximum, minimum and median lengths of the CTS in the database are \(T=2511\), \(T=165\) and \(T=426\), respectively.
In [11], the metrics \(\widehat{d}_{CC}\), \(\widehat{d}_{B}\) and \(\widehat{d}_{MLE}\) were used in combination with the standard partitioning around medoids (PAM) procedure [35] to perform clustering in the dataset of protein sequences. Specifically, a number of \(K=2\) groups was given as input to the PAM algorithm. Thus, the main goal of that task was not to obtain groups of series generated from the same stochastic process, but to determine whether the corresponding metrics are able to identify the underlying protein families (human and COVID-19), which are assumed to define the true partition. In order to obtain groups of series generated from the same stochastic process, we propose to consider the clustering method based on \(p\)-values introduced by [36] along with the hypothesis tests constructed in this manuscript. In particular, the procedure of [36] is a hierarchical clustering approach starting from a pairwise matrix of \(p\)-values (which can be seen as a similarity matrix). In fact, a clustering homogeneity criterion for this method is implicitly provided by specifying a threshold significance level \(\alpha\) (e.g., 0.05 or 0.01), which automatically determines the number of groups. In this way, those elements with associated \(p\)-values greater than \(\alpha\) will be grouped together, which implies that only those series whose dynamic structures are not significantly different at level \(\alpha\) will be placed in the same group. It is worth mentioning that a function implementing this clustering procedure is available in the R package **TSclust** [37].
Based on the above considerations, the \(p\)-value-based clustering method described above was applied to the dataset of protein sequences by considering the 9 hypothesis tests proposed in this paper. For the sake of simplicity and illustration, only the results associated with the metric \(\widehat{d}_{CC}\) and the bootstrap approach MBB are provided, since the corresponding test showed the best overall performance in the simulation experiments of Section 4. Computation of the dissimilarity \(\widehat{d}_{CC}\) was carried out by considering \(\mathcal{L}=\{1,2\}\), since this set was chosen according to the selection procedure proposed in Section 3.4 of [11], which is aimed at finding the optimal collection of lags for a joint analysis of a CTS dataset. A straightforward adaptation of the MBB method to the case of series with unequal lengths was considered. A number of \(B=500\) bootstrap replicates was used to compute the pairwise matrix of \(p\)-values, and a threshold significance level \(\alpha=0.05\) was employed for the hierarchical clustering mechanism.
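As a rough illustration of how each pairwise \(p\)-value could be obtained, the sketch below implements a generic moving blocks bootstrap test for two categorical series. The dissimilarity function `dist` is a placeholder for any CTS metric such as \(\widehat{d}_{CC}\) (not shown), and pooling both series to resample under the null hypothesis is one simple choice we adopt here for illustration; the actual MBB extension used in this work may differ in its details.

```python
import numpy as np

def mbb_sample(series, length, b, rng):
    """Build a bootstrap series of the given length by concatenating
    overlapping blocks of length b drawn with replacement (MBB)."""
    n_blocks = int(np.ceil(length / b))
    starts = rng.integers(0, len(series) - b + 1, size=n_blocks)
    return np.concatenate([series[s:s + b] for s in starts])[:length]

def mbb_test(x, y, dist, b=10, B=500, seed=1234):
    """Bootstrap p-value for H0: x and y share the same generating process.
    'dist' is any CTS dissimilarity (e.g. an implementation of d_CC)."""
    rng = np.random.default_rng(seed)
    d_obs = dist(x, y)
    pooled = np.concatenate([x, y])   # one simple way to resample under H0
    d_boot = np.array([dist(mbb_sample(pooled, len(x), b, rng),
                            mbb_sample(pooled, len(y), b, rng))
                       for _ in range(B)])
    return float(np.mean(d_boot >= d_obs))
```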
As an illustrative step to understand the partition produced by the considered clustering procedure, we performed a two-dimensional scaling (2DS) based on the pairwise dissimilarity matrix for distance \(\widehat{d}_{CC}\). In this way, a projection of the protein sequences on a two-dimensional plane preserving the original distances as well as possible is available. The location of the 40 series in the transformed space is displayed in Figure 2. Different colors were used to distinguish human proteins from COVID-19 proteins.
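For reference, the 2DS projection can be obtained with classical (Torgerson) multidimensional scaling, as sketched below. The matrix `D_cc`, assumed precomputed, stands for the \(40\times 40\) matrix of pairwise \(\widehat{d}_{CC}\) distances.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in R^k from a pairwise
    distance matrix D, preserving the distances as well as possible."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:k]      # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# D_cc (assumed precomputed) is the 40x40 matrix of pairwise d_CC
# distances; the two columns of classical_mds(D_cc) give the 2DS plane.
```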
According to the 2DS plot, it is clear that the dissimilarity \(\widehat{d}_{CC}\) is able to detect both groups of protein families to some extent. However, these groups exhibit a different degree of variability (e.g., the points representing COVID-19 proteins are more concentrated than the ones representing human proteins). Interestingly, the partition defined by both underlying families is far from being the one identified by the considered clustering approach. In fact, the hierarchical procedure based on \(p\)-values determines the existence of only one group containing more than one series. Specifically, this group includes thirteen series associated with COVID-19 proteins. These series correspond to the points in Figure 2 which lie inside the rectangle. Note that it is reasonable that the generating processes of these time series are not significantly different, since the corresponding pairwise distances are very close to zero in accordance with the 2DS plot. Each one of the remaining series constitutes an isolated group, which indicates rejection of the null hypothesis of equality of generating structures in all their pairwise comparisons.
It is worth emphasizing that the above application clearly highlights the usefulness of the proposed hypothesis tests. Specifically, we showed that, even in a classical machine learning problem as clustering, an approach based on these tests can lead to dramatically different conclusions than the ones obtained using
Figure 2: Two-dimensional scaling plane based on distance \(\widehat{d}_{CC}\) for the 40 protein sequences. The points inside the rectangle represent time series whose generating processes are not significantly different according to the test based on \(\widehat{d}_{CC}\) and MBB.
more conventional techniques. In fact, while a traditional clustering algorithm detects two groups of series displaying similar dependence structures in the protein dataset (those associated with both protein families), the approach based on \(p\)-values indicates that the series corresponding to human proteins are not so similar, since the equality of generating processes is rejected for each pair of them. Note that the latter approach is more informative and can lead to interesting insights that cannot be reached by using standard clustering procedures.
## 6 Conclusions
This work deals with the construction of hypothesis tests for comparing the generating processes of two CTS, which are based on two main elements:
* A distance measure between CTS evaluating discrepancy between the marginal distributions and the dependence structures of the series. Specifically, we consider two metrics relying on model-free features (\(\widehat{d}_{CC}\) and \(\widehat{d}_{B}\)) and a parametric dissimilarity (\(\widehat{d}_{MLE}\)) assuming a particular class of categorical models.
* A resampling procedure used to properly approximate the asymptotic distribution of the corresponding dissimilarities under the null hypothesis even when this hypothesis is not true. Particularly, we employ a parametric bootstrap approach based on estimated model coefficients and two extensions of the well-known moving blocks bootstrap (MBB) and stationary bootstrap (SB).
Each combination of dissimilarity measure and resampling procedure gives rise to a different hypothesis test. A good performance of these tests requires both a high ability of the metric to discriminate between underlying structures and a high capability of the resampling mechanism to provide a proper approximation of the corresponding asymptotic distribution. The proposed tests were assessed in a broad simulation study including different types of categorical processes with several levels of complexity. The numerical experiments resulted in the following conclusions:
* Under the null hypothesis, most tests respect the significance level rather well when a sufficiently large value for the series length is considered.
* When the null hypothesis is not true, the test based on \(\widehat{d}_{CC}\) and the MBB exhibits the highest power, which is advantageous for the practitioners, since neither the dissimilarity nor the resampling mechanism assume a specific class of categorical model.
The sensitivity of methods MBB and SB with respect to their input parameters was also analyzed, and the results indicated that both techniques exhibit
approximately the same behavior for a broad range of values of the corresponding parameters. Finally, the test based on \(\widehat{d}_{CC}\) and MBB was applied to a dataset of protein sequences along with a clustering procedure based on the \(p\)-values of the test, and interesting conclusions were reached.
There are three main ways in which this work can be extended. First, new hypothesis tests similar to the ones proposed here could be constructed by employing additional dissimilarities and resampling procedures. Second, note that bootstrap approaches have to be used in this work due to the impracticality of deriving the asymptotic null distribution of the distances under the general assumption of stationarity. However, by making some additional assumptions (e.g., by considering a specific type of generating models), the computation of the corresponding distributions could be substantially simpler. In such a case, it would be interesting to analyze the advantages and disadvantages of a test based on these distributions with respect to the ones introduced in this manuscript. Third, the clustering methods based on \(p\)-values applied in Section 5 could be rigorously analyzed. In particular, their performance in several simulation scenarios could be assessed by comparing these procedures with alternative clustering approaches.
|
2310.05973 | Observation of strong attenuation within the photonic band gap of
multiconnected networks | We theoretically and experimentally study a photonic band gap (PBG) material
made of coaxial cables. The coaxial cables are waveguides for the
electromagnetic waves and provide paths for direct wave interference within the
material. Using multiconnected coaxial cables to form a unit cell, we realize
PBGs via (i) direct interference between the waveguides within each cell and
(ii) scattering among different cells. We systematically investigate the
transmission of EM waves in our PBG materials and discuss the mechanism of band
gap formation. We observe experimentally for the first time the wide band gap
with strong attenuation caused by direct destructive interference. | Pengbo Zhu, Runkai Chen, Xiangbo Yang, Yanglong Fan, Huada Lian, Zhen-Yu Wang | 2023-09-29T03:46:14Z | http://arxiv.org/abs/2310.05973v1 | # Observation of strong attenuation within the photonic band gap of multiconnected networks
###### Abstract
We theoretically and experimentally study a photonic band gap (PBG) material made of coaxial cables. The coaxial cables are waveguides for the electromagnetic waves and provide paths for direct wave interference within the material. Using multiconnected coaxial cables to form a unit cell, we realize PBGs via (i) direct interference between the waveguides within each cell and (ii) scattering among different cells. We systematically investigate the transmission of EM waves in our PBG materials and discuss the mechanism of band gap formation. We observe experimentally for the first time the wide band gap with strong attenuation caused by direct destructive interference.
## I Introduction
Dielectric structures with dielectric constants arranged periodically on the wavelength scale can exhibit photonic band gaps (PBGs), in which the propagation of electromagnetic (EM) waves are inhibited [1; 2]. When an incident wave enters a PBGs, it decays rapidly, forming an attenuation mode, a property that is of great importance for the modulation of EM waves [3; 4; 5]. Photonic crystals are conventional PBG materials with PBGs and have attracted large interests both theoretically and experimentally [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. However, direct measurement of EM waves in photonic crystals is challenging [17; 18; 4; 13].
A periodic network structure consisting of waveguides has been proposed and localization of states have been observed inside this material [19; 20]. Such a waveguide network essentially introduces resonant loops, which can be resonated and antiresonated in a certain frequency range [19; 20; 21; 22]. In particular, the waveguide network made of coaxial cables is easy to implement in experiments and allows for a more intuitive study of the EM wave transmission inside the structure, as the amplitude and phase changes of the nodes in each part of the network can be better measured, which will allow for further in-depth study and understanding of the EM wave behavior in the PBGs [19; 20; 21; 22].
It is also convenient to modify the network structure to have interesting features. For example, it was theoretically found that the use of multiple waveguides to connect the same pair of nodes in the network can lead to strong attenuation of the wave propagation within the PBGs [23]. This motivates further theoretical studies in various structures with interesting properties [24; 25; 26; 27]. Optical waveguide networks can produce rich photon attenuation modes [28], interesting comb optical transmission spectra [29; 30], extremely wide PBGs [12; 21; 23], super-photon localization [24; 28; 31; 32]. Some of the properties of optical waveguides are applied to micro-cantilever sensor [33], passive optical device [34], and the physics of \(\mathcal{PT}\)-symmetry can also be investigated in it [35; 36; 37; 38].
In this paper, we experimentally design a one-dimensional (1D) networks system constructed from coaxial cables, where multiple cables are used to realize different paths for the EM waves such that the EM waves can have direct interference within each unit cell [see Fig. 1]. Changing the relative length ratios of the coaxial cables, we find that direct destructive interference between different paths leads to a very wide PBG. Within this wide band gap, the propagation of EM wave is strongly prohibited. Remarkably, under certain conditions, the incident EM wave in a PBG can not travel through even one unit cell and are almost totally reflected at the boundary of the material, because of the strong attenuation caused by direct interference.
The paper is organized as follows: Section II describes the modeling and methodology used to study our PBG material as well as our experimental setup. Section III investigates the energy band structure and attenuation characteristics of our material. We draw our conclusions in Sec. IV.
## II Model, theory, and methods
### Network Structure
Fig. 1(a) is a schematic structure of our PBG material made of connected waveguides containing \(N\) unit cells. In each cell there are two connected waveguides with lengths
\(x_{1}\) and \(x_{2}\) for direct interference, and these interference structures are linked by waveguides of a length \(d\). We use coaxial cables to realize the waveguides for EM waves. Fig. 1 illustrates a unit cell in which the cables are connected to each other by connectors. The ports of the connectors can be used to measure the amplitudes of the EM waves. Fig. 1 shows the structure of a coaxial cable, which consists of a center conductor, a layer of insulating material, a mesh fabric shield, and an outer sheath.
Compared with traditional photonic crystals, the structure of our networks is easier to realize; the use of coaxial cables is more flexible from the measurement point of view. In photonic crystals, EM waves are measured mainly on the surface of the material [4; 16; 20], whereas in the waveguide networks, one can measure the phase and amplitude of EM waves inside any unit cells, thus bringing convenience to experimental studies.
### Transmission line networks
Pictures of our experimental samples are shown in Fig. 2, with Fig. 2 showing a network system consisting of 5 unit cells. The connectors used for our experiments are shown in Fig. 2. The red ports of the connector are used to connect the upper and lower arms of the interference structure in each unit cell, while the blue ports are used to connect the interference structures of successive unit cells or for measurement of the EM waves. In our experiments, we used a coaxial cable (type RG58C/U) with the waveguide length shown in Fig. 2. The details of the cable-node connection can be seen in the single-cycle unit cell formed by the connectors and the cables, as shown in Fig. 2.
### Equations for transmission line networks with single materials
For our networks system shown in Fig. 1,the propagation of EM waves in a coaxial cable line satisfies the following homogeneous wave equation [20; 21]:
\[\frac{\partial^{2}\psi_{nm}(x)}{\partial x^{2}}=\frac{\varepsilon\omega^{2}}{{ c_{0}}^{2}}\psi_{nm}(x), \tag{1}\]
where \(\psi_{nm}(x)\) denotes the voltage wave function of any segment between nodes \(n\) and \(m\), \(x\) is the distance measured from node \(m\), \(\omega=2\pi f\) is the angular frequency of EM wave, \(c_{0}\) is the wave speed of EM wave in vacuum, \(\varepsilon=\varepsilon^{\prime}+i\varepsilon^{\prime\prime}\) is the relative permittivity of the dielectric for the coaxial cables, and \(\varepsilon^{\prime}\) and \(\varepsilon^{\prime\prime}\) are the real and imaginary parts of the relative permittivity, respectively. The EM wave function can be written as a linear combination of two plane waves traveling in opposite directions [21; 23; 39; 31; 25]:
\[\psi_{nm}=\alpha_{nm}e^{i\kappa x}+\beta_{nm}e^{-i\kappa x}, \tag{2}\]
where \(i=\sqrt{-1}\) is the imaginary unit, \(\kappa=\frac{\omega}{c}\sqrt{\varepsilon}=\frac{\omega}{c}\sqrt{\varepsilon^ {\prime}+i\varepsilon^{\prime\prime}}\)[20; 21]. In the case of \(\varepsilon^{\prime\prime}\ll\varepsilon^{\prime}\), it is deduced that \(\kappa\) has the simple form that \(\kappa\cong k+(1/2L)i\), where \(k=\omega\sqrt{\varepsilon^{\prime\prime}}/c_{0}\) and \(L=\varepsilon^{\prime}/k\varepsilon^{\prime\prime}\) is the absorption length. For any node \(n\) of the network, its wave function is continuous at the node [21; 23; 31; 32; 25]:
\[\begin{array}{l}\left.\begin{array}{l}\psi_{nm}|_{x=0}=\psi_{n},\\ \psi_{nm}|_{x=x_{nm}}=\psi_{m},\end{array}\right.\end{array} \tag{3}\]
Figure 1: Schematic diagram of the waveguide network model. (a) Schematic structure of a one-dimensional coaxial cable network containing \(N\) unit cells. Each unit cell has two waveguides of different lengths \(x_{1}\) and \(x_{2}\) for interference. The structures for interference are linked by coaxial cables of length \(d\). An incident EM wave \(E_{i}\) will have a reflected (\(E_{r}\)) and a transmitted (\(E_{o}\)) parts. (b) We use connectors to realize the structure of a cell of the model. (c) The structure of the coaxial cable for our EM waveguides.
Figure 2: Transmission line networks. (a) Photograph of an experimental sample, which has a size of \(N=5\). The nodes are marked with red dots and the red curves indicate the connectivity of the network. (b) Nodes formed by connectors. Red circles mark the ports connecting the upper and lower arms of the interference structure. Blue circles indicate the ports for connecting the unit cells or for connecting measurement cables. (c) A coaxial cable waveguide with connectors at the ends. (d) A unit cell consisting of cables and connectors.
where \(\psi_{n}\) and \(\psi_{m}\) represent the wavefunction at the nodes \(n\) and \(m\), respectively. \(x_{nm}\) is the length of a waveguide connecting the nodes \(n\) and \(m\). With the use of Eqs. (2) and (3), one obtains [31, 25, 32]:
\[\psi_{nm}=\psi_{n}\frac{\sinh\left[i\kappa(x_{nm}-x)\right]}{\sinh(i\kappa x_{ nm})}+\psi_{m}\frac{\sinh(i\kappa x)}{\sinh(i\kappa x_{nm})}, \tag{4}\]
At each node \(m\), the wave function is continuous and the derivative of the wave function at node \(m\) gives the flux conservation condition [23, 25, 32]:
\[\sum\limits_{n}\frac{\partial\psi_{nm}(x)}{\partial x}\Bigg{|}_{x=0}=0, \tag{5}\]
where the sum runs over all nodes \(n\) directly connected to node \(m\). Substituting Eq. (4) into Eq. (5), one gets [19, 20, 21, 23, 25, 31, 39]:
\[-\psi_{n}\sum\limits_{m}\coth(i\kappa x_{nm})+\sum\limits_{m}\frac{\psi_{m}}{ \sinh(i\kappa x_{nm})}=0. \tag{6}\]
for the network consisting of coaxial cables, without considering dissipation. Here \(\coth\) and \(\sinh\) denote the hyperbolic cotangent and hyperbolic sine functions, respectively. When the dissipation of the material is taken into account, we add a constant \(\gamma\) to the \(i\kappa x_{nm}\) terms of Eq. (6) to account for the additional losses caused by the connectors.
We obtain the absorption length \(L\) of the cables and the attenuation constant \(\gamma\) of the connectors by using the calculated transmission coefficients in a single cell. Using the cable specifications given by the vendor, as well as our experimental measurements and data fitting, we obtained \(L/\text{m}=665{(f/\text{MHz})}^{-0.52}\) and \(\gamma=-0.005\). We used \(\varepsilon^{\prime}=2.3\).
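As a quick numerical illustration, the snippet below assembles the complex wavenumber \(\kappa\cong k+i/(2L)\) from the quantities just quoted (\(\varepsilon^{\prime}=2.3\) and the fitted absorption length); the connector loss \(\gamma\) enters the network equation separately and is not part of \(\kappa\).

```python
import numpy as np

C0 = 299_792_458.0   # vacuum speed of light (m/s)
EPS_REAL = 2.3       # relative permittivity of the cable dielectric

def kappa(f_hz):
    """Complex wavenumber kappa ~ k + i/(2L) for the RG58C/U cables,
    with the fitted absorption length L/m = 665 (f/MHz)^(-0.52)."""
    k = 2.0 * np.pi * f_hz * np.sqrt(EPS_REAL) / C0
    L = 665.0 * (f_hz / 1e6) ** (-0.52)
    return k + 1j / (2.0 * L)

print(kappa(np.array([99e6])))  # roughly 3.1 + 0.008j rad/m at 99 MHz
```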
The transmission coefficient \(t\) is then calculated theoretically by solving the network equation with the generalized eigenfunction method, where \(t\) is the transmission amplitude of the outgoing wave; the transmittance is \(|t|^{2}\) [21, 25, 31, 32, 39].
### Generalized Floquet-Bloch's Theorem
The network equation describes the connections between adjacent nodes in a network. In order to quantitatively analyze the energy bands of the system, we derive the dispersion relation of the network structure. Because the length of a unit cell is not well defined when the same pair of nodes is connected by cables of different lengths, we use a generalized version of the Floquet-Bloch theorem with the following Bloch function [23, 25, 31, 32, 39]:
\[\psi_{n+T}(\Phi)=\psi_{n}(\Phi)e^{i\Phi T}. \tag{7}\]
where the integer \(n\) denotes an arbitrary node index, the integer \(T\) is the period of the network configuration, and \(\Phi\) is a dimensionless phase factor. For nodes 1, 2, and 3 in Fig. 1(a), it follows from Eq. (6):
\[\begin{cases}-\psi_{1}\left[\coth(i\kappa x_{1})+\coth(i\kappa x_{2})+\coth(i\kappa d)\right]+\left[\frac{\psi_{0}}{\sinh(i\kappa d)}+\frac{\psi_{2}}{\sinh(i\kappa x_{1})}+\frac{\psi_{2}}{\sinh(i\kappa x_{2})}\right]=0,\\ -\psi_{2}\left[\coth(i\kappa x_{1})+\coth(i\kappa x_{2})+\coth(i\kappa d)\right]+\left[\frac{\psi_{3}}{\sinh(i\kappa d)}+\frac{\psi_{1}}{\sinh(i\kappa x_{1})}+\frac{\psi_{1}}{\sinh(i\kappa x_{2})}\right]=0,\\ -\psi_{3}\left[\coth(i\kappa x_{1})+\coth(i\kappa x_{2})+\coth(i\kappa d)\right]+\left[\frac{\psi_{2}}{\sinh(i\kappa d)}+\frac{\psi_{4}}{\sinh(i\kappa x_{1})}+\frac{\psi_{4}}{\sinh(i\kappa x_{2})}\right]=0.\end{cases} \tag{8}\]
From Eq. (7), the wave functions at node 0 and node 4 are related to the wave function at node 2 as [23, 24, 32]:
\[\begin{array}{l}\psi_{0}=\psi_{2}e^{-i\Phi},\\ \psi_{4}=\psi_{2}e^{i\Phi},\end{array} \tag{9}\]
Substituting Eq. (9) into Eq. (8) gives
\[\begin{array}{l}\cos\Phi=f(\omega\sqrt{\varepsilon^{\prime}}/c_{0})\equiv \frac{\text{A}^{2}\text{B}^{2}-C-2B\sinh(i\kappa d)}{2B\left[\sinh(i\kappa x_{1 })+\sinh(i\kappa x_{2})\right]},\end{array} \tag{10}\]
where
\[\begin{array}{l}A=\coth(i\kappa d)+\coth(i\kappa x_{1})+\coth(i\kappa x_{2}),\\ B=\sinh(i\kappa d)\sinh(i\kappa x_{1})\sinh(i\kappa x_{2}),\\ C=\sinh^{2}(i\kappa x_{1})\sinh^{2}(i\kappa x_{2})+\sinh^{2}(i\kappa x_{1})\sinh^{2}(i\kappa d)+\sinh^{2}(i\kappa x_{2})\sinh^{2}(i\kappa d).\end{array}\]
From Eq. (10) we obtain the band structure of our networks. Because of Eq. (7), when the solution \(\Phi\) is a complex number with a non-zero imaginary part \(\text{Im}(\Phi)\), the propagation of EM waves among the network nodes decays at a rate given by the amplitude of \(\text{Im}(\Phi)\). That is, we have a band gap when \(\Phi\) has no real solution in Eq. (10), and the larger the amplitude of \(\text{Im}(\Phi)\), the stronger the attenuation of EM waves among the network nodes. On the other hand, a real solution \(\Phi\) corresponds to a propagation mode. Therefore, we obtain the dispersion relation for the energy bands from the existence of solutions with a real \(\Phi\) [23].
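A minimal lossless sketch of this band-structure calculation is given below: it evaluates the right-hand side of Eq. (10) as printed, takes the complex arccosine to obtain the Bloch phase \(\Phi\), and flags frequencies with \(\text{Im}(\Phi)\neq 0\) as lying inside a gap. The geometry \((x_{1},x_{2},d)=(2,1,1)\) m used below is one of the experimental structures studied in Sec. III, and frequencies at which \(B\) vanishes are resonances of individual cables that would need separate treatment.

```python
import numpy as np

C0, EPS = 299_792_458.0, 2.3

def f_of_omega(freq_hz, x1, x2, d):
    """Right-hand side of Eq. (10) in the lossless case (kappa real)."""
    kap = 2.0 * np.pi * freq_hz * np.sqrt(EPS) / C0
    s = lambda ell: np.sinh(1j * kap * ell)        # sinh(i*kappa*l)
    c = lambda ell: 1.0 / np.tanh(1j * kap * ell)  # coth(i*kappa*l)
    A = c(d) + c(x1) + c(x2)
    B = s(d) * s(x1) * s(x2)
    C = s(x1)**2 * s(x2)**2 + s(x1)**2 * s(d)**2 + s(x2)**2 * s(d)**2
    # Points where B ~ 0 are cable resonances; they produce inf/nan here.
    return (A**2 * B**2 - C - 2.0 * B * s(d)) / (2.0 * B * (s(x1) + s(x2)))

freqs = np.linspace(1e6, 200e6, 2001)
phi = np.arccos(f_of_omega(freqs, x1=2.0, x2=1.0, d=1.0))  # complex phase
in_gap = np.abs(phi.imag) > 1e-6
print(f"fraction of frequencies inside gaps: {in_gap.mean():.2f}")
# Transmission through N cells decays roughly like exp(-2 * N * |Im(phi)|).
```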
## III Results and Discussion
### Observation of large PBGs
The structure of our network allows us to tune the interference effect between the two arms by changing the lengths \(x_{1}\) and \(x_{2}\) (see Fig. 1). Without loss of generality, we assume \(x_{1}\geq x_{2}\). Furthermore, by changing the length \(d\) of the cables that connect the interference structures together, we can also modify the scattering among different cells. From Eq. (10) and Fig. 3, we can see that the band structure changes with the ratio between \(x_{1}\) and \(x_{2}\) as well as with the length \(d\). The forbidden bands are indicated by gray areas in Fig. 3. As demonstrated in Fig. 3, the ratio between the lengths \(x_{1}\) and \(x_{2}\) of the two arms has a large influence on the PBGs [23, 24, 25].
For the case of \(x_{1}=x_{2}\) there is no destructive interference, and therefore when \(d=0\) there is no PBG. When \(d>0\) there are small PBGs even when \(x_{1}=x_{2}\), which are due to scattering among different cells. When \(x_{1}/x_{2}=2\), there is strong destructive interference and, as a consequence, a large band gap is formed.
### Experimental demonstration of transmittance
To experimentally probe the system, we connected an Agilent E4438C vector signal generator to one side of the network to input voltage waves. An oscilloscope was connected at the other side of the network to measure the transmission coefficients and intensities. Based on the theoretical results shown in Fig. 3, we prepared different samples of the networks with three different sets of parameters, namely, Structure 1 (S1) with \((x_{1},x_{2},d)=(1,1,1)\) m, S2 with \((x_{1},x_{2},d)=(1,1,2)\) m, and S3 with \((x_{1},x_{2},d)=(2,1,1)\) m.
The experimentally measured transmittance \(|t|^{2}\), as well as the theoretically calculated values, for the networks S1, S2 and S3 are shown in Fig. 4. The results in Fig. 4 show that the transmittance within the PBGs becomes smaller for a larger size \(N\) of the network. More interestingly, the attenuation of EM waves within the PBGs is significantly larger for the network S3, which has the ratio \(x_{1}/x_{2}=2\), compared with the networks S1 and S2, which have \(x_{1}/x_{2}=1\). This shows that the destructive interference between the two arms \(x_{1}\) and \(x_{2}\) significantly enhances the formation of PBGs. The results are consistent with the band structures in Fig. 3.
### Wave Intensity Distribution at Characteristic Frequencies
In order to investigate the propagation modes of EM waves, we have systematically probed the S1 and S3 networks with various periods \(N=5\), \(12\), and \(25\). Firstly, we use the inverse participation ratio [20, 39, 40]:
\[\text{IPR}=\Big{(}\sum_{n}|\psi_{n}|^{2}\Big{)}^{2}\Big{/}\sum_{n}|\psi_{n}|^{4} \tag{11}\]
to characterize the wave intensity distribution. For propagation modes, the value of the IPR increases with the size (i.e., \(N\)) of the network [20], whereas for an EM wave whose frequency lies within a PBG, the IPR hardly changes with the network size.
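A direct implementation of Eq. (11) is a one-liner; the toy example below confirms that a state spread uniformly over \(N\) nodes gives \(\text{IPR}=N\), while a state concentrated on a single node gives \(\text{IPR}=1\).

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio, Eq. (11), from the node amplitudes."""
    p2 = np.abs(psi) ** 2
    return float(p2.sum() ** 2 / (p2 ** 2).sum())

extended = np.ones(25)   # uniform amplitude over 25 nodes
localized = np.zeros(25)
localized[0] = 1.0       # all weight on a single node
print(ipr(extended), ipr(localized))  # 25.0 1.0
```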
Figure 3: Band structures for different lengths of \(x_{1}\), \(x_{2}\), and \(d\), with the gray areas indicating the band gaps. Here we assume no absorption and losses, i.e., \(L=\infty\) and \(\gamma=0\).
We plot the IPR spectra in Fig. 5, where we can see that for frequencies near the band edge, the IPR increases with the size of the networks. As the frequency enters the gap, the IPR becomes smaller and its dependence on the network size weakens.
To have a clear picture of the EM wave distribution inside the network, we further conducted experiments to measure the intensities of the EM waves at all the nodes in the networks, for the EM wave frequencies indicated in Fig. 5. We show in Figs. 6 and 7 the voltage intensities of all the nodes for the networks S1 (with \(x_{1}=x_{2}\)) and S3 (with \(x_{1}=2x_{2}\)), respectively. One can see that the theoretical results agree with the experimental values. The results show that at the frequencies where the IPR increases with the network size \(N\), the energy of the EM wave has a broad distribution, which indicates an extended state. At the frequencies where the IPR is not sensitive to the network size, the intensities are negligible over a broad range of nodes, which suggests an evanescent mode [28].
Note that as we can see in Fig. 7 for the network S3, which has \(x_{1}=2x_{2}\), an ultra strong attenuation of EM wave within a PBG is observed. In particular, for the frequency \(f_{23}=99\) MHz close to the center of the PBG, all the nodes within the network have negligible intensities, because the amplitude of the attenuation mode decays to zero even within the first unit cell and all the EM wave energy is reflected at the input side. This ultra strong attenuation is due to the direct destructive interference between the two arms of lengths \(x_{1}\) and \(x_{2}\). For the network S1, which has \(x_{1}=x_{2}\), the attenuation of EM wave within PBGs is weaker, when one compares the results in Figs. 6 and 7 as well as the plots in Fig. 8. The results show that the PBG due to direct interference between the waveguides within each cell is quite different
Figure 4: Experimentally measured and theoretically calculated transmittance \(|t|^{2}\) for the networks S1, S2, and S3 with different network sizes \(N\).
Figure 5: (a) IPR spectra for the S1 networks with different values of \(N\). (b) as (a) but for the S3 networks.
Figure 6: Intensity of the wave distribution for \(N=5\) for different frequencies of incident EM waves in an S1 network. (a) shows the simulated results, while (b) shows the experimental results.
from the conventional PBGs due to scattering among different cells and provides a much stronger attenuation of EM waves.
To further investigate the strong attenuation mechanism of the S3 network at the frequency \(f_{23}=99\) MHz, in Fig. 9 we calculated the wave intensity distribution between the signal generator output and the first contact of the network using the theory of Eq. (4). We find that the wave intensities in the coaxial cable from the signal output to the first node of the network are all less than \(0.0021\). Meanwhile, we performed experimental verification and found that the voltage waves are indeed attenuated rapidly in the first section of the coaxial cable. It is clear that this is not due to the geometric scattering structure in the network, but is the result of the interference between the incident and reflected waves in a single coaxial cable.
## IV Conclusions
We have theoretically and experimentally studied a PBG material made of multiconnected coaxial cables. We have observed for the first time large PBGs that are due to direct wave interference within each unit cell. From the measured transmission spectra and intensities of EM waves inside the networks, we found that direct interference between two waveguides within the same unit cell provides a new mechanism of PBG formation. In particular, when the length ratio between the two arms is \(x_{1}/x_{2}=2\), a PBG with ultra strong attenuation of EM waves is formed due to direct destructive interference. The strong attenuation within the PBG induces an ultra strong reflection of EM waves at the boundary of the network. Our results demonstrate a new way to realize PBGs and may have applications to the control of various waves (e.g., EM waves or acoustic waves). It would also be interesting to study how the multiconnected waveguides would influence topological transport in transmission line networks [41].
Figure 8: Simulated wave intensity distribution with N=12 for different frequencies of incident EM waves. (a) Intensity distribution map for the S1 network; (b) Intensity distribution map for the S3 network.
Figure 7: Intensity of the wave distribution for \(N=5\) for different frequencies of incident EM waves in an S3 network. (a) shows the simulated results, while (b) shows the experimental results.
Figure 9: Calculated wave intensity distribution as a function of the location \(x\) relative to node 1 inside an S3 network with \(N=5\) for a wave frequency \(f_{23}=99\) MHz. The length between the input node 0 and node 1 is 1 m.
## Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant Numbers 11674107, 61475049, 11775083, 61774062, 61771205, 12074131) and the Natural Science Foundation of Guangdong Province, China (Grant No. 2021A1515012030).
|
2301.13688 | The Flan Collection: Designing Data and Methods for Effective
Instruction Tuning | We study the design decisions of publicly available instruction tuning
methods, and break down the development of Flan 2022 (Chung et al., 2022).
Through careful ablation studies on the Flan Collection of tasks and methods,
we tease apart the effect of design decisions which enable Flan-T5 to
outperform prior work by 3-17%+ across evaluation settings. We find task
balancing and enrichment techniques are overlooked but critical to effective
instruction tuning, and in particular, training with mixed prompt settings
(zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+)
performance in all settings. In further experiments, we show Flan-T5 requires
less finetuning to converge higher and faster than T5 on single downstream
tasks, motivating instruction-tuned models as more computationally-efficient
starting checkpoints for new tasks. Finally, to accelerate research on
instruction tuning, we make the Flan 2022 collection of datasets, templates,
and methods publicly available at
https://github.com/google-research/FLAN/tree/main/flan/v2. | Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts | 2023-01-31T15:03:44Z | http://arxiv.org/abs/2301.13688v2 | # The Flan Collection: Designing Data and Methods
###### Abstract
We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 models (Chung et al., 2022). Through careful ablation studies on the Flan Collection _of instruction tuning tasks and methods_, we tease apart the effect of design decisions that enable FlanT5 to outperform prior work by 3-17%+ across evaluation settings. We find task balancing and enrichment techniques are overlooked but critical to effective instruction tuning, and in particular, training with mixed prompt settings (zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+) performance in _all_ settings. In further experiments, we show Flan-T5 requires less finetuning to converge higher and faster than T5 on single downstream tasks--motivating instruction-tuned models as more computationally-efficient starting checkpoints for new tasks. Finally, to accelerate research on instruction tuning, we make the Flan 2022 collection of datasets, templates, and methods publicly available.1
Footnote 1: Data generation code available at: [https://github.com/google-research/FLAN/tree/main/flan/v2](https://github.com/google-research/FLAN/tree/main/flan/v2). Generation code allows users to vary mixtures rates, templates, prompt types and data augmentations techniques, for faster public research.
Introduction
Large language models such as PaLM (Chowdhery et al., 2022), Chinchilla (Hoffmann et al., 2022), and ChatGPT among others (Brown et al., 2020; Ouyang et al., 2022) have unlocked new capabilities in performing natural language processing (NLP) tasks from reading instructive prompts. Prior art has shown that instruction tuning--finetuning language models on a collection of NLP tasks formatted with instructions--further enhances the ability of language models to perform an unseen task from an instruction (Wei et al., 2021; Sanh et al., 2021; Min et al., 2022).
In this work, we evaluate the methods and results of _open sourced_ instruction generalization efforts, comparing their finetuning techniques and methods. And in particular, we identify and evaluate the critical methodological improvements in the "Flan 2022 Collection", which is the term we use for the collection _of data and methods for data augmentation and instruction tuning_, first implemented and used in Chung et al. (2022). Where Chung et al. (2022) focuses on the emergent and state-of-the-art results of combining Flan 2022 with PaLM 540B, this work focuses in on the details of the instruction tuning methods themselves, ablating individual factors, and comparing them directly to prior work by keeping the pretrained model size and checkpoint consistent.
The Flan 2022 Collection offers the most extensive publicly available set of tasks and methods for instruction tuning, which we have compiled in one place. We have also supplemented this with hundreds more of our own high-quality templates, richer formatting patterns, and data augmentations. We show that a model trained on this collection outperforms other public collections on all tested evaluation benchmarks, including the original Flan 2021 (Wei et al., 2021), T0++ (Sanh et al., 2021), Super-Natural Instructions (Wang et al., 2022c), and the concurrent work on OPT-IML (Iyer et al., 2022). As shown in Figure 1, this includes 4.2%+ and 8.5% improvements on the MMLU (Hendrycks et al., 2020) and BIG-Bench Hard (Suzgun et al., 2022) evaluation benchmarks respectively, for equally sized models.
Analysis of the Flan 2022 method suggests the strong results stem both from the larger and more diverse set of tasks, but also from a set of simple finetuning and data augmentation techniques. In particular, training on a mix of examples templatized with zero-shot, few-shot, and chain-of-thought prompts improves performance in every one of these settings, together. For instance, adding just 10% few-shot prompts improves zero-shot prompting results by 2%+. Additionally, enriching task diversity by inverting input-output pairs, as used in (Sanh et al., 2021; Min et al., 2022), along with balancing task sources, are both shown to be critical to performance. The resulting Flan-T5 model converges faster and at a higher performance than T5 models in single-task finetuning--suggesting instruction-tuned models offer a more computationally-efficient starting checkpoint for downstream applications, corroborating Aribandi et al. (2021) and Liu et al. (2022b).
We hope making these findings and resources publicly available will unify resources around instruction tuning and accelerate research into more general-purpose language models. We summarize this work's core contributions as follows:
* Methodological: Show that training with mixed zero- and few-shot prompts yields much better performance in **both** settings (Section 3.2).
* Methodological: Measure and demonstrate the critical techniques to effective instruction tuning: scaling Section 3.3, enriching task variety with input inversion (Section 3.4), adding chain-of-thought training data, and balancing different data sources (Section 3.5).
* Results: Demonstrate these technical choices yield 3-17% Held-Out task improvements over existing open source instruction tuning collections (Figure 1).
* Results: Demonstrate Flan-T5 serves as a stronger and more computationally-efficient starting checkpoint for single-task finetuning (Section 4).
* Open source the new Flan 2022 task collection, templates, and methods for public research.
## 2 Public Instruction Tuning Collections
**Large Language Models.** Instruction tuning has emerged as a tool to make large language models (LLMs) and their abilities more useful for interactive dialog and functional tasks. Previous work (Raffel et al., 2020; Liu et al., 2019; Aghajanyan et al., 2021; Aribandi et al., 2021) experimented with large scale multi-task finetuning, to improve downstream single target finetuning, but without instruction prompts. UnifiedQA and others (Khashabi et al., 2020; McCann et al., 2018; Keskar et al., 2019) unified a wide range of NLP tasks into a single generative question answering format, using prompt instructions for multi-task finetuning and evaluation.
**The First Wave.** Since 2020, several instruction tuning task collections have been released in rapid succession, outlined in Figure 2. Natural Instructions (Mishra et al., 2021), Flan 2021 (Wei et al., 2021), P3 (the Public Pool of Prompts, Bach et al., 2022) aggregated large NLP task collections and templatized them with instructions (_zero-shot prompting_), specifically for finetuning models to generalize to unseen instructions. MetaICL (Min et al., 2022) also consolidated other task collections (Ye et al., 2021; Khashabi et al., 2020) to train models to learn tasks "in-context" - from several input-output examples, known as _few-shot prompting_, but in this case without instructions. Each of these works affirmed the scaling benefits of task and template diversity,
Figure 2: A **Timeline of Public Instruction Tuning Collections** specifies the collection release date, detailed information on the finetuned models (the base model, their size, and whether the model itself is Public (\(\mathrm{P}\)) or Not Public (\(\mathrm{NP}\))), what prompt specification they were trained for (zero-shot, few-shot, or Chain-of-Thought), the number of tasks contained in the Flan 2022 Collection (released with this work), and core methodological contributions in each work.
and some reported strong benefits from inverting the inputs and outputs in templates to produce new tasks ("noisy channel" in Min et al., 2022).
**The Second Wave.** A second wave of instruction tuning collections expanded prior resources: combining more datasets and tasks into one resource, like Super-Natural Instructions (Wang et al., 2022c) or OPT-IML (Iyer et al., 2022), adding multilingual instruction tuning in xP3 (Muennighoff et al., 2022), and Chain-of-Thought training prompts in Flan 2022 (Chung et al., 2022). Both the Flan Collection and OPT-IML contain most tasks represented in prior collections.2 Our work is positioned here, coalescing most of these collections (of collections) and their methods, as the strongest starting point for future open source work.
Footnote 2: Note that each work defines datasets, tasks, and task categories differently. For simplicity, we use their own definitions in Section 2.
**New Directions.** Concurrent and future work is beginning to explore two new directions: (a) expanding task diversity even more aggressively with synthetic data generation, particularly in creative, and open-ended dialogue (Wang et al., 2022b; Honovich et al., 2022; Ye et al., 2022; Gupta et al., 2022), and (b) offering human feedback signals on model responses (Ouyang et al., 2022; Glaese et al., 2022; Bai et al., 2022a; Nakano et al., 2021; Bai et al., 2022b). We view most of these new directions as likely additive to a foundation of instruction tuning methods.
**Tuning with Human Feedback.** Instruction tuning on human feedback has demonstrated strong results on open-ended tasks, but at the expense of performance on a wide array of more traditional NLP tasks (Ouyang et al., 2022; Glaese et al., 2022; Bai et al., 2022a; Nakano et al., 2021). (See Ouyang et al. (2022)'s discussion of the "alignment tax".) Our work focuses specifically on instruction generalization, without human feedback, for two reasons. First, human feedback datasets are far less publicly available than instruction tuning datasets (and may be model-specific). Second, by itself, instruction generalization shows great promise in enhancing human preferred responses on open-ended tasks, as well as improving traditional NLP metrics (Chung et al., 2022). The extent of obtainable progress _without_ expensive human response demonstrations or ratings remains an open question, and an important pursuit to narrow the gap between public and non-public research.
**The Importance of Open Source.** High-profile research is increasingly driven by non-public data, as in the case of GPT-3 and others (Ouyang et al., 2022; Glaese et al., 2022). The inaccessibility of these resources inhibits the research community's ability to analyze and improve these methods in the public domain. We narrow our purview to open source and accessible data collections, motivated by the goal of democratizing accessibility to research.
## 3 Flan 2022 Instruction Tuning Experiments
Recent research has yet to coalesce around a unified set of techniques, with different tasks, model sizes, and target input formats all represented. We open source a new collection, first introduced in Chung et al. (2022), denoted "Flan 2022", which combines Flan 2021, P3++3, Super-Natural Instructions, with some additional reasoning, dialog, and program synthesis datasets. We defer to Chung et al. (2022) for details of templatization and collection; and in this work we take a deeper look at key methodological improvements and compare the collection on equivalent model sizes to existing collections.
Footnote 3: “P3++” is our notation for all datasets in the Public Pool of Prompts (P3): [https://huggingface.co/datasets/bigscience/P3](https://huggingface.co/datasets/bigscience/P3)
In this section, we evaluate the design decisions in Flan and discuss four in particular that yield strong improvements to the instruction tuning recipe. These design components, outlined in Section 2, are: **(I)** using mixed zero-shot, few-shot, and Chain-of-Thought templates at training (Section 3.2), **(II)** scaling T5-sized models to 1800+ tasks (Section 3.3), **(III)** enriching tasks with input inversion (Section 3.4), and **(IV)** balancing these task mixtures (Section 3.5). In Section 3.1, we begin by measuring the value of each component and compare the final model against alternative instruction tuning collections (and their methods).
**Experimental Setup.** We finetune on the prefix language model adapted T5-LM (Lester et al., 2021), using the XL (3B) size for all models for consistency, unless otherwise stated. While other sizes of Flan-T5 are available, we felt XL was appropriately sized to run large-scale systematic ablations, while being sufficiently large to draw general conclusions. We evaluate on (a) a suite of 8 "Held-In" tasks represented within the 1800+ training task collection (4 question answering and 4 natural language inference validation sets), (b) Chain-of-Thought (CoT) tasks (5 validation sets), and (c) the MMLU (Hendrycks et al., 2020) and BBH (Suzgun et al., 2022) benchmarks as our set of "Held-Out" tasks, as they are not included as part of Flan 2022 finetuning. The Massive Multitask Language Understanding benchmark (MMLU) broadly tests reasoning and knowledge capacity across 57 tasks in the sciences, social sciences, humanities, business, health, among other subjects. BIG-Bench Hard (BBH) includes 23 challenging tasks from BIG-Bench (Srivastava et al., 2022) where PaLM under-performs human raters. In our ablations, we also evaluate BBH with Chain-of-Thought inputs, following Chung et al. (2022). Additional finetuning and evaluation details are provided in Appendix A.
### Ablation Studies
Table 1 summarizes the mean contribution to Held-In, Held-Out, and Chain-of-Thought tasks, obtained by individually removing methods: mixture weight balancing ("- Mixture Balancing"), Chain-of-Thought tasks ("- CoT"), mixed prompt settings ("- Few Shot Templates"), and input inversion ("- Input Inversion"). Flan-T5 XL leverages all four of these methods together. We also finetune T5-XL-LM on other collections, including Flan 2021, P3++, and Super-Natural Instructions for comparison.
Each of the ablated components of Flan contributes improvements to different metrics: Chain-of-Thought training to Chain-of-Thought evaluation, input inversion to Held-Out evaluations (MMLU and BBH), few-shot prompt training to few-shot evaluations, and mixture balancing to all metrics.
As compared to T5-XL models trained on alternative instruction tuning collections (and their methods), Flan outperforms in almost every setting. While previous collections are tuned specifically to zero-shot prompts, Flan-T5 XL is tuned for either zero- or few-shot prompts. This yields performance margins of +3-10% for most of the zero-shot settings, and margins of 8-17% for the few-shot settings. Most impressively, Flan 2022 outperforms OPT-IML-Max's much larger (10x) 30B and (58x) 175B models. Next, we isolate some of Flan 2022's ablated methods individually, to examine the benefits of each.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Model & Held-In & CoT & MMLU & BBH & BBH-CoT \\ \hline T5-XL Flan 2022 & **73.8** / **74.8** & 35.8 / **34.1** & **50.3** / **52.4** & 26.2 / **39.3** & **33.9** / **35.2** \\ \hline - CoT & 73.3 / 73.2 & 28.8 / 24.6 & 47.5 / 46.9 & 18.2 / 30.0 & 18.2 / 12.0 \\ - Input Inversion & **73.8** / 74.1 & 32.2 / 23.5 & 41.7 / 41.2 & 18.4 / 24.2 & 15.7 / 13.0 \\ - Mixture Balancing & 71.2 / 73.1 & 32.3 / 30.5 & 45.4 / 45.8 & 15.1 / 24.3 & 13.8 / 15.4 \\ - Few Shot Templates & 72.5 / 62.2 & **38.9** / 28.6 & 47.3 / 38.7 & 27.6 / 30.8 & 18.6 / 23.3 \\ \hline T5-XL Flan 2021 & 68.4 / 56.3 & 24.6 / 22.7 & 41.4 / 34.8 & **28.1** / 28.3 & 26.0 / 26.9 \\ T5-XL P3++ & 70.5 / 62.8 & 25.6 / 25.6 & 46.1 / 34.1 & 26.0 / 30.8 & 23.4 / 26.1 \\ T5-XL Super-Natural Inst. & 50.3 / 42.2 & 13.8 / 14.3 & 35.6 / 31.1 & 10.4 / 15.6 & 8.0 / 12.5 \\ GLM-130B\({}^{\ddagger}\) & - & - & - / 44.8 & - & - \\ OPT-IML-Max 30B\({}^{\ddagger}\) & - & - & 46.3 / 43.2 & - / 30.9 & - \\ OPT-IML-Max 175B\({}^{\ddagger}\) & - & - & 49.1 / 47.1 & - / 35.7 & - \\ \hline Flan 2022 - Next Best T5-XL & +3.3 / +12 & +10.2 / +8.5 & +4.2 / +17.6 & -1.9 / +8.5 & +7.9 / +8.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Method Ablations (top)** show the importance of each method for Flan-T5 XL. **Collection Ablations (bottom)** evaluate Flan-T5 XL against T5-XL finetuned on other instruction tuning collections: FLAN 2021, P3++, and Super-Natural Instructions. **Flan 2022 - Next Best T5-XL** shows the improvement of Flan-T5 XL over the next best T5-XL (comparatively sized) finetuned on another collection. Metrics are reported in both zero-shot / few-shot settings across Held-In, Chain-of-Thought, and Held-Out (MMLU, BBH) tasks. \({}^{\ddagger}\) We also include the results reported by OPT-IML (Iyer et al., 2022) and GLM-130B (Zeng et al., 2022).
### Training with Mixed Prompt Settings
Prior work has shown a wide variety of input templates per task can improve performance. However, separate from the wording of the instruction template, these prior LLMs mostly tune with template sets _targeted to a single prompt setting_: for zero-shot prompting (Wei et al., 2021; Sanh et al., 2021; Aghajanyan et al., 2021; Aribandi et al., 2021) or for few-shot prompting (Min et al., 2022; Wang et al., 2022c).
An underappreciated design decision in InstructGPT (Ouyang et al., 2022) was to mix training templates for each of these prompt settings, rather than target a single setting. However, since Ouyang et al. (2022) do not examine this choice, we expected a performance trade-off in finetuning for zero-shot or few-shot prompting performance - particularly for smaller models. Instead, we find training with mixed zero- and few-shot prompts significantly improves performance in **both** settings - most surprisingly, even for models with only 3B parameters.
Figure 3 shows (1) adding as little as 5% few-shot training templates can dramatically improve zero-shot performance, and (2) adding 10%+ of zero-shot data improves few-shot performance too. Both Held-In and Held-Out tasks peak anywhere between 10-90% of few-shot data, but this range is consistently higher than training with only one prompt setting.
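A minimal sketch of this mixing strategy is shown below. The example schema (`inputs`/`target` keys) and the helper names are hypothetical, not the actual Flan templatization code; the point is simply that a fraction `few_shot_rate` of examples is rendered with solved exemplars prepended, while the rest are rendered zero-shot.

```python
import random

def render_zero_shot(example, template):
    """Hypothetical helper: render instruction + input only."""
    return template.format(**example)

def render_few_shot(example, template, exemplars, k=3):
    """Hypothetical helper: prepend k solved exemplars to the query."""
    k = min(k, len(exemplars))
    shots = "\n\n".join(template.format(**e) + " " + e["target"]
                        for e in random.sample(exemplars, k))
    return shots + "\n\n" + template.format(**example)

def build_mixture(examples, template, few_shot_rate=0.1):
    """Render a fraction `few_shot_rate` of examples in few-shot format;
    even ~10% few-shot data improved zero-shot results in our ablations."""
    rendered = []
    for ex in examples:
        if random.random() < few_shot_rate:
            others = [e for e in examples if e is not ex]
            rendered.append(render_few_shot(ex, template, others))
        else:
            rendered.append(render_zero_shot(ex, template))
    return rendered
```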
### Scaling Small Models to 1.8k+ Tasks
The most recent and concurrent publicly available instruction tuning efforts, like Flan 2022, train on thousands of tasks (Wang et al., 2022c; Iyer et al., 2022), but operate on different task compositions and underlying training methods. To measure the impact of scaling model sizes and tasks for the Flan 2022 collection, we finetune T5-LM adapted models (Small, Base, Large, XL, XXL) on randomly selected task subsets (8, 25, 50, 100, 200, 400, 800, all 1873). Every finetuning run is guaranteed to include the Held-In tasks, so we can estimate how task scaling impacts the model's capacity to maintain performance on a given task it has already seen.
Figure 4 demonstrates that both Held-In and Held-Out tasks appear to benefit from adding hundreds of finetuning tasks. Held-in task evaluations peak around 200 total tasks, and diminish in performance as more tasks are added, though larger models peak later and diminish less. Held-out task performance increases log-linearly with the number of tasks, achieving the highest performances with all 1836 tasks.
Figure 3: **Training jointly with zero-shot and few-shot prompt templates improves performance on both Held-In and Held-Out tasks. The stars indicate the peak performance in each setting.**
Surprisingly, only T5-Small appears to reach its peak Held-Out task performance before 1836 tasks, while larger model sizes continue to improve. These results suggest (a) even T5-Base may not have exhausted its capacity with thousands of tasks, and (b) the largest LMs could benefit from thousands more tasks for Held-In and Held-Out task performance.
One necessary assumption of this analysis is that all tasks are defined and counted equally. Section 3.5 demonstrates that not all task sources are equally beneficial to training, and the model performance may saturate from too many tasks from one source (e.g., Super-Natural Instructions). We would caution against concluding that task scaling beyond 1800 tasks translates to increased returns without also paying attention to task diversity and quality.
### Task Enrichment with Input Inversion
Prior instruction tuning work has enriched the diversity of its tasks by inverting the \((x,y)\) input-output pairs in supervised tasks--referred to as "prompts not intended for the original task" in P3 (Bach et al., 2022) or the "noisy channel" in MetaICL (Min et al., 2022). For example, a dataset may be originally designed to evaluate whether, given a question \(x\), a model can answer \(y\). Input inversion instead gives a model the answer \(y\) and trains it to generate the question \(x\). This is an easy method to enrich the task variety given a limited set of data sources. However, it isn't clear that this method remains helpful when 100s of unique data sources and 1000s of tasks are already available.
To assess this, we enrich our mixtures with input inverted tasks (details and examples in Appendix B) and measure the effect. In Table 1 we find this is not beneficial for Held-In performance, but strongly beneficial for Held-Out performance. These benefits invigorate the prospect of data augmentation techniques for LLM finetuning, which had previously been shown to have diminishing returns the longer models are pretrained (Longpre et al., 2020).
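For concreteness, the snippet below sketches input inversion for a question-answering pair. The inversion template wording is illustrative and is not one of the actual Flan templates.

```python
def invert_example(example):
    """Turn a (question -> answer) pair into an (answer -> question) task."""
    return {
        "inputs": f"Write a question to which the answer is: {example['target']}",
        "target": example["inputs"],
    }

qa = {"inputs": "What is the capital of France?", "target": "Paris"}
print(invert_example(qa))
# {'inputs': 'Write a question to which the answer is: Paris',
#  'target': 'What is the capital of France?'}
```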
### Balancing Data Sources
Scaling architecture size and the number of tasks are effective, but our results suggest the mixture weighting deserves as much attention to optimize results. To converge on a balanced weighting, we omit different sets of task sources, one at a time (Flan 2021, T0-SF, Super-Natural Instructions, Chain-of-Thought, Dialog, and
Figure 4: **Performance Scaling Laws for the number of finetuning tasks and model sizes. Held-In performance (left) and Held-Out MMLU performance (right) are shown. The gold star indicates the peak performance for that model size.**
Program Synthesis), and rank their contributions on the MMLU benchmark.4
Footnote 4: Following Chung et al. (2022) we refer to the subset of P3++ that is not in Flan 2021 as T0-SF (SF stands for “sans Flan”).
As shown in Table 2, Flan 2021 and T0-SF are among the most beneficial mixtures, followed by Super-Natural Instructions and Chain-of-Thought, with Dialog and Program Synthesis last. These findings are corroborated by Iyer et al. (2022) who extensively test data mixing proportions, and also determine their Flan 2021, T0-SF, and T5 mixtures are the most broadly beneficial. Additionally, they find Super-Natural Instructions has limited scaling benefits on Held-Out task performance, which they relate to its unique input format and instruction design. Notably, Chain-of-thought finetuning appears beneficial across all our evaluation settings, especially considering they contain far fewer tasks than Flan 2021, T0-SF or Natural Instructions.
We used these findings to significantly narrow the mixture-weight search space, and relied on practitioner intuition from there. This strategy is simple but effective, as shown in Table 1, but leaves ample room for more sophisticated future work.
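A sketch of the leave-one-out procedure behind Table 2 is given below; `train_and_eval` is a placeholder for the full finetuning and MMLU evaluation pipeline (not shown), and the renormalization step is one simple way to redistribute the weight of the omitted source.

```python
SOURCES = ["flan2021", "t0_sf", "super_natural_inst",
           "cot", "dialog", "program_synthesis"]

def leave_one_out(train_and_eval, weights):
    """Drop each task source in turn, renormalize the mixture weights,
    retrain, and compare MMLU scores against the full mixture."""
    baseline = train_and_eval(weights)
    for src in SOURCES:
        w = {k: (0.0 if k == src else v) for k, v in weights.items()}
        total = sum(w.values())
        w = {k: v / total for k, v in w.items()}
        print(f"- {src}: MMLU {train_and_eval(w):.1f} (all: {baseline:.1f})")
```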
### Discussion
OPT-IML (Iyer et al., 2022) presents the closest comparison to this work, including a similar collection of tasks, examples and techniques. However, while their used tasks are all publicly sourced, their collection, with templates, processing, and example mixing, is not released, and as a result cannot be easily compared. Iyer et al. (2022) report that Flan-T5-XL (3B) and XXL (11B) outperforms OPT-IML-Max 175B on both MMLU and BBH. As they discuss, these differences may arise from any combination of pre-training, model architecture, and instruction tuning. Model architecture and pretraining before instruction tuning can play a significant role (Wang et al., 2022). But there are many other details in instruction tuning that may vary between Flan 2022 and OPT-IML. Likely candidates are: example templatization, how the mixed input prompting procedures are used at training, and task composition.
How significant is each of these differences? While OPT-IML contains more tasks than Flan 2022, we estimate approximately \(94\%(2067/2207)\) are also used in the Flan 2022 collection5, and very few tasks in Flan 2022 are not contained in some format in OPT-IML. This suggests the overall difference in task diversity is not significant when using a shared definition of "task". Task mixture rates also emphasize similar sources, including Flan 2021 (46% vs 20%), PromptSource/P3 (28% vs 45%), and Super-Natural Instructions (25% vs 25%), for Flan 2022 and OPT-IML respectively.6 OPT-IML's other collections (Crossfit, ExMix, T5, U-SKG)
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Train Mixtures & \multicolumn{3}{c}{Metrics} \\ & Held-In & CoT & MMLU \\ \hline All (Equal) & 64.9 & 41.4 & 47.3 \\ \hline All - Flan 2021 & 55.3 & 38.6 & 45.7 \\ All - T0-SF & 63.2 & **43.4** & 44.7 \\ All - Super-Nat. Inst. & 65.9 & 42.2 & 46.8 \\ All - CoT & 65.6 & 29.1 & 46.8 \\ All - Prog. Synth. & 66.9 & 42.3 & 46.8 \\ All - Dialog & 65.4 & 40.3 & 47.1 \\ \hline All (Weighted) & **66.4** & 40.1 & **48.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Subsets of tasks are left out from an equally weighted mixture to measure their importance. **T0-SF and Flan 2021 finetuning are most important for MMLU, while Chain-of-Thought (CoT) finetuning is most important for Chain-of-Thought evaluation.**
are not weighted significantly: 4%, 2%, 2%, 2% respectively.
We believe example templatization and the mixed prompt formats may pose the largest differences with OPT-IML's instruction tuning. Our template repository was significantly updated from Flan 2021, adding variety not just in instructions, but also along other dimensions. For instance, the templatization procedure varies where the instruction is placed (before or after few-shot prompts), the spacing and separators between few-shot and Chain-of-Thought prompts, and the formatting permutations of answer options (and their targets) for multiple-choice examples, which sometimes includes and sometimes excludes answer options in the inputs or exemplars. While we do not have dedicated experiments comparing many iterations of development, we found these procedures dramatically augment input variety and yielded repeated performance improvements. Our example templatizing procedure is open sourced for inspection and future work.
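A toy sketch of the kind of variety described above (instruction placement, separators, option permutation, optional option lists); the strings and probabilities are illustrative, not the open-sourced templatizer's:

```python
import random

def templatize(question: str, options: list[str], answer: str,
               rng: random.Random) -> dict:
    """Render one multiple-choice example with randomized formatting."""
    sep = rng.choice(["\n", "\n\n", " "])
    opts = list(options)
    rng.shuffle(opts)                       # permute the answer options
    body = question
    if rng.random() < 0.8:                  # sometimes omit options entirely
        letters = "ABCDEFGH"
        body += sep + sep.join(f"({letters[i]}) {o}" for i, o in enumerate(opts))
    instruction = "Answer the following question."
    if rng.random() < 0.5:                  # instruction before or after
        prompt = instruction + sep + body
    else:
        prompt = body + sep + instruction
    return {"inputs": prompt, "targets": answer}

rng = random.Random(0)
print(templatize("What is 2 + 2?", ["3", "4", "5"], "4", rng))
```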
## 4 Instruction Tuning Enhances Single-Task Finetuning
In applied settings, machine learning practitioners deploy NLP models finetuned (FT) specifically for a single target task, usually where finetuning data is already available. While prior work has shown the benefits of intermediate finetuning (Pruksachatkun et al., 2020; Vu et al., 2020) or multi-task finetuning (Aghajanyan et al., 2021; Aribandi et al., 2021) for downstream tasks, this has not been studied extensively for instruction-tuned models.
We evaluate Flan 2022 instruction tuning as an intermediary step before single target finetuning, to understand if Flan-T5 would serve as a better starting checkpoint for applied practitioners. We evaluate three settings, shown in Figure 5: finetuning T5 directly on the target task as the conventional baseline (blue bars), using Flan-T5 without any further finetuning (beige bars), and finetuning Flan-T5 further on the target task (red bars).
Figure 5: **Flan-T5 Outperforms T5 on Single-Task Finetuning. We compare single-task finetuned T5, single-task finetuned Flan-T5, and Flan-T5 without any further finetuning.**
**Pareto Improvements to Single Task Finetuning** For both sets of Held-In and Held-Out tasks examined, finetuning Flan-T5 offers a Pareto improvement over finetuning T5 directly. In some instances, usually where finetuning data is limited for a task, Flan-T5 without further finetuning outperforms T5 with task finetuning.
**Faster Convergence & Computational Benefits** Using Flan-T5 as a starting checkpoint has an added benefit in training efficiency. As demonstrated in Figure 6, Flan-T5 converges much more quickly than T5 during single target finetuning, while also peaking at higher accuracies. These convergence results suggest there are strong green-AI incentives for the NLP community to adopt instruction-tuned models, like Flan-T5, for single-task finetuning, rather than conventional non-instruction-tuned models. While instruction tuning is more computationally expensive than single-task finetuning, it is a one-time cost. In contrast, pretrained models that require extensive finetuning become more costly when aggregating over many millions of additional training steps (Wu et al., 2022; Bommasani et al., 2021). Instruction-tuned models offer a promising solution to significantly reduce the amount of finetuning steps across a wide swathe of tasks, if they are adopted as a new standard starting point for single-task finetuning.
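In practice, adopting an instruction-tuned starting checkpoint is a one-line change for practitioners. Below is a minimal sketch with Hugging Face Transformers (the toy dataset and hyperparameters are placeholders, not the paper's setup):

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "google/flan-t5-small"   # larger sizes would match the paper's scale
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Toy single-task data; replace with the real target-task training set.
raw = Dataset.from_dict({
    "inputs": ["Is the review positive? Review: great movie, loved it."],
    "targets": ["yes"],
})

def tokenize(batch):
    enc = tokenizer(batch["inputs"], truncation=True)
    enc["labels"] = tokenizer(text_target=batch["targets"], truncation=True)["input_ids"]
    return enc

train = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="flan_t5_single_task",
                                  num_train_epochs=1,
                                  per_device_train_batch_size=8,
                                  learning_rate=1e-4),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```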
## 5 Related Work
**Large Language Models** As the foundation of instruction tuning, the practice of pretraining one general-purpose language representation that is useful for multiple downstream tasks has a long tradition that goes back at least to Mikolov et al. (2013) and Dai and Le (2015). In 2018, Peters et al. (2018) and Devlin et al. (2019) cemented the paradigm of pretraining a large model on a large unsupervised corpus, and the field of NLP quickly converged to using these models, which substantially outperform the prior art of non-pretrained task-specific LSTM models on all tasks. However, the dominant way to access the high-quality syntactic and semantic knowledge encoded in pretrained models was not to prompt them with instructions, but to train an additional task-specific linear layer that maps the model activations into numerical class labels. A short year later, Radford et al. (2019), Raffel et al. (2020), and Lewis et al. (2020) popularized the notion that downstream tasks--and multiple tasks--can be jointly learned by directly using the pretrained LM head to generate the answers in natural language (cf. task-specific numerical class labels); the task-general nature of these generative models became the precursor to many multitask transfer learning studies (McCann et al., 2018; Khashabi et al., 2020; Ye et al., 2021; Vu et al., 2020), which in turn led to the first wave of instruction tuning as described in Section 2.
The continuing advancement in research on the pretraining corpora, architectures and pretraining objectives of LMs also has a large impact on instruction tuning. As of 2022, decoder-only left-to-right causal Transformers dominate the market of models larger than 100B (Brown et al., 2020; Thoppilan et al., 2022; Rae et al., 2021;
Figure 6: **Flan-T5 convergences faster than T5 on single-task finetuning for each of 5 Held-Out tasks from Flan finetuning.**
Chowdhery et al., 2022; Hoffmann et al., 2022), and all models of such size class with fully public model parameters are decoder-only (Wang and Komatsuzaki, 2021; Le Scao et al., 2022; Zhang et al., 2022), a choice often due to better hardware and software framework support. However, Raffel et al. (2020), Lewis et al. (2020), and Tay et al. (2022) have consistently found that left-to-right causal language modeling is a suboptimal objective, while Tay et al. (2022) and Wang et al. (2022) in particular showed that a mixture of non-sequential objectives is much superior for downstream tasks with zero-shot and few-shot prompting. An additional factor which remains under-explored is the relationship between pretraining corpora, instruction tuning, and downstream abilities. Typically, public models are all trained on one of a few public corpora: C4 (Raffel et al., 2020), The Pile (Gao et al., 2020), or ROOTs (Laurencon et al., 2022).
**Instruction Tuning** In Section 2 we outline major developments in instruction tuning. Other important developments include the prospect of complementing or replacing few-shot in-context learning (currently the predominant method of evaluating pretrained and instruction-tuned models) with parameter-efficient tuning. As standard finetuning of models larger than 100B requires a large number of accelerators with the right interconnects, often too expensive even for many industry labs, parameter-efficient tuning (a.k.a. continuous or soft "prompt tuning") shows that only updating a small subset of model parameters can reach comparable performance to fully tuning all model parameters (Lester et al., 2021; Vu et al., 2022; Hu et al., 2021; see He et al., 2022 for a detailed analysis). Notably, Liu et al. (2022) show that, due to the long sequence length of few-shot ICL and the fact that the few-shot exemplars must be repeatedly inferenced when evaluating every example, parameter-efficient tuning can be computationally cheaper and higher performing than in-context learning. Further, Liu et al. (2022), Vu et al. (2022), Wei et al. (2021), and Singhal et al. (2022) collectively show that both single-task and multi-task parameter-efficient tuning can be productively combined with instruction tuning, either before or after regular full-model instruction tuning. This line of work makes it easy for other researchers to build on top of a general-domain instruction-tuned model, and collect a custom instruction-tuning mixture for their use, e.g., with multiple modalities (Ahn et al., 2022; Huang et al., 2022; Xu et al., 2022) or special domains such as science and medicine (Lewkowycz et al., 2022; Singhal et al., 2022).
**Problems Addressed by Instruction Tuning & Alignment Techniques** Instruction tuning is part of a line of work designed to "align" language models with more useful objectives and human preferences. In the absence of such methods, language models are known to demonstrate toxic/harmful behaviour (Sheng et al., 2019; Liang et al., 2021; Wallace et al., 2019), generate non-factual information (Maynez et al., 2020; Longpre et al., 2021; Devaraj et al., 2022), and pose other challenges in deployment and evaluation (Zellers et al., 2019; McGuffie and Newhouse, 2020; Talat et al., 2022). Analyzing, evaluating and mitigating these problems poses a promising direction for future work (Gao et al., 2022; Ganguli et al., 2022). Instruction tuning warrants greater investigation, as it has already shown itself to be an encouraging remedy for reducing NLP bias metrics, as shown in Chung et al. (2022).
## 6 Conclusions
The new Flan 2022 instruction tuning collection unifies the most popular prior public collections and their methods, while adding new templates and simple improvements like training with mixed prompt settings. The resulting collection outperforms Flan 2021, P3++, Super-Natural Instructions, and OPT-IML-Max 175B on Held-In QA, NLI, and Chain-of-Thought tasks, and Held-Out MMLU and BBH, often by large margins. Results suggest this new collection serves as a more competitive starting point for researchers and practitioners interested in both generalizing to new instructions, or finetuning on a single new task.
## Acknowledgements
We would like to thank Ed H Chi, Xinyun Chen, and Colin Raffel for their advice and feedback on the paper. |
2309.13797 | q-Overlaps in the Random Exact Cover Problem | We prove upper and lower bounds for the threshold of the q-overlap-k-Exact
cover problem.
These results are motivated by the one-step replica symmetry breaking
approach of Statistical Physics, and the hope of using an approach based on
that of Mezard et al. (2005) to rigorously prove that for some values of the
order parameter the overlap distribution of k-Exact Cover has discontinuous
support. | Gabriel Istrate, Romeo Negrea | 2023-09-25T01:13:53Z | http://arxiv.org/abs/2309.13797v1 | # \(q\)-Overlaps in the Random Exact Cover Problem
###### Abstract
We prove lower and upper bounds for the threshold of the following decision problem: given \(q\in(0,1)\) and \(c>0\), what is the probability that a random instance of the \(k\)-Exact Cover problem [11] has two solutions of overlap \(qn\pm o(n)\)?
These results are motivated by the _one-step replica symmetry breaking_ approach of Statistical Physics, and the hope of using an approach based on that of [14] to prove that for some values of the order parameter the overlap distribution of \(k\)-Exact Cover has discontinuous support.
Keywords: exact cover, overlap, probabilistic method.
## 1 Introduction
The study of _phase transitions in Combinatorial Optimization problems_[18], [4] (see also [5, 8, 6, 16]) has recently brought to prominence the geometric structure of the solution space of a combinatorial problem. Methods such as the _cavity method_ and assumptions such as _replica symmetry_ and _one step replica symmetry breaking_ make significant predictions on the geometry of the solution space that are a source of inspiration (and a challenge) for rigorous work.
A remarkable advance in this area is due to Mezard et al. [14]. This paper has provided rigorous evidence that for the random \(k\)-satisfiability problem (with sufficiently large \(k\)) the intuitions concerning the geometry of the solution space provided by the 1-RSB approach are correct. The evidence is based on the support of the overlap distribution, shown to be discontinuous via a study of threshold properties for the \(q\)-overlap versions of \(k\)-SAT.
In this paper we follow an approach based on the same idea, studying the overlap distribution of a different optimization problem, the _random \(k\)-Exact Cover_ problem. The phase transition in this problem has been studied in [10]. Zdeborova et al. [19],[13] have applied nonrigorous methods from Statistical Physics (the cavity approach) and have suggested that the _1-step Replica Symmetry Breaking_ assumption is valid. This motivates us to study the problem \(q\)-overlap \(k\)-Exact Cover (defined below), and prove lower and upper bounds on its satisfiability threshold.
Our ultimate goal would be to show that for a certain range of the order parameter the \(k\)-Exact problem has a discontinuous overlap distribution. However, in this paper we cannot accomplish this goal, as the upper and lower bounds provided are too crude to guarantee this. Still, we believe that the insights provided by our partial result may be useful towards eventually obtaining such bounds.
## 2 Preliminaries
We assume knowledge of the method of modeling the trajectories of algorithms on random inputs using difference/differential equations and the principle of deferred decisions. This is by now standard material in textbooks [17] and surveys [2]. We will also assume knowledge of somewhat less popular techniques in this area, such as the "lazy server" approach [2].
**Definition 1**.: _Let \(\mathcal{D}=\{0,1,\ldots,t-1\}\), \(t\geq 2\) be a fixed set. Consider the set of all \(2^{t^{k}}-1\) potential nonempty binary constraints on \(k\) variables \(X_{1},\ldots,X_{k}\). We fix a set of constraints \(\mathcal{C}\) and define the random model CSP\((\mathcal{C})\). A random formula from CSP\({}_{n,m}(\mathcal{C})\) is specified by the following procedure: (i) \(n\) is the number of variables; (ii) we generate uniformly at random, **with replacement**, \(m\) clauses from all the instantiations of constraints in \(\mathcal{C}\) on the \(n\) variables._
_When all constraints in \(\mathcal{C}\) are boolean, we write SAT\((\mathcal{C})\) instead of CSP\((\mathcal{C})\)._
The particular (CSP) problem we are dealing with in this paper is:
**Definition 2**.: _An instance \(\Phi\) of the \(k\)-Exact Cover is specified by a set of boolean variables \(V=\{x_{1},\ldots,x_{n}\}\) and a family of \(m\geq 1\) subsets of size \(k\) (called clauses) of \(V\). Instance \(\Phi\) is satisfiable if there is a truth assignment \(A\) of variables in \(V\) that makes exactly one variable in each clause evaluate to TRUE._
**Definition 3**.: _The Hamming distance between two truth assignments \(A\) and \(B\), on \(n\) variables is \(d_{A,B}=\frac{n}{2}-\frac{1}{2}\sum_{i=1}^{n}A(x_{i})B(x_{i})\). The overlap of truth assignments \(A\) and \(B\) is the fraction of variables on which the two assignments coincide, that is_
\[\text{overlap}(A,B)=\frac{|\{i\,:\,A(x_{i})=B(x_{i})\}|}{n}.\]
**Definition 4**.: _A set of constraints \(\mathcal{C}\) is interesting if there exist constraints \(C_{0},C_{1}\in\mathcal{C}\) with \(C_{0}(\overline{0})=C_{1}(\overline{1})=0\), where \(\overline{0},\overline{1}\) are the "all zeros" ("all ones") assignments. Constraint \(C_{2}\) is an implicate of \(C_{1}\) iff every satisfying assignment for \(C_{1}\) satisfies \(C_{2}\). A boolean constraint \(C\) strongly depends on a literal if it has a unit clause as an implicate. A boolean constraint \(C\) strongly depends on a 2-XOR relation if \(\exists i,j\in\{1,\ldots,k\}\) such that the constraint "\(x_{i}\neq x_{j}\)" is an implicate of \(C\)._
In the following definition \(\varepsilon(n)\) is a function whose exact expression is unimportant (in that we get the same results), as long as \(n^{1/2}=o(\varepsilon(n))\), \(\varepsilon(n)=o(n)\):
**Definition 5**.: _Let \(\mathcal{D}=\{0,1,\ldots,t-1\}\), \(t\geq 2\) be a fixed set. Let \(q\) be a real number in the range [0,1]. The problem \(q\)-overlap-CSP\((\mathcal{C})\) is the decision problem specified as follows: (i) The input is an instance \(\Phi\) of CSP\({}_{n,m}(\mathcal{C})\); (ii) The decision problem is whether \(\Phi\) has two satisfying assignments \(A,B\) such that \(\text{overlap}(A,B)\in[q-\varepsilon(n)n^{-1},q+\varepsilon(n)n^{-1}]\). The random model for \(q\)-overlap-CSP\((\mathcal{C})\) is simply the one for CSP\({}_{n,m}(\mathcal{C})\)._
This definition particularizes to our problem as follows:
**Definition 6**.: _Let \(q\in(0,1)\). The \(q\)-overlap \(k\)-Exact Cover is a decision problem specified as follows:_
**INPUT:** _an instance \(F\) of \(k\)-Exact Cover with \(n\) variables._
**DECIDE:** _whether \(F\) has two assignments \(A\) and \(B\) such that_
\[\text{overlap}(A,B)\in[q-\varepsilon(n)n^{-1},q+\varepsilon(n)n^{-1}]. \tag{1}\]
_We refer to a pair \((A,B)\) as in equation (1) as satisfying assignments of overlap approximately \(q\)._
If \(A,B\) are two satisfying assignments and \(i,j\in\{0,1\}\) we will use notation \(A=i,B=j\) (\(A=B=i\), when \(i=j\)) as a shorthand for \(\{x:A(x)=i,B(x)=j\}\).
**Definition 7**.: _Let \(l\geq 1\) be an integer and let \(A,B\) be two satisfying assignments of an instance \(\Phi\) of \(k\)-Exact Cover. The pair \((A,B)\) is called \(l\)-connected if there exists a sequence of satisfying assignments \(A_{0},A_{1},\ldots,A_{l}\), with \(A_{0}=A\) and \(A_{l}=B\), such that \(A_{i}\) and \(A_{i+1}\) are at Hamming distance at most \(l\)._
**Definition 8**.: _For \(k\geq 3\), \(q\in(0,1)\) define_
\[q_{k}=\frac{\sqrt{(k-1)(k-2)}}{2+\sqrt{(k-1)(k-2)}}, \tag{2}\]
_and_
\[\lambda_{q,k}:=\left\{\begin{array}{ll}\frac{(k-1)q+\sqrt{(k-1)^{2}q^{2}+k(k-2)(k-1)(1-q)^{2}}}{2k}&\mbox{if }q\in(q_{k},1),\\ q&\mbox{otherwise.}\end{array}\right. \tag{3}\]
Note that for \(q>q_{k}\) the expression for \(\lambda_{q,k}\) is the unique positive root of equation
\[\frac{k-2}{x}+\frac{(q-2x)}{(k-1)(\frac{1-q}{2})^{2}+x(q-x)}=0, \tag{4}\]
and is strictly less than \(q\). Also, \(\lambda_{q,k}>q/2\), since, by (3), \(\lambda_{q,k}>\frac{(k-1)q}{k}>q/2\).
**Definition 9**.: _For \(k\geq 3\), \(q\in(0,1)\) define \(F_{k,q}:(q/2,\lambda_{q,k})\rightarrow(0,\infty)\) by_
\[F_{k,q}(x)=\frac{\ln(\frac{x}{q-x})}{\frac{k-2}{x}+\frac{(q-2x)}{(k-1)(\frac{1 -q}{2})^{2}+x(q-x)}} \tag{5}\]
Note that \(F_{k,q}\) is well defined, monotonically increasing (the numerator is increasing, each term in the denominator is decreasing), and that \(\lim_{x\to q/2}F_{k,q}(x)=0\), \(\lim_{x\rightarrow\lambda_{q,k}}F_{k,q}(x)=\infty\). Thus function \(F_{k,q}\) is a bijection. Denote by \(G_{k,q}(x)\) its inverse.
## 3 Results
We first remark that
**Lemma 1**.: _For every \(k\geq 3\) and \(q\in(0,1)\) the problem \(q\)-overlap \(k\)-Exact Cover has a sharp threshold._
Proof.: The claim is a simple application of the main result in [6]. Indeed, in [6] we studied the existence of a sharp threshold for \(q\)-overlap versions of random constraint satisfaction problems. Previously in [5, 2], a characterization of CSP with a sharp threshold was given:
**Proposition 1**.: _Consider a generalized satisfiability problem SAT\((\mathcal{C})\) with \(\mathcal{C}\) interesting. (i) If some constraint in \(\mathcal{C}\) strongly depends on one literal then SAT\((\mathcal{C})\) has a coarse threshold; (ii) If some constraint in \(\mathcal{C}\) strongly depends on a 2XOR-relation then SAT\((\mathcal{C})\) has a coarse threshold; (iii) In all other cases SAT\((\mathcal{C})\) has a sharp threshold._
The following result (Theorem 8 in [6]) shows that under the same conditions as those in [5] the \(q\)-overlap versions also have a sharp threshold:
**Proposition 2**.: _Consider a generalized satisfiability problem SAT\((\mathcal{C})\) such that (i) \(\mathcal{C}\) is interesting (ii) No constraint in \(\mathcal{C}\) strongly depends on a literal; (iii) No constraint in \(\mathcal{C}\) strongly depends on a 2XOR-relation. Then for all values \(q\in(0,1]\) the problem \(q\)-overlap-SAT\((\mathcal{C})\) has a sharp threshold._
The conditions in Proposition 2 apply to the \(k\)-Exact Cover problem, which can be modeled as a CSP with a single \(k\)-ary constraint \(C_{k}(x_{1},x_{2},\ldots,x_{k})\) which requires that exactly one of \(x_{1},x_{2},\ldots,x_{k}\) be true. This is because constraint \(C_{k}\) is interesting, does not strongly depend on a literal and, for \(k\geq 3\), does not strongly depend on a 2-XOR relation.
Our main result gives lower and upper bounds on the location of this threshold:
**Theorem 1**.: _Let \(k\geq 3\) and let \(r_{up}(q,k)\) be the smallest \(r_{*}>0\) such that \(\forall r>r_{*}\)_
\[r\ln\bigl(P_{k}(G_{k,q}(r),(1-q)/2,(1-q)/2,q-G_{k,q}(r))\bigr)-G_{k,q}(r)\ln(G_{k,q}(r))-(q-G_{k,q}(r))\ln(q-G_{k,q}(r))-(1-q)\ln((1-q)/2)\leq 0.\]
_Also let_
\[r_{lb}(q)=\left\{\begin{array}{ll}\frac{1}{6}\bigg{[}\frac{1}{(1-q)^{2}}-1 \bigg{]}&\mbox{for $q<1-\frac{1}{\sqrt{2}}$},\\ \frac{1}{6}&\mbox{otherwise.}\end{array}\right. \tag{6}\]
_Then:_
1. _For_ \(r>r_{up}(q,k)\) _a random instance of_ \(q\)_-overlap_ \(k\)_-Exact Cover with_ \(n\) _variables and_ \(m=rn\) _clauses has, with probability_ \(1-o(1)\)_, no satisfying assignments of overlap approximately_ \(q\)_._
2. _For_ \(0<r<r_{lb}(q)\) _a random instance of_ \(q\)_-overlap_ \(3\)_-Exact Cover with_ \(n\) _variables and_ \(m=rn\) _clauses has, with probability_ \(1-o(1)\)_, two satisfying assignments of overlap approximately_ \(q\)_._
Given the non-explicit nature of \(r_{up}(q)\), the only way to interpret the lower and upper bounds given in Theorem 1 is via symbolic and numeric manipulation of the quantities in the equations defining \(r_{up}(q)\). A Mathematica notebook for this purpose is provided as [9]. The conclusion of such an analysis is that the bounds in Theorem 1 are too crude to imply the existence of a discontinuity in the overlap distribution of the \(k\)-Exact Cover problem.
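The bounds can also be explored without Mathematica. The sketch below (ours, not the notebook [9]) inverts \(F_{k,q}\) by bisection to obtain \(G_{k,q}\) and then locates the root of the left-hand side of the equation defining \(r_{up}(q,k)\); it assumes a single sign change in \(r\), which the numeric scans suggest but which is not proved here:

```python
import math

def lam(q, k):
    """lambda_{q,k} from (3) (with the case split as corrected above)."""
    s = math.sqrt((k - 1) * (k - 2))
    if q > s / (2 + s):  # q_k from (2): here the root of (4) lies below q
        disc = (k - 1) ** 2 * q ** 2 + k * (k - 2) * (k - 1) * (1 - q) ** 2
        return ((k - 1) * q + math.sqrt(disc)) / (2 * k)
    return q

def F(x, q, k):
    """F_{k,q}(x) from (5); increasing bijection from (q/2, lam) onto (0, oo)."""
    den = (k - 2) / x + (q - 2 * x) / ((k - 1) * ((1 - q) / 2) ** 2 + x * (q - x))
    return math.log(x / (q - x)) / den

def G(r, q, k):
    """G_{k,q}(r) = F^{-1}(r), computed by bisection."""
    lo, hi = q / 2, lam(q, k)
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid, q, k) < r else (lo, mid)
    return (lo + hi) / 2

def t_star(r, q, k):
    """Left-hand side of (18), i.e. t(alpha_{*,r}) with alpha_{*,r} = G(r)."""
    a, b = G(r, q, k), (1 - q) / 2
    P = a ** (k - 2) * k * (k - 1) * (b * b + a * (q - a) / (k - 1))  # (13)
    return (r * math.log(P) - a * math.log(a)
            - (q - a) * math.log(q - a) - (1 - q) * math.log(b))

def r_up(q, k, r_max=20.0):
    """Numeric estimate of r_up(q, k), assuming one sign change of t_star."""
    lo, hi = 1e-6, r_max
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if t_star(mid, q, k) > 0 else (lo, mid)
    return (lo + hi) / 2

print(r_up(0.5, 3))  # estimate of the upper bound at q = 1/2, k = 3
```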
## 4 Proof of the upper bound (Theorem 1 (a))
Let \(\Phi\) be a random instance of \(k\)-Exact Cover. Our proof relies on the following fundamental observation:
**Lemma 2**.: _Let \(A,B\) be two satisfying assignments, and let \(C\) be a clause of length \(k\) in \(\Phi\). Denote by \(c_{0},c_{1},c_{2},c_{3}\) the number of variables of \(C\) in the sets \(A=B=0\), \(A=0,B=1\), \(A=1,B=0\), \(A=B=1\) respectively. Clause \(C\) is satisfied by both \(A\) and \(B\) if and only if_
\[\left\{\begin{array}{ll}c_{0}=k-2,&c_{1}=c_{2}=1,&c_{3}=0\\ \mbox{or}&\\ c_{0}=k-1,&c_{1}=c_{2}=0,&c_{3}=1\end{array}\right. \tag{7}\]
Proof.: The conditions that both \(A\) and \(B\) satisfy \(C\) are written as
\[\left\{\begin{array}{ll}c_{0}+c_{1}=k-1,&c_{2}+c_{3}=1\\ c_{0}+c_{2}=k-1,&c_{1}+c_{3}=1,\end{array}\right. \tag{8}\]
a system whose solutions are those from equation (7).
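As a quick mechanical check of Lemma 2 (ours, not from the paper), one can enumerate all type vectors summing to \(k\) and confirm that the system (8) has exactly the solutions (7):

```python
from itertools import product

for k in range(3, 8):
    sols = [(c0, c1, c2, c3)
            for c0, c1, c2, c3 in product(range(k + 1), repeat=4)
            if c0 + c1 + c2 + c3 == k       # all k variables of the clause
            and c2 + c3 == 1                # exactly one variable true under A
            and c1 + c3 == 1]               # exactly one variable true under B
    assert sorted(sols) == [(k - 2, 1, 1, 0), (k - 1, 0, 0, 1)]
print("Lemma 2 verified for k = 3,...,7")
```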
An immediate consequence of Lemma 2 is that the probability that a pair of assignments satisfies a random instance of \(k\)-EC depends only on numbers \(c_{0},c_{1},c_{2},c_{3}\):
**Lemma 3**.: _Let \(c_{0},c_{1},c_{2},c_{3}\) be nonnegative integers with \(c_{0}+c_{1}+c_{2}+c_{3}=n\). Then_
\[Pr[A,B\models\Phi\mid|A=B=0|=c_{0},\ldots|A=B=1|=c_{3}]=P^{*}(c_{0},c_{1},c_{2}, c_{3})^{rn},\]
_where_
\[P^{*}(a,b,c,d)=\frac{\binom{a}{k-2}\binom{b}{1}\binom{c}{1}+\binom{a}{k-1}\binom{d}{1}}{\binom{n}{k}}=\frac{\binom{a}{k-2}}{\binom{n}{k}}\Big{[}bc+\frac{(a-k+2)}{(k-1)}d\Big{]} \tag{9}\]
Proof.: We will prove that the probability that \(A,B\) satisfy a particular clause of \(\Phi\) is \(P^{*}(c_{0},c_{1},c_{2},c_{3})\). The result follows since the formula \(\Phi\) is obtained by sampling independently, with replacement, \(rn\) clauses.
Indeed, the total number of clauses is \(\binom{n}{k}\). By Lemma 2, the number of clauses satisfied by both \(A\) and \(B\) is \(\binom{a}{k-2}\cdot\binom{b}{1}\cdot\binom{c}{1}+\binom{a}{k-1}\cdot\binom{d}{1}\). The first term counts the clauses with \(k-2\) variables in the set \(A=B=0\), one in the set \(A=0,B=1\) and one in the set \(A=1,B=0\) (so that exactly one variable of \(C\) is true in both \(A\) and \(B\)). The second term counts the second type of favorable clauses.
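The count in the proof can likewise be verified by brute force for small parameters (an illustration of ours):

```python
from itertools import combinations
from math import comb

a, b, c, d, k = 5, 3, 4, 2, 3
n = a + b + c + d
A = [0] * a + [0] * b + [1] * c + [1] * d   # blocks: A=B=0, A=0 B=1, A=1 B=0, A=B=1
B = [0] * a + [1] * b + [0] * c + [1] * d

brute = sum(1 for clause in combinations(range(n), k)
            if sum(A[i] for i in clause) == 1 and sum(B[i] for i in clause) == 1)
formula = comb(a, k - 2) * b * c + comb(a, k - 1) * d   # numerator of (9)
assert brute == formula
print(brute, formula)   # both count the clauses satisfied by A and B
```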
We will use Lemma 3 to derive an upper bound via the first moment method.
Indeed, let \(Z=Z(q,F)\) be a random variable defined as
\[Z(q,F)=\sum_{A,B}\delta[\,|n\cdot\text{overlap}(A,B)-nq|\leq\varepsilon(n)\,]\cdot\mathbf{1}_{\mathcal{S}(F)}(A)\cdot\mathbf{1}_{\mathcal{S}(F)}(B). \tag{10}\]
where \(F=F_{k}(n,rn)\) is a random formula on \(n\) variables with \(m=rn\) clauses of size \(k\), and \(\mathcal{S}(F)\) is the set of satisfying (Exact Cover) assignments of this formula.
Then:
\[E[Z(q,F)]=\sum_{A,B}\delta[\,|n\cdot\text{overlap}(A,B)-nq|\leq\varepsilon(n)\,]\cdot Pr[A,B\models F]. \tag{11}\]
For fixed values \(a,b,c,d\) there are \(\binom{n}{a,b,c,d}=\frac{n!}{a!\cdot b!\cdot c!\cdot d!}\) pairs of assignments of type \((a,b,c,d)\). If we denote \(\lambda\stackrel{{ not}}{{=}}a+d=nq\pm\epsilon(n)\) and \(\mu\stackrel{{ not}}{{=}}b+c=n-\lambda\) then the system
\[\left\{\begin{array}{c}a+d=\lambda\\ b+c=n-\lambda\end{array}\right.\]
has at most \((\lambda+1)(n-\lambda+1)\) solutions in the set of nonnegative integers. Therefore, the number of quadruples \((a,b,c,d)\) in the sum \(E[Z]\) is at most
\[\sum_{\lambda=nq-\epsilon(n)}^{nq+\epsilon(n)}(\lambda+1)(n-\lambda+1)=\frac {1}{3}(1+2\epsilon(n))(3-\epsilon(n)-\epsilon(n)^{2}+3n+3n^{2}q-3n^{2}q^{2}) \stackrel{{ def}}{{=}}M.\]
So
\[P[Z>0]\leq E[Z]\leq M\cdot\max_{(a,b,c,d)}\binom{n}{a,b,c,d}\cdot P^{*}(a,b,c, d)^{rn} \tag{12}\]
We will compute the maximum on the right-hand side and derive conditions for which this right-hand side tends (as \(n\rightarrow\infty\)) to zero.
Indeed, denote \(\alpha=\frac{a}{n},\beta=\frac{b}{n},\gamma=\frac{c}{n},\delta=\frac{d}{n}\). Applying Stirling's formula \(n!=(1+o(1))\cdot\big{(}\frac{n}{e}\big{)}^{n}\sqrt{2\pi n}\), and also noting that
\[P^{*}(a,b,c,d)\leq(1+\frac{O(1)}{n})\cdot P_{k}(\alpha,\beta,\gamma,\delta),\]
with
\[P_{k}(\alpha,\beta,\gamma,\delta)=\alpha^{k-2}k(k-1)(\beta\gamma+\frac{ \alpha\delta}{k-1}) \tag{13}\]
we get
\[P[Z>0]\leq M\cdot\Theta(1)\cdot\max_{(\alpha,\beta,\gamma,\delta)}\Big{[}\Big{(}\frac{1}{\alpha^{\alpha}\beta^{\beta}\gamma^{\gamma}\delta^{\delta}}\Big{)}\cdot P_{k}(\alpha,\beta,\gamma,\delta)^{r}\Big{]}^{n}\]
Define
\[g_{r}(\alpha,\beta,\gamma,\delta)=\frac{P_{k}(\alpha,\beta,\gamma,\delta)^{r}}{\alpha^{\alpha}\beta^{\beta}\gamma^{\gamma}\delta^{\delta}}\]
**Lemma 4**.: _For any \(r>0\) we have_
\[\max\Big{\{}g_{r}(\alpha,\beta,\gamma,\delta):\alpha+\delta=q,\beta+\gamma=1- q,\alpha,\beta,\gamma,\delta\geq 0\Big{\}}=g_{r}(\alpha_{*,r},\beta_{*,r}, \gamma_{*,r},\delta_{*,r}),\]
_with_
\[\left\{\begin{array}{l}\alpha_{*,r}=G_{k,q}(r),\\ \beta_{*,r}=\gamma_{*,r}=(1-q)/2,\\ \delta_{*,r}=q-G_{k,q}(r).\end{array}\right. \tag{14}\]
**Proof.**
First, it is easy to see that
\[g_{r}(\alpha,\beta,\gamma,\delta)\leq g_{r}\Big{(}\alpha,\beta_{*,r},\gamma_{ *,r},\delta\Big{)}. \tag{15}\]
Indeed, the function \(x\ln(x)\) is convex (its second derivative is positive) and \(e^{x}\) is increasing, so, by Jensen's inequality,

\[\beta^{\beta}\gamma^{\gamma}=e^{\beta\ln(\beta)+\gamma\ln(\gamma)}\geq e^{(\beta+\gamma)\ln(\frac{\beta+\gamma}{2})}=\Big{(}\frac{\beta+\gamma}{2}\Big{)}^{\beta+\gamma}=\beta_{*,r}^{\beta_{*,r}}\gamma_{*,r}^{\gamma_{*,r}}.\]
On the other hand, since \(\beta\gamma\leq\left(\frac{\beta+\gamma}{2}\right)^{2}=\beta_{*,r}\gamma_{*,r}\), we have \(P_{k}(\alpha,\beta,\gamma,\delta)\leq P_{k}\Big{(}\alpha,\beta_{*,r},\gamma_{*,r},\delta\Big{)}\) and equation (15) follows.
Also
\[g_{r}\Big{(}\alpha,\beta_{*,r},\gamma_{*,r},\delta\Big{)}\leq g_{r}\Big{(} \alpha_{*,r},\beta_{*,r},\gamma_{*,r},\delta_{*,r}\Big{)} \tag{16}\]
Indeed, replacing \(\delta=q-\alpha\), the expression
\[t(\alpha)=\ln g_{r}(\alpha,\beta_{*,r},\gamma_{*,r},q-\alpha)=r\ln\bigl(P_{k}(\alpha,\beta_{*,r},\gamma_{*,r},q-\alpha)\bigr)-\alpha\ln(\alpha)-(q-\alpha)\ln(q-\alpha)-\beta_{*,r}\ln(\beta_{*,r})-\gamma_{*,r}\ln(\gamma_{*,r})\]
is a function of \(\alpha\) whose derivative is
\[t^{\prime}(\alpha)=r\,\frac{P_{k}^{\prime}(\alpha,\beta_{*,r},\gamma_{*,r},q-\alpha)}{P_{k}(\alpha,\beta_{*,r},\gamma_{*,r},q-\alpha)}-\ln(\alpha)-1+\ln(q-\alpha)+1=r\Big{[}\frac{k-2}{\alpha}+\frac{q-2\alpha}{(k-1)\big{(}\frac{1-q}{2}\big{)}^{2}+\alpha(q-\alpha)}\Big{]}+\ln\Big{(}\frac{q-\alpha}{\alpha}\Big{)},\]
so \(t(\alpha)\) has a maximum on \([0,q]\) at \(\alpha_{*,r}\) which is a solution of equation
\[\frac{r(k-2)}{\alpha}+\frac{r(q-2\alpha)}{(k-1)\Big{(}\frac{1-q}{2}\Big{)}^{2}+ \alpha(q-\alpha)}=\ln(\frac{\alpha}{q-\alpha}), \tag{17}\]
or \(F_{k,q}(\alpha_{*,r})=r\). In other words \(\alpha_{*,r}=G_{k,q}(r)\) and \(\delta_{*,r}=q-G_{k,q}(r)\).
Formula (14) implies that \(P[Z>0]\stackrel{{ n\to\infty}}{{\to}}0\) as long as \(t(\alpha_{*,r})<0\). The critical value \(r_{up}(q,k)\) is therefore given by equation \(t(\alpha_{*,r})=0\), that is
\[r\ln\bigl(P_{k}(G_{k,q}(r),(1-q)/2,(1-q)/2,q-G_{k,q}(r))\bigr)-G_{k,q}(r)\ln(G_{k,q}(r))-(q-G_{k,q}(r))\ln(q-G_{k,q}(r))-(1-q)\ln((1-q)/2)=0. \tag{18}\]
For \(k=3\), denoting (for simplicity) \(\alpha=G_{3,q}(r)\), we have \(P_{3}(\alpha,\beta,\gamma,\delta)=6\alpha(\beta\gamma+\frac{\alpha\delta}{2})\), so the equation (18) becomes
\[r\ln(6\alpha[(\frac{1-q}{2})^{2}+\frac{\alpha(q-\alpha)}{2}])-\alpha\ln( \alpha)-(q-\alpha)\ln(q-\alpha)=(1-q)\ln((1-q)/2)\]
while equation (17) becomes
\[r[1+\frac{\alpha(q-2\alpha)}{2\Big{(}\frac{1-q}{2}\Big{)}^{2}+\alpha(q- \alpha)}]=\alpha\ln(\frac{\alpha}{q-\alpha}),\]
Attempting a substitution of the type \(\frac{\alpha}{q-\alpha}=t\) in this last equation seems to turn the function \(F_{3}\) into a generalized version of the Lambert function. However, this generalization seems to be different from the versions already existing in the literature [15], so this attempt does not seem fruitful.
We refer again to the Mathematica notebook provided as [9]. In particular, let us remark that the maximum value of \(r_{up}(q)\) is reasonably close to the upper bound on the threshold for 3-Exact Cover derived using the first-moment method in [12].
## 5 Proof of the lower bound (Theorem 1 (b))
We will use a constructive method. Just as in [11], we will derive a lower bound from the probabilistic analysis of an algorithm. However, the algorithm **will not** be the one from [11]. Instead, we will investigate (a variant of) the algorithm LARGEST-CLAUSE in Figure 1.
Intuitively, the reason we prefer the algorithm LARGEST-CLAUSE to the one from [11] is simple: unlike [11], our goal is not to simply solve an instance of \(k\)-EXACT COVER, but to create **two satisfying assignments** of controlled overlap. We would like to accomplish that via an algorithm that iteratively assigns values to variables and is left (at some point) with solving a 2-XOR SAT formula. Our aim is to keep the number of set variables to a minimum, in order to create satisfying assignments with as large an overlap as possible. But that means that one must "destroy" all clauses of length different from two as fast as possible. Instead, the algorithm in [11] is focused on killing clauses of length 2.
The algorithm may seem incompletely specified, as its performance depends on the function \(f(n)\). As it turns out, the precise specification of the function \(f(n)\) in the algorithm LARGEST-CLAUSE will
Figure 1: Algorithm LARGEST-CLAUSE
not matter for our purposes, as long as \(f\) grows asymptotically faster than the size of the largest component in a certain subcritical Erdos-Renyi random graph, which is with high probability \(O(\log(n))\).
To analyze (versions of) Algorithm LARGEST-CLAUSE, we denote by \(C_{i}(t)\), \(i\geq 2\), the number of clauses of length \(i\) that are present after \(t\) variables have been set. Also define \(P(t),N(t)\) to be the number of positive (respectively, negative) unit clauses present at stage \(t\). Finally, define functions \(c_{2},c_{3},p,n:(0,1)\to\mathbf{R}_{+}\) by \(c_{i}(\alpha)=C_{i}(\alpha\cdot n)/n\), and similar relations for the functions \(p(\cdot),n(\cdot)\). We will use a standard method, _the principle of deferred decisions_, to analyze algorithm LARGEST-CLAUSE. See [1] for a tutorial.
It is easy to show by induction that at any stage \(t\), conditional on the four-tuple \((P(t),N(t),C_{2}(t),C_{3}(t))\), the remaining formula is uniformly distributed.
We divide the algorithm in two phases: in the _first phase_ there exist clauses of length three. In the _second phase_ only clauses of length one and two exist.
If a variable is set to TRUE then a \(1\)-in-\(i\) clause containing that variable is turned into \(i-1\) negative unit clauses. If a variable is set to FALSE then a \(1\)-in-\(i\) clause is turned into a \(1\)-in-\((i-1)\) clause; in particular, a \(1\)-in-\(2\) clause is turned into a positive unit clause. The dynamics is displayed in Figure 2.
The different dynamics of the flows in the cases when a positive (negative) literal is set makes the direct analysis of algorithm LARGEST-CLAUSE difficult. Therefore, we will instead analyze a version of the algorithm, given in Figure 3, using a "lazy-server" [1] idea. Specifically, instead of always trying to simplify the unit clauses, we will do so probabilistically (see Figure 3 for details).
Since the problems \(q\)-overlap EXACT COVER have a sharp threshold, it is enough to prove, for \(r<r_{lb}(q)\), that the algorithm finds a pair of satisfying assignments of overlap approximately \(q\) with probability \(\Omega(1)\). This will be enough to conclude that with probability \(1-o(1)\) two satisfying assignments of overlap approximately \(q\) exist.
Let \(U_{P}(t),U_{N}(t),U_{3}(t)\) be 0/1 variables that are one exactly when choice 1 (2,3) is selected, 0 otherwise. We can write the following recurrence relations describing the dynamics of the four-tuple \((P(t),N(t),C_{2}(t),C_{3}(t))\):
\[\left\{\begin{array}{l}C_{3}(t+1)=C_{3}(t)-U_{3}(t)-\Delta_{3}(t),\\ C_{2}(t+1)=C_{2}(t)-\Delta_{2}(t)+\Delta_{3,2}(t),\\ P(t+1)=P(t)-U_{P}(t)-\Delta_{1,P}(t)+\Delta_{2,P}(t),\\ N(t+1)=N(t)-U_{N}(t)-\Delta_{1,N}(t)+\Delta_{2,N}(t)+\Delta_{3,N}(t),\end{array}\right. \tag{19}\]
Figure 2: The dynamics of algorithm LARGEST-CLAUSE.
Figure 3: The “lazy-server” version of algorithm LARGEST-CLAUSE
where
\[\left\{\begin{array}{l}\Delta_{3}(t)\stackrel{{ d}}{{=}}Bin\Big{(}C_ {3}(t)-U_{3}(t),\frac{3}{n-t}\Big{)}.\\ \Delta_{2}(t)=\Delta_{2,N}(t)+\Delta_{2,P}(t)\stackrel{{ d}}{{=}}Bin \Big{(}C_{2}(t),\frac{2}{n-t}\Big{)}.\\ \Delta_{3,2}(t)\stackrel{{ d}}{{=}}U_{3}(t)+(U_{N}(t)+U_{3}(t)) \cdot Bin\Big{(}C_{3}(t)-U_{3}(t),\frac{3}{n-t}\Big{)}\\ \Delta_{3,N}(t)\stackrel{{ d}}{{=}}2U_{P}(t)\cdot Bin\Big{(}C_ {3}(t),\frac{3}{n-t}\Big{)}\\ \Delta_{2,P}(t)\stackrel{{ d}}{{=}}(U_{N}(t)+U_{3}(t))\cdot Bin \Big{(}C_{2}(t),\frac{2}{n-t}\Big{)}\\ \Delta_{2,N}(t)\stackrel{{ d}}{{=}}U_{P}(t)\cdot Bin\Big{(}C_{2}(t ),\frac{2}{n-t}\Big{)}\\ \Delta_{1,P}(t)\stackrel{{ d}}{{=}}Bin\Big{(}P(t)-U_{P}(t), \frac{1}{n-t}\Big{)}\\ \Delta_{1,N}(t)\stackrel{{ d}}{{=}}Bin\Big{(}N(t)-U_{N}(t), \frac{1}{n-t}\Big{)}\end{array}\right. \tag{20}\]
By an analysis completely similar to that of algorithms for random \(k\)-SAT (see e.g. [1]), we derive the following system of equations that describe the average trajectory path of Algorithm LAZY LARGEST-CLAUSE:
\[\left\{\begin{array}{l}c_{3}^{\prime}(t)=-\lambda_{3}(t)-\frac{3c_{3}(t)}{(1 -t)}.\\ c_{2}^{\prime}(t)=-\frac{2c_{2}(t)}{(1-t)}+\frac{3c_{3}(t)}{(1-t)}\cdot\big{(} \lambda_{2}(t)+\lambda_{3}(t)\big{)},\end{array}\right. \tag{21}\]
with initial conditions \((c_{2}(0),c_{3}(0))=(0,r)\).
In this paper we will make the simplest choice
\[\lambda_{1}(t)=\lambda_{2}(t)=\lambda_{3}(t)=1/3. \tag{22}\]
Differential equations (21) describe the dynamics of algorithm LARGEST-CLAUSE only for \(t\in[t_{3},t_{2})\), where \(t_{3}=0\) and \(t_{2}\in(0,1)\) is the smallest solution of equation \(c_{3}(t)=0\).
Simple computations lead us to formulas:
\[\left\{\begin{array}{l}c_{3}(t)=(r+\frac{1}{6})(1-t)^{3}-\frac{1-t}{6},\\ c_{2}(t)=\frac{(1-t)^{2}}{3}-\frac{(1-t)}{3}+2(r+\frac{1}{6})t(1-t)^{2},\end{array}\right. \tag{23}\]
which describe the dynamics of algorithm LARGEST-CLAUSE in range \(0\leq t<t_{2}=1-\frac{1}{\sqrt{6r+1}}\).
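The closed forms (23) are easy to confirm symbolically (a check of ours, using SymPy):

```python
import sympy as sp

t, r = sp.symbols("t r", positive=True)
s = r + sp.Rational(1, 6)
c3 = s * (1 - t) ** 3 - (1 - t) / 6
c2 = (1 - t) ** 2 / 3 - (1 - t) / 3 + 2 * s * t * (1 - t) ** 2

# equations (21) with lambda_1 = lambda_2 = lambda_3 = 1/3, so lambda_2 + lambda_3 = 2/3
assert sp.simplify(sp.diff(c3, t) + sp.Rational(1, 3) + 3 * c3 / (1 - t)) == 0
assert sp.simplify(sp.diff(c2, t) + 2 * c2 / (1 - t)
                   - sp.Rational(2, 3) * 3 * c3 / (1 - t)) == 0
assert c3.subs(t, 0) == r and c2.subs(t, 0) == 0               # initial conditions
assert sp.simplify(c3.subs(t, 1 - 1 / sp.sqrt(6 * r + 1))) == 0  # t_2 kills c3
print("formulas (23) satisfy (21) and c3(t_2) = 0")
```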
The average flow into positive unit clauses is
\[F_{2}^{P}(t):=\frac{2}{3}\cdot\frac{2c_{2}(t)}{1-t}+\frac{1}{3}\cdot\frac{2\cdot 3c_{3}(t)}{1-t}=\frac{4}{3}\Big{[}\frac{1-t}{3}-\frac{1}{3}+2\Big{(}r+\frac{1}{6}\Big{)}t(1-t)\Big{]}+2\Big{(}r+\frac{1}{6}\Big{)}(1-t)^{2}-\frac{1}{3}.\]
\[(F_{2}^{P})^{\prime}(t)=-\frac{4}{9}-\frac{4}{3}\Big{(}r+\frac{1}{6}\Big{)}(1+t)<0,\]
so \(F_{2}^{P}(t)\) has a maximum at \(0\), equal to \(2r\). For \(r<1/6\) this is less than \(1/3\), so it is balanced by being given the opportunity (with probability \(1/3\)) to consume a positive unit clause, if any.
The average flow into negative unit clauses is
\[F_{2}^{N}(t)=\frac{1}{3}\cdot\frac{2c_{2}(t)}{1-t}=\frac{2}{3}\cdot\Big{[}\frac{1-t}{3}-\frac{1}{3}+2\Big{(}r+\frac{1}{6}\Big{)}t(1-t)\Big{]}=\frac{2t}{9}\Big{[}(6r+1)(1-t)-1\Big{]}.\]
The maximum of \(F_{2}^{N}(t)\) is reached at \(t=\frac{3r}{6r+1}\), which is in the interval \((t_{3},t_{2})\) for \(r>0\), and is equal to \(\frac{2r^{2}}{6r+1}=\frac{r}{3}(1-\frac{1}{6r+1})\), which is definitely less than \(\frac{1}{3}\), for \(r<1/6\).
The conclusion is that for \(r<1/6\) with probability \(1-o(1)\) both flows into positive and negative unit clauses can be handled by the lazy server with choice \(\lambda_{1}=\lambda_{2}=\lambda_{3}=1/3\) without creating contradictory clauses.
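A direct numeric check (ours) that both unit-clause flows stay below the service rate \(1/3\) throughout the first phase when \(r<1/6\):

```python
import numpy as np

def flows(r, t):
    """Flows into positive and negative unit clauses along (23)."""
    s = r + 1 / 6
    c3 = s * (1 - t) ** 3 - (1 - t) / 6
    c2 = (1 - t) ** 2 / 3 - (1 - t) / 3 + 2 * s * t * (1 - t) ** 2
    f_pos = (2 / 3) * 2 * c2 / (1 - t) + (1 / 3) * 6 * c3 / (1 - t)
    f_neg = (1 / 3) * 2 * c2 / (1 - t)
    return f_pos, f_neg

for r in (0.05, 0.10, 0.16):
    t2 = 1 - 1 / np.sqrt(6 * r + 1)
    ts = np.linspace(0.0, 0.999 * t2, 2000)
    f_pos, f_neg = flows(r, ts)
    assert f_pos.max() <= 2 * r + 1e-9      # maximum 2r, attained at t = 0
    assert f_neg.max() < 1 / 3              # maximum 2r^2/(6r+1) < 1/3
print("both flows stay below 1/3 for r < 1/6")
```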
Around stage \(t_{2}n\pm o(n)\) the clauses of length three and the unit clauses run out. We are left with a system of \((c_{2}(t_{2})+o(1))n\) 1-in-2 clauses in the remaining \(\overline{n}=(1-t_{2})n\) variables. Consider the graph \(G\) corresponding to these equations, where for every equation \(x\oplus y=1\) we add the edge \((x,y)\) to \(G\).
By the uniformity lemma, \(G\)_can be seen as an Erdos-Renyi random graph_\(G(\overline{n},\frac{\mu}{\overline{n}})\), with edge-probability parameter
\[\mu=2c_{2}(t_{2})/(1-t_{2})=3F_{2}^{N}(t_{2}).\]
Our maximum computation shows that for \(r\in(0,1/6)\), \(3F_{2}^{N}(t_{2})<1\). Thus \(G\) is a subcritical random graph, whose connected components are w.h.p. of size \(O(\log n)\). With constant probability (depending only on \(\mu\)), \(G\) is a bipartite graph. In this situation, giving a value to an arbitrary node uniquely determines the values of the variables in its connected component.
We create two assignments \(A\) and \(B\) as follows:
1. On variables \(x\) set by algorithm LARGEST-CLAUSE, \(A(x)=B(x)\), equal to the value given by the algorithm.
2. On the variables in graph \(G\), \(A\) and \(B\) take opposite values. This can be accomplished by giving \(A\) and \(B\) different values on a set of fixed variables, one in each connected component of \(G\).
When the graph \(G\) is bipartite, \(A\) and \(B\) are satisfying assignments. When the connected components of \(G\) are of size \(O(\log n)\) we can create a path from \(A\) to \(B\) consisting of satisfying assignments by consecutively flipping the values of variables on which \(A\) and \(B\) differ, one connected component at a time. The overlap of \(A\) and \(B\) is equal to \(1-\frac{1}{\sqrt{6r+1}}\).
It follows that the \(q\)-overlap Exact Cover problem is satisfiable w.h.p. whenever \(q>1-\frac{1}{\sqrt{6r+1}}\), i.e. \(\frac{1}{6r+1}>(1-q)^{2}\), which can be rewritten as \(r<r_{lb}(q)\).
## 6 Remarks
The condition \(r<1/6\) in Theorem 1 has an easy probabilistic interpretation: it is the location of the phase transition for the random 3-uniform hypergraph [20]. In this range most connected components are small and tree-like or unicyclic, so the space of variables breaks down into independent clusters of size \(O(\log n)\). Thus we should expect that all overlaps in some range \((\lambda,1)\) are achieved with probability \(1-o(1)\), which is exactly what happens, according to Theorem 1, for \(\lambda=1-\frac{1}{\sqrt{2}}\).
In fact we can state more: in this regime there is a single cluster of solutions, and the bounds on the overlap we provide are in fact bounds on the diameter of this cluster.
**Theorem 2**.: _Let \(r<\frac{1}{k(k-1)}\). There exists \(C>0\) such that, with probability \(1-o(1)\) (as \(n\to\infty\)), if \(\Phi\) is a random instance of \(k\)-Exact-Cover with \(n\) variables and \(rn\) clauses, any two satisfying assignments of \(\Phi\) are \(C\log(n)\) connected._
**Proof.** Since the formula hypergraph \(H\) of \(\Phi\) is subcritical, there exists \(C>0\)[20] such that w.h.p. all connected components of \(H\) have size at most \(C\log(n)\). That means that the formula \(\Phi\) is the union of several variable-disjoint formulas \(\Phi_{1},\ldots,\Phi_{p}\). In turn, satisfying assignments for \(\Phi\) are obtained by concatenating satisfying assignments for these formulas.
This argument immediately implies that any two satisfying assignments of \(\Phi\) are \(C\log(n)\) connected: Let \(A,B\) be two such satisfying assignments, and let \((A_{1},B_{1}),(A_{2},B_{2}),\ldots,(A_{s},B_{s})\) be the restrictions of \(A\) and \(B\) to the components on which they differ.
One can obtain a path from \(A\) to \(B\) as follows (where variables \(x\) such that \(A(x)=B(x)\) are omitted from the representation):
\[A = (A_{1},A_{2},\ldots,A_{s})\rightarrow(B_{1},A_{2},\ldots,A_{s}) \rightarrow(B_{1},B_{2},\ldots,A_{s})\rightarrow\ldots\rightarrow\] \[\rightarrow (B_{1},\ldots,B_{s-1},A_{s})\rightarrow(B_{1},B_{2},\ldots B_{ s})=B.\]
The intermediate assignments are satisfying assignments since formulas \(\Phi_{1},\ldots\), \(\Phi_{p}\) are disjoint. They are at distance at most \(C\log(n)\) because of the upper bound on the component size of \(H\).
Using the above result we obtain the following analog of the result proven in [7] for 2-SAT:
**Corollary 1**.: _For \(r<\frac{1}{k(k-1)}\) a random instance of \(k\)-Exact Cover has a single cluster of satisfying assignments and an overlap distribution with continuous support._
The relative weakness of the bound \(r<\frac{1}{k(k-1)}\) comes from our suboptimal choice of parameters \(\lambda_{1}(t),\lambda_{2}(t),\lambda_{3}(t)\). For instance, for \(k=3\) the bound \(r<1/6\) comes entirely from handling positive unit clauses, while we have no problem satisfying negative ones, since the flow \(F_{2}^{N}(t)\) always stays well below the service rate \(1/3\). This suggests that we are disproportionately often taking care of negative unit literals.
In what follows we sketch an approach for a better choice of these parameters. We were not able to explicitly calculate \(\lambda_{1}(t),\lambda_{2}(t),\lambda_{3}(t)\), so we are unable to offer an improved analysis of the LAZY LARGEST-CLAUSE algorithm.
First, the algorithm has to be able to satisfy the positive unit flow, so
\[\lambda_{1}(t)\geq(\lambda_{2}(t)+\lambda_{3}(t))\cdot\frac{2c_{2}(t)}{1-t}.\]
Thus
\[\frac{\lambda_{1}(t)}{1-\lambda_{1}(t)}\geq\frac{2c_{2}(t)}{1-t}\]
in other words
\[\lambda_{1}(t)\geq\frac{2c_{2}(t)}{1-t+2c_{2}(t)}.\]
Second, the algorithm has to be able to handle the negative unit flow, so
\[\lambda_{2}(t)\geq\lambda_{1}(t)\frac{6c_{3}(t)+2c_{2}(t)}{1-t}\]
We choose
\[\left\{\begin{array}{l}\lambda_{1}(t)=\frac{2c_{2}(t)+\varepsilon}{1-t+2c_{2}(t)},\\ \lambda_{2}(t)=\frac{(2c_{2}(t)+\varepsilon)(6c_{3}(t)+2c_{2}(t)+\varepsilon)}{(1-t)(1-t+2c_{2}(t))},\\ \lambda_{3}(t)=1-\lambda_{1}(t)-\lambda_{2}(t).\end{array}\right. \tag{24}\]
It is an open problem whether this approach can be completed to a full analysis.
## 7 Conclusions
The obvious question raised by this work is to improve our bounds enough to display the discontinuity of the overlap distribution, a property of \(k\)-Exact Cover that we believe to hold.
Note that there are obvious candidate approaches to improving our bounds: first, the lower bound could be improved by attempting a rigorous version of the (heuristic) approach of Knysh et al. [12]. Alternatively, it could be improved by finding explicit expressions for the parameters in (and explicitly analyzing) the LAZY LARGEST-CLAUSE algorithm, along the lines described in the previous section. Neither of these two approaches looks particularly tractable, though.
As for the upper bound, an obvious candidate is the second moment method. We have attempted such an approach. The problem is that it seems to require optimizing a function of 16 variables without enough obvious symmetries to make the problem tractable.
## 8 Acknowledgments
The authors thank the anonymous referees for useful comments, suggestions and corrections.
|
2309.12489 | Super Bassian and Nearly Generalized Bassian Abelian Groups | In connection to two recent publications of ours in Arch. Math. Basel (2021)
and Acta Math. Hung. (2022), respectively, and in regard to the results
obtained in Arch. Math. Basel (2012), we have the motivation to study the near
property of both Bassian and generalized Bassian groups. Concretely, we prove
that if an arbitrary reduced group has the property that all of its proper
subgroups are generalized Bassian, then it is also generalized Bassian itself,
that is, if G is an arbitrary nearly generalized Bassian group, then either G
is a quasi-cyclic group or G is generalized Bassian of finite torsion-free
rank. Moreover, we show that there is a complete description of the so-called
super Bassian groups, that is, those groups whose epimorphic images are all
Bassian, and give their full characterization. Also, we establish that the
hereditary property of Bassian groups gives nothing new as it coincides with
the ordinary Bassian property. | Peter Danchev, Brendan Goldsmith | 2023-09-21T21:15:57Z | http://arxiv.org/abs/2309.12489v1 | # Super Bassian and nearly Generalized Bassian Abelian Groups
###### Abstract.
In connection to two recent publications of ours in Arch. Math. Basel (2021) and Acta Math. Hung. (2022), respectively, and in regard to the results obtained in Arch. Math. Basel (2012), we are motivated to study the _near_ property of both Bassian and generalized Bassian groups. Concretely, we prove that if an arbitrary reduced group has the property that all of its proper subgroups are generalized Bassian, then it is also generalized Bassian itself; that is, if \(G\) is an arbitrary nearly generalized Bassian group, then either \(G\) is a quasi-cyclic group or \(G\) is generalized Bassian of finite torsion-free rank. Moreover, we give a complete description of the so-called _super Bassian_ groups, that is, those groups whose epimorphic images are all Bassian. Also, we establish that the _hereditary_ property of Bassian groups gives nothing new, as it coincides with the ordinary Bassian property.
Key words and phrases: Abelian groups, Bassian groups, super Bassian groups, generalized Bassian groups, (hereditarily, nearly, super) generalized Bassian groups, Hopfian and co-Hopfian groups
## 1. Introduction
Throughout, all groups considered are additively written Abelian groups. Recall from [1] that a group \(G\) is said to be _Bassian_ if it cannot be embedded in a proper homomorphic image of itself; equivalently, the existence of an injection \(G\to G/N\), for a subgroup \(N\) of \(G\), forces \(N=\{0\}\). Relaxing this requirement, \(G\) is said to be _generalized Bassian_ (see [2]) if the existence of an injection \(G\to G/N\) forces \(N\) to be a direct summand of \(G\). More generally, a group with a property \((\mathcal{P})\) is said to have the _hereditary \((\mathcal{P})\) property_ if the group and all of its proper subgroups possess \((\mathcal{P})\), and the _near \((\mathcal{P})\) property_ if all of its proper subgroups possess \((\mathcal{P})\).
Unfortunately, as already commented above, there is as yet no full description of the generalized Bassian groups. However, the following complete characterization of Bassian groups was obtained in [1, Main Theorem]:
**Theorem** (Bassian). (i) _A reduced Abelian group \(G\) is Bassian if, and only if, all of the ranks \(r_{0}(G),r_{p}(G)\) (for \(p\) a prime) are finite_;
(ii) _A non-reduced Abelian group \(G\) is Bassian if, and only if, it has the form \(G=D\oplus R\), where \(D\) is a finite dimensional \(\mathbb{Q}\)-vector space and \(R\) is a reduced Bassian group_.
It follows at once from these two necessary and sufficient conditions that a Bassian group is necessarily countable and that a finite direct sum of Bassian groups is again a Bassian group. Note also that a consequence of this classification is that the Bassian property is hereditary (see, e.g., [7] for the corresponding notions for Hopfian and co-Hopfian groups): a subgroup of a Bassian group is necessarily Bassian. The converse is definitely _not_ true, since the quasi-cyclic groups \(\mathbb{Z}(p^{\infty})\) are examples of non-Bassian groups all of whose proper subgroups are Bassian.
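The quasi-cyclic example can be made concrete computationally: viewing \(\mathbb{Z}(p^{\infty})\) as the fractions \(a/p^{k}\) modulo \(1\), any finitely generated (hence proper) subgroup closes up into the finite cyclic group \(\langle 1/p^{m}\rangle\), with \(p^{m}\) the largest denominator among the generators. A small illustration of ours in Python:

```python
from fractions import Fraction

def generated_subgroup(gens):
    """Closure of a finite subset of Z(p^infinity) under addition mod 1.

    The closure is finite because every generator a/p^k has finite order p^k.
    """
    elems = {Fraction(0, 1)}
    while True:
        new = {(x + g) % 1 for x in elems for g in gens} - elems
        if not new:
            return elems
        elems |= new

p = 3
gens = [Fraction(1, p**2), Fraction(5, p**3), Fraction(2, p**4)]
H = generated_subgroup(gens)
assert H == {Fraction(a, p**4) for a in range(p**4)}  # cyclic of order p^4
print(f"finitely generated subgroup is cyclic of order {len(H)}")
```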
Furthermore, a detailed analysis of the fundamental properties of generalized Bassian groups was given in [2] and [3], respectively, although a full characterization result was _not_ obtained there. In fact, it is _not_ yet clarified whether a subgroup of a generalized Bassian group is again generalized Bassian, and even finding a suitable approach to attack this question is still problematic. However, some further advance in this matter was made in the latter paper. Concretely, it was shown in [3] that _any generalized Bassian group must have finite torsion-free rank_ (thus answering in the negative a recent question posed in [2]), that _any generalized Bassian group is a direct sum of a Bassian group and an elementary group_ and that _every subgroup of a generalized Bassian group is also a direct sum of a Bassian group and an elementary group_, as well as it was conjectured there that _the direct sum of a Bassian group and an elementary group has to be a generalized Bassian group_ (compare also with our final Problem 3).
## 2. The Hereditary, Near and Super Bassian Properties
As mentioned in the Introduction, a group with a property \((\mathcal{P})\) is said to have the _hereditary \((\mathcal{P})\) property_, provided the whole group along with all of its proper subgroups possess the same property \((\mathcal{P})\); a slight variation of this concept, which we have named _nearly_, is the following: a group is said to have the _near \((\mathcal{P})\) property_, provided each of its proper subgroups possesses property \((\mathcal{P})\). We are _not_ assuming that a group which is nearly \((\mathcal{P})\) necessarily has the property \((\mathcal{P})\).
### Hereditary Bassian group
First of all, we show that Bassian groups are, in fact, a familiar class of groups involving a hereditary condition.
**Proposition 2.1**.: _A group is Bassian if, and only if, it is a hereditarily Hopfian group._
Proof.: If \(G\) is a Bassian group, then, as observed in [1], every subgroup of \(G\) is also Bassian. Furthermore, since Bassian groups are also Hopfian, \(G\) is then hereditarily Hopfian.
Conversely, if \(G\) is a hereditarily Hopfian group, then, by the classification given in [7], \(G\) is an extension of a direct sum of finite primary groups by a finite rank torsion-free group. It thus follows from the characterization in [1] that \(G\) is Bassian, as asserted.
As an immediate consequence, we have the following surprising claim.
**Corollary 2.2**.: _A reduced group is hereditarily Bassian if, and only if, it is Bassian._
### Nearly Bassian groups
The next result gives us a complete characterization of nearly Bassian groups.
**Proposition 2.3**.: _An arbitrary group \(G\) is nearly Bassian if, and only if, either \(G\) is the quasi-cyclic group \(\mathbb{Z}(p^{\infty})\) for some prime \(p\), or \(G\) is Bassian._
Proof.: The sufficiency is straightforward: indeed, a proper subgroup of a quasi-cyclic group is finite and hence Bassian, while a subgroup of a Bassian group is again Bassian - see, e.g., the discussion in the Introduction of [2].
For the necessity, let \(T\) denote the torsion subgroup of \(G\) and \(d(T)\) the maximal divisible subgroup of \(T\). Clearly, \(d(T)\) is a summand of \(G\), so that \(G=d(T)\oplus X\) for some subgroup \(X\) of \(G\). We consider three possible cases:
_Case_ (i): \(X=\{0\}\). Then \(G\) is of the form \(G=\bigoplus\limits_{p\in I}D_{p}\), where each \(D_{p}\) is a divisible \(p\)-group and \(I\) is a set of primes. If \(\mid I\mid>1\), then \(G\) has a proper subgroup \(D_{q}\) for some prime \(q\), which is, by hypothesis, Bassian - contrary to [1, Proposition 3.1]. We conclude that \(G\) must be a divisible \(p\)-group for some prime \(p\), and so an identical argument shows that it must have rank 1. Thus, in Case (i), we obtain that \(G\cong\mathbb{Z}(p^{\infty})\) for some prime \(p\), as asserted.
_Case_ (ii): \(X\neq\{0\}\neq d(T)\). In this case, \(d(T)\) is a proper subgroup of \(G\) and hence Bassian - it is, however, impossible as observed in Case (i). So, this case cannot occur.
_Case_ (iii): \(d(T)=\{0\}\), that is, \(G=X\). We claim that \(G\) must be Bassian; for if not, then it follows from the classification of Bassian groups in [1] that at least one of the ranks \(r_{0}(G),r_{p}(G)\) (for a prime \(p\)) is infinite. If any \(r_{p}(G)\) is infinite, then the \(p\)-primary component of \(G\), and hence \(G\) itself, has a proper subgroup, \(H\) say, with \(r_{p}(H)\) infinite; but this is nonsense since \(H\) would then also be Bassian. A similar argument shows that, if \(r_{0}(G)\) is infinite, then again there is a proper subgroup \(H\) of \(G\) with \(r_{0}(H)\) infinite, which is also untrue because \(H\) would also be Bassian. Thus, in Case (iii), \(G\) is necessarily Bassian, as required.
### Super Bassian Groups
In this subsection we investigate a property that is stronger than the property of being Bassian. In fact, our objective is to determine those groups having the property that every epimorphic image of the group is Bassian; such groups are, of course, Bassian but the converse manifestly fails: a non-reduced Bassian group will always have an epimorphic image which is a quasi-cyclic group, \(\mathbb{Z}(p^{\infty})\), and the latter is not Bassian - see the Main Theorem of [1] quoted in the Introduction. However, a group which is reduced may have a divisible epimorphic image: indeed, Corner has constructed, for each positive integer \(n\geq 2\), a torsion-free group of rank \(n\) such that all its subgroups of rank \(n-1\) are cyclic while
the torsion-free quotient groups of rank 1 are all divisible - see, for example, [6, Exercise 5, Section 4, Chapter 12]. And, of course, every reduced unbounded \(p\)-group \(G\) has a divisible epimorphic image of the form \(G/B\), where \(B\) is a basic subgroup of \(G\).
Thus, motivated by these facts, we will make use of the following, somewhat _ad hoc_ definition: A group \(G\) is said to have property \((\mathfrak{P})\) if no quasi-cyclic group is an epimorphic image of \(G\).
In keeping with standard terminology used in relation to Hopfian groups (see, e.g., [7]), we make the following definition.
**Definition 2.4**.: A group \(G\) is said to be _super Bassian_ if every epimorphic image of \(G\) is Bassian.
Evidently, a super Bassian group is Bassian and it follows from our discussion above that a super Bassian group is necessarily reduced and also that \(G\) has property \((\mathfrak{P})\). The classification we are seeking is given by our next result.
**Theorem 2.5**.: _Let \(G\) be an arbitrary group. The following three statements hold: (i) if \(G\) is torsion, then \(G\) is super Bassian if, and only if, \(G\) is reduced Bassian; (ii) if \(G\) is torsion-free, then \(G\) is super Bassian if, and only if, \(G\) is Bassian and has property \((\mathfrak{P})\); (iii) if \(G\) is mixed, then \(G\) is super Bassian if, and only if, \(G\) is Bassian and has property \((\mathfrak{P})\)._
Proof.: Observe that in each of the three cases to be considered necessity holds obviously, so we need only give the arguments guaranteeing sufficiency.
_Case_ (i): As \(G\) is reduced, torsion and Bassian, it has a primary decomposition of the form \(G=\bigoplus\limits_{p\text{ prime}}G_{p}\) and each \(p\)-primary component is a finite \(p\)-group. But then every epimorphic image of \(G\) must have a similar primary decomposition and thus is Bassian by the classification of Bassian groups. So, \(G\) is super Bassian.
_Case_ (ii): Let \(G\) be a torsion-free Bassian group satisfying property \((\mathfrak{P})\). Then \(G\) is reduced and of finite torsion-free rank. Let \(\phi:G\twoheadrightarrow X\) be an arbitrary epimorphism
from \(G\) onto \(X\). We consider the three possibilities for \(X\): (a) \(X\) is torsion; (b) \(X\) is torsion-free; (c) \(X\) is mixed.
_Subcase_ (a): As \(X\) is torsion, \(X\) must be reduced since otherwise it would have a quasi-cyclic summand, \(Y\) say, which is then an epimorphic image of \(G\), contrary to our hypothesis that \(G\) has property \((\mathfrak{P})\). Let \(X\) have a primary decomposition \(X=\bigoplus_{p}X_{p}\) and consider a surjection \(\psi_{p}:G\twoheadrightarrow X_{p}\), where \(\psi_{p}\) is the composition of \(\phi\) with the canonical projection of \(X\) onto \(X_{p}\). If each \(X_{p}\) is bounded, then each \(X_{p}\) is finite, being a bounded image of a torsion-free group of finite rank. We claim that all the components \(X_{p}\) must indeed be bounded. Suppose, for a contradiction, that some component \(X_{p}\) is unbounded and let \(B_{p}\) be a basic subgroup of \(X_{p}\). Now the composition of \(\psi_{p}\) with the canonical projection of \(X_{p}\) onto \(X_{p}/pX_{p}\) is a surjection and, as \(X_{p}/pX_{p}\) is bounded, we have, as observed above, that \(X_{p}/pX_{p}\) is finite. This is impossible, since \(X_{p}/pX_{p}\cong B_{p}/pB_{p}\) would then force \(B_{p}/pB_{p}\) to be finite, contrary to our assumption that \(X_{p}\) is unbounded. Hence, \(X\) has the property that each of its primary components is finite, and so \(X\) is Bassian and reduced.
_Subcase_ (b): If \(X\) is torsion-free, then clearly \(X\) is also torsion-free of finite rank and so Bassian.
_Subcase_ (c): If \(X\) is mixed, then it must be reduced and \(r_{0}(X)\) is finite. So, \(X\) is an extension of the torsion group \(T(X)\) by a torsion-free group of finite rank; in particular, \(X\) is an extension of a torsion group by a countable torsion-free group. Let \(T(X)=\bigoplus_{p}X_{p}\) be the primary decomposition of the torsion subgroup \(T(X)\) of \(X\) and let \(B_{p}\) be a basic subgroup of \(X_{p}\) for each prime \(p\) in the decomposition. Then, by an unpublished result of Corner, \(B_{p}\) is an epimorphic image of \(X_{p}\), say \(\eta_{p}:X_{p}\twoheadrightarrow B_{p}\). (The result of Corner has been used previously in [7] and [8, Theorem 4.1]; some details of his arguments may be found in Section 4 of the latter.) Then, the composition of \(\eta_{p}\) with the surjection of \(G\) onto \(X_{p}\) yields an epimorphism of \(G\) onto the torsion reduced group \(B_{p}\). The argument in subcase (a) tells us that each \(B_{p}\) is finite, whence each \(X_{p}\) is also finite. Thus, for each prime \(p\), we have that \(r_{p}(X)\) is finite and, as \(r_{0}(X)\) is also finite, we conclude that \(X\) is Bassian.
Combining the three subcases we see that in case (ii), each epimorphic image \(X\) of \(G\) is Bassian, so \(G\) is super Bassian, as claimed.
_Case_ (iii): Here we are assuming that \(G\) is mixed Bassian and has property \((\mathfrak{P})\). Let \(G\twoheadrightarrow X\) be an arbitrary epimorphism. We again consider the three subcases (d) \(X\) is torsion; (e) \(X\) is torsion-free; (f) \(X\) is mixed.
_Subcase_ (d): As before, \(X\) is necessarily reduced. Set \(Z=\phi(T(G))\leq X\). Since each \(p\)-primary component of \(T(G)\) is finite, the same holds for \(Z\), and hence \(Z\) is Bassian. Furthermore, \(X/Z=\phi(G)/\phi(T(G))\) is an epimorphic image of \(G/T(G)\), a torsion-free group of finite rank. Moreover, \(G/T(G)\) satisfies property \((\mathfrak{P})\) since, by assumption, \(G\) does. Thus, by what we established in part (ii) above, the group \(X/Z\) is Bassian. So, we have an exact sequence
\[0\to Z\to X\to X/Z\to 0,\]
where \(Z\) is torsion Bassian and \(X/Z\) is also Bassian. It now follows from Lemma 2.6 below that \(X\) is Bassian, as asserted.
_Subcase_ (e): If \(G\) is mixed Bassian, then \(r_{0}(G)\) is finite and thus \(X\), being a torsion-free epimorphic image of \(G\), is also of finite rank and hence Bassian.
_Subcase_ (f): Again we have that \(r_{0}(X)\) is finite and if \(Y=\phi(T(G))\), then \(Y\) is torsion Bassian. Exactly as in subcase (d), we have that \(X/Y=\phi(G)/\phi(T(G))\) is an epimorphic image of the finite rank torsion-free group \(G/T(G)\). Observe that \(G/T(G)\) has property \((\mathfrak{P})\) since \(G\) has this property. It now follows from case (ii) that \(X/Y\) is Bassian, so that \(X\) is an extension of the torsion Bassian group \(Y\) by the Bassian group \(X/Y\). Applying Lemma 2.6 stated below, we see that \(X\) is also Bassian, as wanted.
Combining the subcases (d),(e) and (f), we have established that if \(G\) is mixed Bassian and has property \((\mathfrak{P})\), then \(G\) is super Bassian, as desired.
It remains only to establish the above-cited Lemma 2.6, which is a simple extension of [1, Proposition 2.3].
**Lemma 2.6**.: _If \(0\to B\to G\to C\to 0\) is an extension of a torsion Bassian group \(B\) by a Bassian group \(C\), then \(G\) is Bassian._
Proof.: Choose \(H\leq G\) such that \(H/B=T(C)\), the torsion subgroup of \(C\). Then, we have an exact sequence
\[0\to H\to G\to G/H\cong C/T(C)\to 0.\]
Since \(C\) is Bassian, the factor-group \(C/T(C)\) is torsion-free of finite rank and so, in view of [1, Proposition 2.3], it suffices to show that \(H\) is Bassian, since it is certainly torsion. To that end, let \(H=\bigoplus H_{p}\) be the primary decomposition of \(H\) and note that, if \(B_{p}\) is the corresponding \(p\)-primary component of \(B\), then \(B_{p}\leq H_{p}\) and \(H_{p}/B_{p}\cong C_{p}\), where the latter is the \(p\)-primary component of \(T(C)\). Since \(B\) and \(T(C)\) are both Bassian, the subgroups \(B_{p},C_{p}\) are finite, so that each primary component of \(H\) is also finite and \(H\) is Bassian, as expected.
The requirement in Theorem 2.5 above that \(G\) have property (\(\mathfrak{P}\)) is a little unsatisfactory, because it is not immediate how to determine this from the internal structure of \(G\). Fortunately, when a group is Bassian, we have the following helpful equivalence.
**Proposition 2.7**.: _Let \(G\) be a Bassian group, then \(G\) has property (\(\mathfrak{P}\)) if, and only if, the torsion-free quotient \(G/T(G)\) has property (\(\mathfrak{P}\))._
Proof.: If \(G\) has property (\(\mathfrak{P}\)), then it is clear that \(G/T(G)\) has the same property even when \(G\) is not Bassian.
Conversely, assume, on the contrary, that \(G\) does not have property (\(\mathfrak{P}\)), and let \(\phi:G\twoheadrightarrow\mathbb{Z}(p^{\infty})\) be a surjection onto some quasi-cyclic group \(\mathbb{Z}(p^{\infty})\). If \(T_{p}\) denotes the \(p\)-primary component of \(G\), then \(\phi(T(G))=\phi(T_{p})\) is finite since \(G\) is Bassian. Thus, \(\phi(T(G))=C\), where \(C\) is a finite cyclic subgroup of \(\mathbb{Z}(p^{\infty})\). Hence,
\[\phi(G)/\phi(T(G))=\mathbb{Z}(p^{\infty})/C\cong\mathbb{Z}(p^{\infty}).\]
But \(\phi(G)/\phi(T(G))\) is an epic image of \(G/T(G)\), so that there is an epimorphism \(G/T(G)\twoheadrightarrow\mathbb{Z}(p^{\infty})\) and, consequently, \(G/T(G)\) does not have property (\(\mathfrak{P}\)), as required.
It is also worthwhile noticing that, for a finite rank torsion-free group \(G\), the following are equivalent (see [3] as well):
(a) \(G\) has no homomorphic images that are quasi-cyclic, i.e. \(G\) has property (\(\mathfrak{P}\));
(b) There is a free subgroup \(F\) of \(G\) such that \(G/F\) has finite \(p\)-torsion for all primes \(p\);
(c) For every free subgroup \(F\) of \(G\) such that \(G/F\) is torsion, \(G/F\) has finite \(p\)-torsion for all primes \(p\);
(d) For every prime \(p\), the localization \(G_{(p)}\) is a free \(\mathbb{Z}_{(p)}\)-module (i.e., it is locally free).
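As a quick check of these equivalences (a routine verification on our part), consider again \(G=\mathbb{Z}[1/p]\) from the example above. Taking the free subgroup \(F=\mathbb{Z}\), we have

\[G/F\cong\mathbb{Z}(p^{\infty}),\qquad G_{(p)}\cong\mathbb{Q},\]

so \(G/F\) has infinite \(p\)-torsion, showing that (c) fails, while \(G_{(p)}\cong\mathbb{Q}\) is divisible and hence not a free \(\mathbb{Z}_{(p)}\)-module, so (d) fails as well; this is consistent with the failure of (a) observed earlier.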
We end our work in this section with the following interesting question:
**Problem 1**. Describe the structure of hereditarily generalized Bassian groups and super generalized Bassian groups by finding a complete structural characterization of them.
## 3. Nearly Generalized Bassian Groups
In this section, we consider the analogue, for generalized Bassian groups, of the "nearly Bassian" problem addressed in Section 2. Thus, we shall say that a group \(G\) is _nearly generalized Bassian_ if every proper subgroup of \(G\) is generalized Bassian. Before proceeding to consideration of this problem, we present a brief review of known material about generalized Bassian groups. Fuller details and proofs may be found in [2] and [3], respectively.
\(\bullet\) A generalized Bassian group has finite torsion-free rank.
Since this fact is very important for our further presentation, we shall give an independent and more conceptual verification by modifying some of the arguments used in [2].
To that purpose, assume that \(G\) is generalized Bassian and is not torsion. If \(G\) is torsion-free, then it necessarily has finite rank by [2, Proposition 2.8]. If \(G\) is splitting mixed, then again it is of finite torsion-free rank by [2, Proposition 2.11].
So, consider the case where \(G\) is genuinely mixed and assume, for a contradiction, that \(r_{0}(G)=\kappa\) is infinite. However, we know from [2, Proposition 3.3] that if \(T\) is the torsion subgroup of \(G\), then \(\kappa<\mid T\mid\). Furthermore, \(T=\bigoplus_{p}T_{p}\), where each \(T_{p}\) has the form \(T_{p}=E_{p}\oplus F_{p}\), with \(E_{p}\) an elementary \(p\)-group and \(F_{p}\) a finite \(p\)-group. It follows from Remark 3.4 in [2] that \(\mid T_{p}\mid>r_{0}(G)\) for infinitely many primes \(p\), so that infinitely many of the \(E_{p}\) have \(p\)-rank \(>\kappa\).
Since \(G/T\) is torsion-free of infinite rank \(\kappa\), there is a subgroup \(H\) of \(G\) with \(G=H+T\) and \(\mid H\mid=\kappa\). Set \(B=H+\bigoplus_{p}F_{p}\), so that \(\mid B\mid\) is also \(\kappa\). Now, \(\bigoplus_{p}F_{p}\leq T\cap B\leq T=\bigoplus_{p}F_{p}\oplus\bigoplus_{p}E_{p}\), so there is a subgroup \(D\leq\bigoplus_{p}E_{p}\) with \(T=(T\cap B)\oplus D\). But \(B\cap D\leq T\cap B\cap D=\{0\}\) and so
\[G=H+T\leq B+T=B+(T\cap B)+D\leq G,\]
hence \(G=B+D=B\oplus D\).
Since \(G\) is generalized Bassian and \(B\) is a summand of \(G\), \(B\) is also generalized Bassian by [2, Lemma 2.3]. Note that, as \(D\) is torsion, \(r_{0}(G)=r_{0}(B)=\kappa\). However, \(B=H+\bigoplus_{p}F_{p}\), so our assumption that \(\kappa\) is infinite yields \(\mid B\mid=\mid H\mid=\kappa\), and thus the torsion subgroup \(t(B)\) of \(B\) must have cardinality \(\leq\kappa\). This means that \(\mid t(B)\mid\leq\kappa=r_{0}(B)\), contrary to [2, Proposition 3.3]. This contradiction forces \(r_{0}(G)\) to be finite when \(G\) is genuinely mixed, thereby establishing the result, as asked for.
\(\bullet\) If \(G\) is generalized Bassian, then the torsion subgroup \(T\) of \(G\) is of the form \(T=\bigoplus_{p}T_{p}\) where each \(T_{p}\) is a \(p\)-primary group of the form \(T_{p}=F_{p}\oplus E_{p}\) where \(F_{p}\) is a finite \(p\)-group and \(E_{p}\) is an elementary \(p\)-group.
We note now a simple result that gives an initial insight into the structure of generalized Bassian groups.
**Proposition 3.1**.: _If \(G\) is a genuinely mixed group which is generalized Bassian, then \(G\) is an extension of an elementary group by a Bassian group._
Proof.: As noted above, each \(p\)-primary component of \(G\) is of the form \(T_{p}(G)=F_{p}\oplus E_{p}\), where \(F_{p}\) is a finite \(p\)-group and \(E_{p}\) is an elementary \(p\)-group. Furthermore,
\(r_{0}(G)\) is finite. Let \(E\) be the direct sum over the various primes \(p\) of the elementary groups \(E_{p}\), and let \(F\) be the corresponding direct sum of the finite \(p\)-groups \(F_{p}\); thus the torsion subgroup \(T\) of \(G\) has the form \(T=E\oplus F\).
Consider now the group \(G/E\): we have an exact sequence
\[0\to T/E\to G/E\to G/T\to 0.\]
Since \(T/E\cong\bigoplus_{p}F_{p}\) and each \(F_{p}\) is a finite \(p\)-group, we get that \(T/E\) is a torsion Bassian group. Furthermore, \(G/T\) is torsion-free of finite rank, and hence is Bassian. It follows from [2, Proposition 2.3] that \(G/E\) is Bassian. Thus, \(G\) is an extension of the elementary group \(E\) by the Bassian group \(G/E\), as stated.
In fact, Danchev and Keef have shown in [3] the more general fact that a generalized Bassian group is actually a split extension of an elementary group by a Bassian group. Their arguments are developed in the context of their deep investigations into mixed Abelian groups with bounded \(p\)-torsion. We present here another version of this result based on the results established in [2].
Recall that in [2], the class \(\mathcal{P}\) of groups \(G\) was considered, where \(G\in\mathcal{P}\) if there is an infinite set of primes \(\Pi\) with \(T\leq G\leq\bar{T}\), where \(T\) is the torsion subgroup of \(G\) and \(T\) is a direct sum of \(p\)-primary bounded components \(T_{p}\), \(T=\bigoplus_{p\in\Pi}T_{p}\), and \(\bar{T}=\prod_{p\in\Pi}T_{p}\). In [2, Corollary 3.14] a decomposition of a generalized Bassian group in \(\mathcal{P}\) was given: \(G=A\oplus H\), where \(A\) is torsion and each \(p\)-primary component of \(A\) is a direct sum of an elementary \(p\)-group and a finite \(p\)-group, while \(H\) is a Bassian group. An examination of the proof shows that the generalized Bassian property of \(G\) was used only for two purposes: (i) to show that \(r_{0}(G)\) was finite, and (ii) to show that \(A\) had the desired decomposition with primary components of the form "elementary plus finite". Thus, in fact, an extended version of [2, Corollary 3.14] holds as follows.
**Proposition 3.2**.: _Suppose that \(G\in\mathcal{P}\) with \(r_{0}(G)\) finite, and each \(p\)-primary component \(T_{p}\) of \(G\) has the form \(T_{p}=E_{p}\oplus S_{p}\), where \(E_{p}\) is an elementary \(p\)-group and \(S_{p}\) is a finite \(p\)-group. Then \(G\) has a decomposition \(G=A\oplus H\) with \(A\) torsion and having primary components of the form \(E^{\prime}_{p}\oplus F_{p}\) with \(E^{\prime}_{p}\) elementary and \(F_{p}\) finite, and \(H\) Bassian._
Now, consider an arbitrary genuinely mixed group \(G\) which is generalized Bassian. Then, certainly each primary component of \(G\) has the form "elementary plus finite", so that the torsion subgroup of \(G\) has the form \(T=E\oplus S\), with \(E\) elementary and \(S\) a direct sum over primes \(p\) of finite \(p\)-groups. Therefore, we have an exact sequence
\[0\to T\to G\to G/T\to 0,\qquad G/T\leq D,\]
where \(D\) is a finite dimensional \(\mathbb{Q}\)-space. Applying Proposition 24.6 in [4] (see also Exercise (6) in [6]), we obtain a group \(G^{\prime}\) with
\[0\to T\to G^{\prime}\to D\to 0;\]
note that \(G^{\prime}\in\mathcal{P}\) and has the same torsion subgroup as \(G\). Thus, Proposition 3.2 enables us to get a decomposition \(G^{\prime}=A\oplus H\) with \(A\) torsion and \(H\) Bassian. Furthermore, \(A\) has the form \(A=E\oplus F\), with \(E\) elementary and \(F\) a direct sum over primes \(p\) of finite \(p\)-groups. Absorbing \(F\) into \(H\), we write \(G^{\prime}=E\oplus H^{\prime}\), where \(H^{\prime}=F\oplus H\). Since \(H\) is Bassian and each primary component of \(F\) is finite, it now follows easily that \(H^{\prime}\) is Bassian, as required.
Finally, observe that as \(E\leq T\), modularity gives us that
\[G=G\cap(E\oplus H^{\prime})=E\oplus(G\cap H^{\prime});\]
since Bassian groups are hereditarily Bassian - see, for example, the comment in the Introduction to [2] - it follows that \(H_{0}=G\cap H^{\prime}\) is Bassian, so that \(G=E\oplus H_{0}\) with \(E\) elementary and \(H_{0}\) Bassian.
Thus, we have established the following statement from [3].
**Theorem 3.3**.: _If \(G\) is a genuinely mixed group which is generalized Bassian, then \(G\) has a decomposition of the form \(G=E\oplus H\) with \(E\) elementary and \(H\) Bassian._
It follows from the classification of torsion, torsion-free and mixed generalized Bassian groups - see, for instance, Proposition 2.8, Theorem 2.10 and Proposition 2.11 in [2] - that an "elementary plus Bassian" decomposition also holds in each of these three cases. So, we derive the following consequence.
**Corollary 3.4**.: _If \(G\) is an arbitrary generalized Bassian group, then \(G\) has a decomposition \(G=E\oplus H\), where \(E\) is elementary and \(H\) is Bassian._
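A simple example (a routine verification, recorded here only for illustration) shows that the elementary summand genuinely enlarges the class: the infinite elementary group \(E=\bigoplus_{\aleph_{0}}\mathbb{Z}(p)\) is generalized Bassian, since every subgroup of \(E\) is a subspace of an \(\mathbb{F}_{p}\)-vector space and hence a direct summand of \(E\), yet \(E\) is not Bassian because \(r_{p}(E)\) is infinite.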
We return now to consideration of groups which are nearly generalized Bassian. Such groups are, of course, of finite torsion-free rank. (We remark that this does not require the full force of the result of Danchev and Keef mentioned in the first bullet point above: if a nearly generalized Bassian group is not of finite torsion-free rank then it will have a free subgroup of infinite rank, but such a group is definitely _not_ generalized Bassian by [2, Example 2.7].) In addition, if \(G\) is a nearly generalized Bassian group, then we have:
\(\bullet\) If \(X\) is a proper torsion subgroup of \(G\), then \(X\) is generalized Bassian and so \(X\) is of the form \(X=\bigoplus X_{p}\), where each \(X_{p}\) is a \(p\)-group of the form \(X_{p}=E_{p}\oplus F_{p}\) with \(E_{p}\) elementary and \(F_{p}\) finite.
Our first step in classifying groups which are nearly generalized Bassian is to deal with divisible groups.
**Proposition 3.5**.: _Let \(G\) be a divisible group which is nearly generalized Bassian. Then, the following three points are true: (i) \(G\) is either torsion or torsion-free; (ii) if \(G\) is torsion, then \(G\cong\mathbb{Z}(p^{\infty})\) for some fixed but arbitrary prime \(p\); (iii) if \(G\) is torsion-free, then \(G\) is a finite dimensional \(\mathbb{Q}\)-vector space and \(G\) is generalized Bassian (and even Bassian)._
Proof.: (i) If \(G\) is mixed and divisible, then it must have a proper subgroup which is isomorphic to \(\mathbb{Z}(p^{\infty})\) for some prime \(p\). Since \(\mathbb{Z}(p^{\infty})\) is not generalized Bassian, no such group exists.
(ii) If \(G\) is torsion divisible of rank greater than one, then \(G\) has a proper quasi-cyclic subgroup, which would then be generalized Bassian. As observed above, this is impossible, so \(G\) has rank one and thus is isomorphic to \(\mathbb{Z}(p^{\infty})\) for some prime \(p\).

(iii) If \(G\) is torsion-free divisible, then, as observed above, \(G\) must be of finite rank, as required.
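It is perhaps worth checking point (ii) directly (a routine verification): the proper subgroups of \(\mathbb{Z}(p^{\infty})\) form the chain

\[\{0\}\subset\mathbb{Z}(p)\subset\mathbb{Z}(p^{2})\subset\cdots,\]

so every proper subgroup is finite cyclic and hence generalized Bassian, confirming that \(\mathbb{Z}(p^{\infty})\) is indeed nearly generalized Bassian, although it is not itself generalized Bassian. This is the exceptional case recorded in Theorem 3.11 below.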
It follows from an easy application of the above Proposition 3.5 that if \(G\) is a nearly generalized Bassian group which is neither divisible nor reduced, then \(G\) has the form \(G=\bigoplus_{n}\mathbb{Q}\oplus G^{\prime}\), where \(n\neq 0\) is finite and \(G^{\prime}\) is a reduced nearly generalized Bassian group.
Our next result clarifies the situation for reduced nearly generalized Bassian groups which are either torsion, torsion-free or splitting mixed, respectively.
**Proposition 3.6**.: _Let \(G\) be a reduced group which is nearly generalized Bassian. If \(G\) is either torsion, torsion-free or splitting mixed, then \(G\) is generalized Bassian._
Proof.: Since finite groups are always generalized Bassian (and even Bassian), we may assume that \(G\) is infinite. Furthermore, we have observed above that \(r_{0}(G)\) is finite, so if \(G\) is torsion-free it is a reduced group of finite rank which is then certainly generalized Bassian.
If \(G\) is a \(p\)-group, then we see that \(G\) has a direct decomposition \(G=\mathbb{Z}(p^{n})\oplus H\) for some finite \(n\). But then, by hypothesis, \(H\) is a generalized Bassian \(p\)-group and hence has the form \(H=E\oplus F\), where \(E\) is an elementary \(p\)-group and \(F\) is finite. But then \(G\), itself, has a similar decomposition and hence, with the aid of [2, Theorem 2.10], \(G\) is generalized Bassian. If, however, \(G\) is torsion, then it follows easily using the primary decomposition of \(G\) and the result just established for \(p\)-groups that \(G\) is generalized Bassian, as wanted.
Finally, if \(G\) is splitting mixed, both its torsion and torsion-free parts are reduced generalized Bassian and so \(G\), itself, is then generalized Bassian by virtue of [2, Corollary 2.12], as desired.
Combining the results obtained above and using the classification results in [2], we readily obtain:
**Proposition 3.7**.: _Let \(G\) be a nearly generalized Bassian group which is either torsion, torsion-free or splitting mixed. Then either \(G\cong\mathbb{Z}(p^{\infty})\) for some prime \(p\), or \(G\) is generalized Bassian._
To complete the classification of nearly generalized Bassian groups, it remains to consider the situation in which \(G\) is nearly generalized Bassian and genuinely mixed. For the remainder of this section we shall assume \(G\) is of this type.
The following observations will be useful in dealing with the genuinely mixed nearly generalized Bassian groups.
\(\bullet\) If \(\varphi:G\to G/N\) is an injection then, since \(r_{0}(G)\) is finite, we have \(r_{0}(G)\leq r_{0}(G/N)=r_{0}(G)-r_{0}(N)\), so that \(r_{0}(N)=0\) or, equivalently, that \(N\leq T\).
\(\bullet\) If \(\varphi\) embeds \(G\) into \(G/N\), then we have that \(\varphi(T)\leq T(G/N)=T/N\) since \(N\leq T\). Furthermore, as we are assuming \(G\) is genuinely mixed, \(T\) itself is generalized Bassian, and it follows that \(N\) is a direct summand of \(T\), so that if \(N=\bigoplus N_{p}\), then each primary component \(N_{p}\) of \(N\) is a direct summand of the corresponding \(T_{p}\).
\(\bullet\) Since \(\varphi(T)\leq T/N\), the full invariance of the primary components gives that \(\varphi(T_{p})\leq T_{p}/N_{p}\) and so \(\varphi(pT_{p})\leq p(T_{p}/N_{p})=(pT_{p}+N_{p})/N_{p}\). Hence, as \(pT_{p}=pF_{p}\), we have that
\[pF_{p}\cong\varphi(pT_{p})\leq(pF_{p}+N_{p})/N_{p}\cong pF_{p}/(N_{p}\cap pF_{p}).\]
However, \(pF_{p}\) is finite so that \(N_{p}\cap pF_{p}=\{0\}\), and it now follows from Lemma 3.8 below that \(N_{p}\) is necessarily elementary.
We are now ready to state and prove the lemma promised above.
**Lemma 3.8**.: _Suppose that \(A,B,C\) are \(p\)-groups with \(A\leq B\oplus C\). If \(B\) is elementary and \(A\cap pC=\{0\}\), then \(A\) is elementary._
Proof.: Let \(x\in A\), so that \(x=b+c\) with \(b\in B,c\in C\). Then \(px=pb+pc=pc\), since \(B\) is elementary, and so \(px\in A\cap pC=\{0\}\); hence \(px=0\) and \(A\) is elementary.
**Remark 3.9**.: We can actually say a bit more. When we work with a single prime \(p\), we can modify the \(p\)-primary component so that, in the decomposition \(E_{p}\oplus F_{p}\), the summand \(F_{p}\) has no summands of order \(p\). The resulting enlarged elementary group will continue to be a direct summand of \(G\). In this situation, we can conclude that
\(N_{p}\leq E_{p}\): if \(N_{p}\) is not contained in \(E_{p}\), then \(N_{p}\cap F_{p}\) is a non-zero summand of the elementary group \(N_{p}\) and hence a summand of \(T_{p}\). This, however, is impossible since if \(\langle x\rangle\) is a summand of \(N_{p}\), then the height \(ht_{T_{p}}(x)\) is necessarily \(0\), but any \(x\in N_{p}\cap F_{p}\) must be of the form \(x=py\), because \(F_{p}\) has no summands of order \(p\). In fact, it is easy to see that this procedure will work for any finite set of primes. Thus, in the genuinely mixed situation, we may assume that \(N_{p}\) is actually a summand of \(T_{p}\).
We, thus, now come to the following key statement.
**Proposition 3.10**.: _Let \(G\) be a genuinely mixed group which is nearly generalized Bassian. Then, \(G\) is generalized Bassian._
Proof.: Let us notice the crucial fact that, if \(\varphi:G\to G/N\) is an embedding for some \(N\neq 0\), then from our observations above we can find a decomposition \(G/N=(T_{p}/N_{p})\oplus(G^{\prime}/N^{\prime})\), where \(N^{\prime}=\bigoplus\limits_{q\neq p}N_{q}\) and \(G=T_{p}\oplus G^{\prime}\).
So, we are now in the following situation: \(\varphi\) is an embedding of \(G\) into \(G/N\) and we can decompose \(G=T_{p}\oplus G^{\prime}\) with \(T_{p}\neq\{0\}\) and \(N=N_{p}\oplus N^{\prime}\), where \(N^{\prime}=\bigoplus\limits_{q\neq p}N_{q}\). Then, \(G/N=(T_{p}/N_{p})\oplus(G^{\prime}/N^{\prime})\). Now, let \(\pi\) be the canonical projection of \(G/N\) onto \(G^{\prime}/N^{\prime}\) and set \(\psi=\pi\varphi\). Consider the restriction \(\psi^{\prime}=\psi\upharpoonright G^{\prime}\), a mapping from \(G^{\prime}\) to \(G^{\prime}/N^{\prime}\). We claim that \(\psi^{\prime}\) is monic; if not, then there is a nonzero \(x\in G^{\prime}\) such that \(\psi^{\prime}(x)=0\), so that \(\varphi(x)\in\text{Ker}\,\pi=T_{p}/N_{p}\), and this kernel is a \(p\)-group. However, if \(x\) is a torsion element of \(G^{\prime}\), then its order is relatively prime to \(p\). Since \(\varphi\) is monic, it preserves orders, so that \(\varphi(x)\), having order prime to \(p\) inside a \(p\)-group, must vanish; hence \(x=0\), a contradiction. Thus, \(x\) must be an element of infinite order, which maps to an element of order \(p^{n}\) for some finite \(n\). This, too, is impossible, so \(\psi^{\prime}\) is monic, as claimed.
The mapping \(\psi^{\prime}:G^{\prime}\to G^{\prime}/N^{\prime}\) is thus an embedding and, by the hypothesis imposed on \(G\), we know that \(G^{\prime}\) is generalized Bassian, whence \(N^{\prime}\) is a direct summand of \(G^{\prime}\). It now follows immediately that \(N\) is a direct summand of \(G\), since, as observed above, \(N_{p}\) is a direct summand of \(T_{p}\). Consequently, \(G\) is also generalized Bassian, as asserted.
Summarizing the results obtained above, we have established the following curious assertion that is our main motivating result.
**Theorem 3.11**.: _If \(G\) is an arbitrary nearly generalized Bassian group, then either \(G\) is a quasi-cyclic group or \(G\) is generalized Bassian._
We close our work in this section with the following two difficult questions (compare also with the queries from [3]).
**Problem 2**. If a Bassian group \(B\) has no direct summands isomorphic to \(\mathbb{Z}(p)\) for each prime \(p\), is the direct sum \(E\oplus B\) then a generalized Bassian group whenever \(E\) is an elementary group?
We would like to note that it was established in [3] that _the direct sum of an elementary group and a Bassian group, every elementary summand of which is finite, is a generalized Bassian group_.
**Problem 3**. Does it follow that the direct sum of a Bassian group and a generalized Bassian group remains a generalized Bassian group?
## 4. Concluding Remarks
Two obvious questions arise: what can be said about the hereditarily and super generalized Bassian groups (see Problem 1 quoted above)? A partial answer is easily given in relation to the first question.
If \(G\) is generalized Bassian and either torsion, torsion-free or splitting mixed, then every subgroup of \(G\) is likewise generalized Bassian. By utilizing the 'elementary plus Bassian' decomposition of Danchev-Keef given in Theorem 3.3 in Section 3, it is clear that any subgroup of a generalized Bassian group is again the direct sum of an elementary group and a Bassian group. However, it is not clear, and it seems to be an extremely difficult question, how to show that such a group is generalized Bassian. Nevertheless, several examples of sufficient conditions ensuring that the generalized Bassian property holds are given in [3], but no complete result is known yet; see Conjecture 1.3 in [3].
**Funding:** The scientific work of the first-named author (P.V. Danchev) was supported in part by the Bulgarian National Science Fund under Grant KP-06 No. 32/1 of December 07, 2019, as well as by the Junta de Andalucia under Grant FQM 264, and by the BIDEB 2221 of TUBITAK.
|
2309.11180 | Weak ergodicity breaking transition in randomly constrained model | Experiments in Rydberg atoms have recently found unusually slow decay from a
small number of special initial states. We investigate the robustness of such
long-lived states (LLS) by studying an ensemble of locally constrained random
systems with tunable range $\mu$. Upon varying $\mu$, we find a transition
between a thermal and a weakly non-ergodic (supporting a finite number of LLS)
phases. Furthermore, we demonstrate that the LLS observed in the experiments
disappear upon the addition of small perturbations so that the transition
reported here is distinct from known ones. We then show that the LLS dynamics
explores only part of the accessible Hilbert space, thus corresponding to
localisation in Hilbert space. | Aydin Deger, Achilleas Lazarides | 2023-09-20T10:02:38Z | http://arxiv.org/abs/2309.11180v1 | # Weak ergodicity breaking transition in randomly constrained model
###### Abstract
Experiments in Rydberg atoms have recently found unusually slow decay from a small number of special initial states. We investigate the robustness of such long-lived states (LLS) by studying an ensemble of locally constrained random systems with tunable range \(\mu\). Upon varying \(\mu\), we find a transition between a thermal and a weakly non-ergodic (supporting a finite number of LLS) phases. Furthermore, we demonstrate that the LLS observed in the experiments disappear upon the addition of small perturbations so that the transition reported here is distinct from known ones. We then show that the LLS dynamics explores only part of the accessible Hilbert space, thus corresponding to localisation in Hilbert space.
_Introduction:-_ Isolated quantum systems thermalise: the expectation values of local operators at long times are determined by the values of a small number of conserved quantities (typically, the energy, so that the expectation values coincide with those in the microcanonical ensemble) [1]. This is encapsulated in the Eigenstate Thermalization Hypothesis (ETH) [2; 3], which plays the role for quantum systems that the ergodic hypothesis does for classical, forming the bridge between unitary quantum dynamics and statistical mechanics.
Generic systems satisfy ETH as a matter of course [4]. Exceptions robust to weak perturbations include many-body localised systems in the presence of disorder [5; 6; 7] (although there is currently debate on whether these are localised or glassy [8; 9; 10]) or quasiperiodic potentials [11; 12]. Such systems disobey ETH either throughout the spectrum [13] or in a finite fraction of it [14].
Recently, experiments in Rydberg atoms have observed that certain initial conditions result in abnormally slow decay of the initial state [15]. Consequently, these systems can exhibit a "weak" violation of ergodicity, meaning that a limited number of non-thermalising eigenstates are present within an otherwise ergodic (thermal) system. This has triggered significant theoretical activity focussing on a class of constrained models, central among them the so-called PXP model [16; 17; 18; 19; 20].
In the PXP model, the bulk of the eigenstates satisfies the ETH, but a small number (a vanishing fraction of Hilbert space) violate it-these are called scarred states by analogy to the scarred states discussed in quantum chaos [21]. At the same time they have high overlap with certain experimentally relevant states. Thus, while generic initial states result in thermalisation, starting from one of these few initial states results in the observed anomalous behaviour. The central feature of the PXP model explaining this behaviour is then the existence of these scarred states.
A pertinent question that arises is the stability of these states when subjected to perturbations. Stability with respect to certain local perturbations has been studied in: Ref. [22], which found evidence for proximity of the PXP model to an (unknown) integrable model; Ref. [23], which found that the scarred states are unstable (hybridise with the thermal states) in the thermodynamic limit; and Ref. [24], which found that a subset of the scars does remain stable in the thermodynamic limit. This latter work also studies a translationally-invariant modification of the PXP model in which the constraints all have the same, tunable range \(\alpha\) and which hosts a few low-entropy states. The slow decay of the special initial states of PXP disappears, however, once the spatial range of the constraints is increased beyond that of the PXP model [25]. Finally, Ref. [26] studies an ensemble of random Hamiltonians, defined as adjacency matrices of random graphs [27]. While the members of this ensemble only have nonvanishing matrix elements between states differing by a single spin flip, a generic member of the ensemble cannot be written as a sum of local terms-the model is therefore intrinsically nonlocal. Additionally, this work focusses on spectral, rather than dynamical, properties.
In this Letter, we focus on the existence of initial states exhibiting slow decay in models with local PXP-like constraints but of spatially random range. We refer to these states as long-lived states (LLS). We find a phase transition between a fully ergodic (thermal) phase and one with weakly broken ergodicity supporting long-lived states (LLS) as the constraint strength \(\mu/N\) increases above a threshold. The LLS exhibit robust oscillations, returning close to their initial states repeatedly before ultimately decaying. These states are not connected to the LLS present in the clean PXP model: The latter disappear when we introduce local perturbations to the PXP model, and our LLS only appear once we increase the mean random constraint length. Meanwhile the bulk spectral properties of the model (such as level statistics) are insensitive to this transition, which is however marked by abrupt changes in both the probability and density of LLS. We finally establish that the LLS in our model are nontrivial, exploring only a small fraction of the accessible Hilbert space.
This paper is organised as follows. We first introduce
the model and the class of states we are interested in, then demonstrate the existence of a phase transition between a fully thermal and a weakly non-ergodic phase. This constitutes our main result.
We then establish that the LLS explore only a fraction of the accessible Hilbert space, and finally show that the PXP model is an exceptional member of the ensemble we consider-members of the ensemble with the same constraint range as the PXP model but no translational invariance do not support LLS.
_Model:-_ Our randomly-constrained model is described by the Hamiltonian
\[H=\sum_{i=1}^{N}X_{i}\prod_{j=1}^{r_{i}}P_{i-j}P_{i+j}, \tag{1}\]
where \(X_{i},Z_{i}\) are the usual Pauli spin operators and \(P_{i}=(\mathds{1}-Z_{i})/2\) projects to the down (facilitating) state of spin \(i\). Thus, the spin at \(i\) can flip only if the \(2r_{i}\) spins within distance \(r_{i}\) on either side are in the facilitating state. We select the \(r_{i}\) independently by drawing random integers from a uniform distribution on the interval \([\mu-\epsilon,\mu+\epsilon]\), with mean \(\mu\).
For the model of Eq. 1, like for PXP [28] as well as models displaying Hilbert space shattering [29; 30; 31], Fock space breaks up into disconnected components: spin configurations belonging to one are not reachable from the others by repeated action of the Hamiltonian [32]. Here and in what follows, we focus on the largest such component of the graph [33]. This sector is always ergodic as far as level statistics and eigenstate properties are concerned [34] but, as we will show, depending on \(\mu\), there are non-ergodic states.
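To make the ensemble concrete, the following minimal sketch (our own illustration rather than the authors' code; periodic boundary conditions and the parameter values are our assumptions) builds the Hamiltonian of Eq. (1) for a small chain and isolates the largest connected component of Fock space:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

# Sketch of the randomly constrained model of Eq. (1).  Assumptions
# (ours): periodic boundary conditions; bit 0 ("down") is the
# facilitating state selected by the projectors P_i.
rng = np.random.default_rng(0)
N, mu, eps = 12, 2, 1                              # sites, mean range, spread
r = rng.integers(mu - eps, mu + eps + 1, size=N)   # random ranges r_i

def can_flip(state, i):
    """Spin i may flip only if all spins within distance r_i on
    either side are in the facilitating (down) state."""
    return all(((state >> ((i + s * j) % N)) & 1) == 0
               for j in range(1, r[i] + 1) for s in (1, -1))

dim = 2 ** N
H = lil_matrix((dim, dim))
for a in range(dim):                    # loop over Fock (product) states
    for i in range(N):
        if can_flip(a, i):
            H[a, a ^ (1 << i)] = 1.0    # X_i flips spin i

H = H.tocsr()
# Fock space splits into disconnected components; keep the largest one.
n_comp, labels = connected_components(H, directed=False)
sector = np.flatnonzero(labels == np.argmax(np.bincount(labels)))
print(f"{n_comp} components; largest has dimension {len(sector)}")
```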
Our analysis is primarily focused on the return probability \(\mathcal{L}(t)=\left|\left\langle\alpha\right|\exp(-iHt)\left|\alpha\right\rangle\right|^{2}\), starting from product states \(\left|\alpha\right\rangle\). We aim to pinpoint those states \(\left|\alpha\right\rangle\) that exhibit revivals, where the system periodically reverts to a state proximate to its initial configuration. These states will be referred to as LLS, bearing conceptual resemblance to the two \(\mathbb{Z}_{2}\) states in the PXP model, which have high overlaps with scarred eigenstates.
In what follows we establish that both the probability that such states exist, \(p\), and their density \(\rho=N_{\rm LLS}/\mathcal{D}_{\mathcal{H}}\) (with \(N_{\rm LLS}\) the number of such states and \(\mathcal{D}_{\mathcal{H}}\) the dimension of the largest connected component of the Hilbert space) depart from \(0\) at finite values \(\mu_{c}^{p,\rho}/N\), with \(N\) the number of spins; for \(\mu<\mu_{c}^{p}\) there are no long-lived states, while for \(\mu>\mu_{c}^{\rho}\) a finite fraction of Fock states result in long-lived oscillations. Within our numerical analysis, \(\mu_{c}^{p}\) and \(\mu_{c}^{\rho}\) appear to be either identical or very close.
At first sight, this appears to contradict known results, since the PXP model is a particular realisation of our model for \(\mu=1,\epsilon=0\) but is known to have LLS. However, we will later show that there is no contradiction: local perturbations in the PXP model eliminate the scarred states, and consequently its LLS. The LLS we study only appear in the presence of stronger perturbations. Thus our results indicate the presence of a distinct phase with weakly broken ergodicity, unconnected to the one for the PXP model.
_Weak ergodicity breaking transition:-_ We use the scaled mean constraint range \(\mu/N\) as a tuning parameter; upon varying it (for fixed \(\epsilon=1\)), the probability and density of LLS depart from \(0\) (that is, LLS appear) at some critical \(0.2\leq\mu_{c}^{p,\rho}/N\leq 0.3\). We call this transition the _weak ergodicity breaking transition_ because it is not visible in the usual ergodicity measures such as level statistics or eigenstate properties [35], but rather is only visible in dynamics starting from a small number of initial states.
To be more concrete, we first provide a precise definition of the LLS and then characterise these states in terms of their strength and persistence. Starting with a given Fock state \(\left|\alpha\right\rangle\) we evolve it using Eq. 1 up to some time \(t_{\rm max}\) and then calculate the return probability \(\mathcal{L}(t)\). For \(\left|\alpha\right\rangle\) to qualify as an LLS, we count the number of times, \(N_{\rm th}\), that the return probability \(\mathcal{L}(t)\) goes above a given threshold \(\mathcal{L}_{\rm th}\). In what follows we define an LLS as one for which \(N_{\rm th}\geq 3\) for \(\mathcal{L}_{\rm th}=0.5\). We have checked that our main findings are qualitatively the same for other definitions of \(N_{\rm th}\) and \(\mathcal{L}_{\rm th}\)[36].
Under this definition, the \(\mathbb{Z}_{2}\) states are categorised as LLS in the PXP model. Consequently, this definition facilitates the connections between LLS and non-thermal many-body states. That is, the presence of LLS for a specified \(\mu\) implies the existence of eigenvectors that highly overlap with the LLS, giving rise to quantum many-body scarring.
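Given the return probability sampled on a uniform time grid, this criterion amounts to counting upward threshold crossings; a minimal sketch (our own illustration, with a hypothetical burn-in parameter added so that the trivial \(\mathcal{L}(0)=1\) peak is not counted) reads:

```python
import numpy as np

def is_lls(L_t, L_th=0.5, N_th=3, n_burn=1):
    """Classify an initial state as an LLS: the return probability
    L(t) must rise above L_th at least N_th times.  L_t holds L(t)
    sampled on a uniform grid; the first n_burn samples are skipped."""
    above = L_t[n_burn:] > L_th
    # upward crossings are False -> True transitions of the indicator
    crossings = np.count_nonzero(above[1:] & ~above[:-1]) + int(above[0])
    return crossings >= N_th
```

The density \(\rho\) then follows by applying this test to every Fock state of the largest connected component and dividing by \(\mathcal{D}_{\mathcal{H}}\).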
In Fig. 1, we report conclusive evidence of the aforementioned phase transition. The top panel (a) shows the LLS probability for different system sizes as a function of \(\mu/N\), while the bottom panel (b) shows the density of LLS: the fraction of Fock states in the largest connected cluster that are LLS. Both of these measures depart from \(0\) at around \(\mu/N\approx 0.2\).
For the probability \(p\), the trend in Fig. 1(a) with increasing system size makes it clear that, in the thermodynamic limit, \(p\) rapidly rises to \(1\) above \(\mu/N=\mu_{c}^{p}/N\approx 0.2\), so that at least one LLS appears. Meanwhile, the density \(\rho\) displays a more intricate behaviour, as depicted in Fig. 1(b). Initially, it too departs from \(0\) at \(\mu/N\approx\mu_{c}^{p}/N\) and just above it trends to a finite value as \(N\) increases. At \(\mu/N>\mu_{c}^{\rho}/N\approx 0.3\), the density increases with system size (see inset), so that one can confidently state that for \(\mu/N>\mu_{c}^{\rho}/N\) a finite density of Fock states are LLS. While there exists a parameter regime \(\mu_{c}^{p}/N<\mu/N<\mu_{c}^{\rho}/N\) where the behaviour appears to be non-monotonic, with two peaks appearing, we believe this to be a finite-size effect for the following reason. In the Supp. Mat. we show plots of both \(p\) and \(\rho\) for the larger thresholds \(\mathcal{L}_{\rm th}=0.6\) and \(0.7\), instead of the value \(0.5\) used here. We notice that for \(0.6\) the density also picks up this peak at smaller system sizes \(N\), but it disappears (with the trough between the peaks filled in) at larger \(N\). Conversely, for \(0.7\) the trough remains even for the largest system size we use. Since \(p\) is more sensitive than \(\rho\) (as it detects even a single LLS), we conclude that this behaviour will also be mirrored by \(\rho\), but at larger (and inaccessible to us) sizes, and we thus believe the actual transition in \(\rho\) to lie at the point where it first departs from \(0\), around \(0.2\). We have not been able to ascertain the origin of the two peaks, nor their eventual fate in the thermodynamic limit; whether they merge into one peak, remain separate, or one disappears remains to be determined in future work, but none of these possible scenarios changes our general conclusions.
In conclusion, from our results in Fig. 1 it is evident, first, that LLS definitely appear for \(\mu/N>\mu_{c}^{p}/N\approx 0.2\) and, second, that a _finite density_ of such states appears for \(\mu/N>\mu_{c}^{\rho}/N\approx 0.3\).
Two obvious questions present themselves at this point. First: Could it be that the largest connected component in Hilbert space is small enough that we are simply seeing recurrences because of its finiteness (as opposed to the revivals being due to the dynamics exploring only a subspace of that)? Second: From Fig. 1, it would appear that the PXP model, a specific realisation of \(\mu/N=1/N\), should display no LLS. But, as is well known, it does have LLS; so how can our results be reconciled with that?
We answer each of these questions in turn.
_Truncated Lanczos Iterations:-_ In order to address the first question, we analyse the fraction of the Hilbert space of the largest connected component explored by the dynamics starting from the LLS. To do so we use essentially the Lanczos algorithm. In brief, this involves the following steps: Given an initial vector \(\left|\alpha_{0}\right\rangle\) and a matrix \(H\), at the n\({}^{th}\) step one constructs a vector \(\left|\beta_{n}\right\rangle=H\left|\alpha_{n}\right\rangle-u_{n}\left|\alpha_{n}\right\rangle-v_{n}\left|\alpha_{n-1}\right\rangle\) with \(u_{n}=\left\langle\alpha_{n}\right|H\left|\alpha_{n}\right\rangle\) and \(v_{n+1}=\sqrt{\left\langle\beta_{n}|\beta_{n}\right\rangle}\); then \(\left|\alpha_{n+1}\right\rangle=\left|\beta_{n}\right\rangle/v_{n+1}\) (the term \(v_{n}\left|\alpha_{n-1}\right\rangle\) is absent at the first step). After \(m\) such iterations, one forms the matrix \(H_{\text{eff}}(m)=VTV^{\dagger}\), where \(V\) has the \(\left|\alpha_{n}\right\rangle\) as columns and \(T\) has the \(u_{n}\) on the main diagonal and the \(v_{n}\) on the first off-diagonal; this constitutes an approximation to \(H\). In principle, \(m=\mathcal{D}_{H}\) exactly reproduces \(H\).
Our approach _truncates_ this procedure at some order \(m\), determined by minimising the following cost function with respect to \(m\):
\[I=\frac{1}{t_{\text{max}}}\min_{m}\int_{0}^{t_{\text{max}}}dt\big{|}\mathcal{ L}(t)-\mathcal{L}_{\text{TLI}}(m,t)\big{|}. \tag{2}\]
Here, \(\mathcal{L}(t)\) represents the return probability of an LLS evolved with (1), while \(\mathcal{L}_{\text{TLI}}(m,t)\) denotes the return probability with an effective Hamiltonian \(H_{\text{eff}}(m)\) constructed by using the Lanczos algorithm. We terminate the minimization procedure when \(I\leq 0.01\). The aim of this truncation is to determine what fraction of Hilbert space is explored by the dynamics, by explicitly constructing that subspace; its dimension is the truncation order \(m_{c}\) at which the criterion is met.
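A minimal sketch of the procedure might look as follows (our own illustration; `H` is a real symmetric Hamiltonian such as the one constructed earlier, no reorthogonalisation is performed, and the Krylov space is assumed not to be exhausted before step \(m\)):

```python
import numpy as np

def lanczos_tridiag(H, psi0, m):
    """m steps of the Lanczos three-term recurrence starting from
    psi0; returns the diagonal u and off-diagonal v of T."""
    u, v = np.zeros(m), np.zeros(m - 1)
    q_prev = np.zeros_like(psi0, dtype=float)
    q = psi0 / np.linalg.norm(psi0)
    for n in range(m):
        beta = H @ q - (v[n - 1] * q_prev if n > 0 else 0.0)
        u[n] = q @ beta
        beta = beta - u[n] * q
        if n < m - 1:
            v[n] = np.linalg.norm(beta)
            q_prev, q = q, beta / v[n]
    return u, v

def return_prob_tli(H, psi0, m, times):
    """L_TLI(m, t): return probability under H_eff(m) = V T V^dag."""
    u, v = lanczos_tridiag(H, psi0, m)
    w, S = np.linalg.eigh(np.diag(u) + np.diag(v, 1) + np.diag(v, -1))
    amp2 = S[0, :] ** 2   # |<psi0|eigenvector k>|^2 in the Krylov basis
    return np.abs(amp2 @ np.exp(-1j * np.outer(w, times))) ** 2
```

The truncation order \(m_{c}\) is then obtained by increasing \(m\) until the time-averaged deviation \(I\) of Eq. (2), approximated on the grid by `np.mean(np.abs(L_exact - return_prob_tli(H, psi0, m, times)))`, drops below \(0.01\).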
For each realisation at a given \(\mu/N\), we obtain the \(m_{c}\) for each LLS, then average over all the LLS and realisations. The resulting \(m_{c}\) as a function of \(\mu/N\) is shown in Fig. 2, scaled by \(N\) (left panel) or \(\mathcal{D}_{H}\) (right panel). Our findings elucidate several key aspects. Firstly, the number of states \(m_{c}\) required is linear in system size \(N\) for given \(\mu/N\) (left panel), exhibiting a universal behaviour. Secondly, a notable decrease in \(m_{c}\) is observed with increasing constraint range (left panel). Lastly, the _fraction_ of the Hilbert space involved in the dynamics for given \(\mu/N\) decreases with increasing system size (right panel).
Let us summarise this calculation and the conclusions to be drawn from it. We have shown that the dynamics of the LLS is restricted to a Krylov subspace of a dimensionality that is a decreasing fraction of the dynamically accessible Hilbert space. This implies that the LLS are caused by nontrivial dynamics inside the largest connected cluster, rather than simply a result of the dimension of the largest cluster decreasing with \(\mu/N\).
_Disappearance and re-emergence of quantum scars:-_ We now come to the apparent contradiction mentioned earlier, namely, that according to Fig. 1 the probability for LLS to exist vanishes for \(\mu=1\), while the PXP model is a particular realisation of \(\mu/N=1/N\rightarrow_{N\rightarrow\infty}0\) and is known to host LLS. The resolution of this paradox is that the PXP model is a singular point: changing even a single \(r_{i}\) to a value \(r_{i}\neq 1\) results in a rapid decay of the oscillations and destroys the unique spectral structure. Fig. 3(a) shows the return probability starting
Figure 1: Weak ergodicity transition. (a) Probability of finding at least one LLS for mean constraint range \(\mu/N\); for \(N<18\) we averaged over \(1000\) realisations per \(\mu/N\) and up to time \(t_{\text{max}}=18\), while for \(N>20\) we used \(100\) realisations up to \(t_{\text{max}}=50\). (b) Density \(\rho\) of LLS as a function of \(\mu/N\); it jumps sharply away from \(0\) around \(\mu/N\sim 0.3\). The inset shows \(\rho\) vs system size \(N\) for \(\mu/N=\{0.28,0.3,0.32\}\); above (below) \(\mu/N\sim 0.3\), \(\rho\) increases (decreases) with system size. We believe the bump between \(\mu/N=0.2\) and \(0.3\) to be a finite-size effect (see text). This indicates a phase transition in the thermodynamic limit.
from a \(\mathbb{Z}_{2}\) state for both standard PXP and for the PXP model modified by setting a _single_\(r_{i_{0}}=2\) for some \(i_{0}\). In Fig. 3(b), we show the overlap of the \(\mathbb{Z}_{2}\) state with the eigenstates of the model for the case of the standard PXP (blue) and our perturbed model with \(r_{i_{0}}=2\); the characteristic peaks that are known from the PXP model disappear. Thus, weak, local perturbations of the PXP model disrupt the scars (thus also the LLS, \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{2}^{\prime}\) in the notation of [28]), which implies that the phase with LLS that we uncover at higher \(\mu/N\) is not connected to the scarred phase of the PXP model.
At this point, it is natural to question how a _local_ perturbation can disrupt _global_ quantities such as the eigenstate overlaps; after all, in the thermodynamic limit, a local perturbation should be negligible, so how is it capable of eliminating the oscillations? The resolution to this paradox is that the eigenstates most directly pertain to infinite-time values of observables via the ETH. Thus, as we show in Fig. 3(c), starting from a Néel \(\mathbb{Z}_{2}\) state, a model with a single \(r_{i_{0}}=2\) produces oscillations that decay inside a lightcone spreading out from \(i_{0}\); only after a time \(\propto N\) will they decay everywhere. The effect is visible in spectral properties such as the eigenstate expectation values only because those are relevant for the infinite-time limit.
_Conclusion:-_ In this work we have studied an ensemble of random, local, constrained models parameterised by the mean constraint range \(\mu\). We find that typical members of the ensemble transition from a low-\(\mu/N\) phase with no LLS to a high-\(\mu/N\) phase with a high density of LLS. This appears to contradict known results for the PXP model, which is a special case for \(\mu=1\). We reconcile the two results by showing that increasing the constraint range at a single site of PXP causes its unique spectral features to disappear.
A number of open questions on the nature and origins of these LLS remain. Numerical experiments with the Lanczos methods (not shown) suggest that the dynamics of some, but not all, of our LLS is well-reproduced by replacing the Hamiltonian by the adjacency matrix of a hypercube with the initial LLS as a node [18]. Can this idea be extended to include all of them? Do all LLS correspond to dynamics on a small number of special subgraphs (analogously to how the PXP LLS are due to adjacency matrices corresponding to a hypercube, or the "motifs" of Ref. [26])? We leave the answers to such questions for future work.
We thank Juan P. Garrahan for helpful discussions. This work was supported by EPSRC Grant No. EP/V012177/1.
|
2309.16837 | Measuring lepton number violation in heavy neutral lepton decays at the
future muon collider | The future muon collider has the potential to discover feebly interacting
particles in a wide range of masses above the electroweak scale. It is
particularly suitable to search for heavy neutral leptons (HNLs), as their
production cross section $\sigma \sim m_W^{-2}$ is not suppressed by the new
physics scale. We demonstrate that the muon collider, with the capacity to
observe up to $10^5$ events in the previously unexplored TeV mass range,
provides the means to measure the fraction of lepton number violating (LNV)
processes with precision at the level of a percent. This capability enables
elucidating the nature of HNLs, allowing us to differentiate between Majorana,
Dirac, and non-minimal scenarios featuring multiple degenerate HNLs. We link
the observed fraction of LNV processes to the parameters of the model with
three degenerate HNLs, which could be responsible for generating baryon
asymmetry in the Universe. Additionally, we present a simple estimate for the
number of signal events, as well as analyze the feasibility of vector boson
fusion processes in searches for HNLs. | Oleksii Mikulenko, Mariia Marinichenko | 2023-09-28T20:31:22Z | http://arxiv.org/abs/2309.16837v2 | # Probing leptogenesis at the future muon collider
###### Abstract
The future muon collider has the potential to discover feebly interacting particles in a wide range of masses above the electroweak scale. It is particularly suitable to search for heavy neutral leptons (HNLs), as their production cross section \(\sigma\sim m_{W}^{-2}\) is not suppressed by the new physics scale. We demonstrate that the muon collider, with the capacity to observe up to \(10^{5}\) events in the previously unexplored TeV mass range, provides the means to measure the fraction of lepton number violating (LNV) processes with precision at the level of a percent. This capability enables elucidating the nature of HNLs, allowing us to differentiate between Majorana, Dirac, and non-minimal scenarios featuring multiple degenerate HNLs. We link the observed fraction of LNV processes to the parameters of the model with three degenerate HNLs, which could be responsible for generating baryon asymmetry in the Universe. Additionally, we present a simple estimate for the number of signal events, as well as analyze the feasibility of vector boson fusion processes in searches for HNLs.
## 1 Introduction
A few puzzles remain to be addressed by the Standard Model (SM) of particle physics, such as the origin of neutrino masses and the origin of matter-antimatter asymmetry in the Universe. A notable solution is to complete the SM with the right-handed counterparts of the left-handed neutrinos - heavy neutral leptons (HNLs). With such a small adjustment, it is possible to explain the current neutrino oscillation data through the seesaw mechanism [1; 2; 3; 4; 5; 6; 7], while simultaneously providing the HNLs with a neutrino-like weak interaction, suppressed by a small mixing angle \(U_{\alpha}^{2}\ll 1\), \(\alpha=e\), \(\mu\), \(\tau\). At the same time, the new particles may produce the observed amount of baryon asymmetry in the Universe [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Both these goals may be achieved with GeV-TeV scale HNLs that can be probed by the current and proposed future experiments [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42].
The muon collider [43; 44] is a possible way forward to explore physics at the TeV scale, alternative to the proposed future \(pp\) and \(ee\) colliders: Future Circular Collider (FCC) [45; 46], Circular Electron-Positron collider (CEPC) [47], Compact Linear Collider (CLIC) [48], and International Linear Collider (ILC) [49]. Muons can be accelerated to high energies up to \(1-10\,\mathrm{TeV}\), equivalent to a \(10-100\,\mathrm{TeV}\) proton collider [43; 50]. At the same time, muons are fundamental particles and provide a clean environment, allowing for a simple description in high-energy scatterings. Finally, new physics in the lepton sector may be probed more efficiently in processes with muons, as compared to hadron colliders.
We consider the two standard setups with the energies \(\sqrt{s}=3\), \(10\,\mathrm{TeV}\) and integrated luminosities \(\mathcal{L}=1\), \(10\,\mathrm{ab}^{-1}\), respectively. Throughout the paper, the muon beams are assumed to be fully polarized, with muons having negative and antimuons having positive helicity, respectively. The detector apparatus is designed to cover angles above \(\theta_{\mathrm{det}}=10^{\circ}\) relative to the beam axis.
In these setups, the muon collider may serve as a powerful instrument in searching for feebly interacting HNLs in the region above the electroweak scale. The reasoning relies on the following considerations. The cross section of new physics processes at the scale \(M\) has the form
\[\sigma_{M}\sim\frac{g^{4}}{16\pi M^{2}}=10^{2}\,\mathrm{fb}\times g^{4}\left( \frac{10\,\mathrm{TeV}}{M}\right)^{2}, \tag{1}\]
where \(g\) is the coupling constant of the considered interaction. The luminosity for different runs is assumed to scale as \(\mathcal{L}\propto s\) in order to account for the suppression of the cross section at larger energies. The corresponding number of events is
\[\mathcal{L}\sigma\sim 10^{6}g^{4}\sim 10^{4}\,\mathrm{events} \tag{2}\]
for \(g^{2}\sim 0.1\) (electroweak coupling). However, this is not the case for TeV-scale HNLs, which are produced mainly in the \(t\) channel process, mediated by the \(W\)-boson [42]. The propagator that may carry a light-like transferred momentum \(q^{2}\to 0\) introduces a divergence in the differential cross section, which is regularized by the mediator mass. Therefore, the cross section avoids the suppression by the new physics scale and is enhanced to
\[\sigma\sim U_{\mu}^{2}\sigma_{\mathrm{weak}},\qquad\sigma_{\mathrm{weak}}\sim \frac{g^{4}}{16\pi m_{W}^{2}}\sim 200\,\mathrm{pb}\gg\sigma_{m_{N}} \tag{3}\]
with the expected number of produced HNLs being
\[N_{\mathrm{ev}}\sim U_{\mu}^{2}\mathcal{L}\sigma_{\mathrm{weak}}\sim\frac{U_ {\mu}^{2}}{10^{-8}}\times\frac{\mathcal{L}}{10\,\mathrm{ab}^{-1}}\,\mathrm{ events}. \tag{4}\]
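These order-of-magnitude statements are straightforward to check numerically; the sketch below (our own back-of-the-envelope verification, using the conversion \(1\,\mathrm{GeV}^{-2}\approx 0.389\,\mathrm{mb}\) and \(g^{2}\approx 0.42\)) evaluates Eqs. (1)-(4):

```python
import math

GEV2_TO_PB = 0.3894e9            # hbar^2 c^2: 1 GeV^-2 expressed in pb
g2, mW = 0.42, 80.4              # weak coupling squared, W mass in GeV

def sigma_scale(M_GeV):
    """Eq. (1): generic new-physics cross section at scale M, in pb."""
    return g2**2 / (16 * math.pi * M_GeV**2) * GEV2_TO_PB

sigma_weak = sigma_scale(mW)     # Eq. (3): ~ 200 pb, set by m_W alone
sigma_NP = sigma_scale(1e4)      # suppressed by (m_W / 10 TeV)^2

lumi = 10 * 1e6                  # 10 ab^-1 expressed in pb^-1
U2 = 1e-8                        # reference mixing angle of Eq. (4)
print(f"sigma_weak ~ {sigma_weak:.0f} pb; sigma(10 TeV) ~ {1e3 * sigma_NP:.0f} fb")
print(f"N_ev ~ {U2 * lumi * sigma_weak:.0f}  (order of magnitude, cf. Eq. (4))")
```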
Studies of the sensitivity of an experiment at the muon collider to HNLs have recently been performed in [51; 52; 53], with the obtained sensitivity reaching down to \(U_{\mu}^{2}\sim 10^{-6}\). The potential to explore HNLs above the electroweak scale with a high number of events provides a unique opportunity to study the properties of the new particles. The possibility to infer the underlying physics model of the HNLs, if detected, and link it to the origin of neutrino masses and baryon asymmetry has attracted significant attention [54; 55; 56; 57; 18; 58]. In the minimal scenario of leptogenesis with two quasi-degenerate HNLs, the range of the mixing angles is beyond the expected sensitivity reach. However, the existence of the third HNL may significantly alter the dynamics of the production of matter-antimatter asymmetry, yielding a much
larger parameter space [59, 60], which may be probed by an experiment hosted at the muon collider, see Fig. 1.
In this paper, we discuss the possible implications of the muon collider for revealing the underlying properties of the HNLs. Namely, we analyze the possibility of probing the scenario of leptogenesis with 3 HNLs. In Sec. 2, we complement the previous simulation-based results with simple analytical estimates for the signal yield of HNLs. Additionally, we discuss caveats and limitations associated with probing HNL couplings to the \(e\), \(\tau\) lepton flavors via vector boson fusion (VBF). In Sec. 3, we obtain the sensitivity to observations of lepton number violation and the implications of such a measurement for a model of 3 HNLs. We discuss our findings in Sec. 4.
## 2 Analytic estimate of the sensitivity reach
In the framework of the type-I seesaw mechanism, the Lagrangian with \(n\) right-handed neutrinos has the following form [61]:
\[\mathcal{L}=\mathcal{L}_{SM}+i\overline{N}_{I}\gamma^{\mu}\partial_{\mu}N_{I} -F_{\alpha I}\overline{L}_{\alpha}N_{I}\tilde{H}-\frac{1}{2}M_{IJ}\overline{N }_{I}^{C}N_{J}+h.c., \tag{1}\]
Figure 1: Constraints on heavy neutral leptons from [61]; see references therein. The shaded areas are regions excluded by direct detection and electroweak precision measurements (above) or by the seesaw limit (below). The parameter space above \(m_{N}>100\,\mathrm{GeV}\) is consistent with 3 HNL leptogenesis [60] and is a target for searches at multi-TeV colliders.
where \(N_{I}\), \(I=\overline{1,n}\) are HNL states with Majorana mass matrix \(M_{IJ}\); \(F_{\alpha I}\) are Yukawa couplings to active lepton flavors \(\alpha=e,\mu,\tau\); \(L_{\alpha}=(\nu_{\alpha},l_{\alpha})^{T}\) is the lepton left doublet; and \(H\) is the Higgs doublet.
Due to the mixing with heavy sterile neutrinos, the active neutrino flavor eigenstates become
\[\nu_{\alpha}=U^{\rm PMNS}_{\alpha j}\nu_{j}+\theta_{\alpha I}N_{I}^{C}, \tag{2}\]
where \(\nu_{j}\) and \(N_{I}\) are mass eigenstates, and \(U^{\rm PMNS}\) is the Pontecorvo-Maki-Nakagawa-Sakata matrix [62]. The small parameters \(\theta_{\alpha I}\) control the interaction strength of the HNLs with the \(W\), \(Z\), and \(H\) bosons. Their explicit form is given by
\[\theta_{\alpha I}=\frac{v}{\sqrt{2}}\sum_{J}F_{\alpha J}(M^{-1})_{JI} \tag{3}\]
with \(v=246\,\)GeV being the Higgs vacuum expectation value.
To characterize HNL interactions, it is convenient to define the mixing angles:
\[U^{2}_{\alpha I}=|\theta_{\alpha I}|^{2},\qquad U^{2}_{\alpha}=\sum_{I}U^{2}_ {\alpha I},\qquad U^{2}_{I}=\sum_{\alpha}U^{2}_{\alpha I},\qquad U^{2}=\sum_{ \alpha I}U^{2}_{\alpha I}. \tag{4}\]
At high-energy colliders, the detection of an HNL occurs by reconstructing a fully visible decay \(N\to lW\to lqq\), without a neutrino in the final state. The total number of events can be estimated by
\[N_{\rm ev}=2\mathcal{L}\times\sum_{\alpha}U^{2}_{\alpha}\sigma_{N_{\alpha}} \times\mathrm{Br}(N\to W(qq)l)\times\epsilon_{\rm eff}. \tag{5}\]
Here, the factor 2 takes into account the Majorana nature of the HNLs, and \(\sigma_{N_{\alpha}}\) represents the production cross section of a Dirac HNL-particle (i.e., excluding charge-conjugated channels) with unit mixing angle \(U^{2}_{\beta}=\delta_{\alpha\beta}\). The detection efficiency \(\epsilon_{\rm eff}\) is the probability of observing an HNL decay when it occurs. The branching ratio of the HNL decay is
\[\mathrm{Br}(N\to lW\to lqq)=\mathrm{Br}(N\to lW)\cdot\underbrace{\mathrm{Br}( W\to qq)}_{=0.676}. \tag{6}\]
The expressions for the decay widths of right-handed neutrinos are presented in Appendix A. In the TeV mass range, when \(M_{N}\) significantly exceeds the Higgs boson mass (\(M_{N}\gg m_{H}\)), the branching ratio of HNL decay into a \(W\) boson approaches \(\mathrm{Br}(N\to lW)\approx\frac{1}{2}\).
### \(\mu\)-mixing
The dominant production channel for an HNL mixing with \(\nu_{\mu}\) is the \(t\)-channel process \(\mu^{-}\mu^{+}\to\nu N\) shown in Fig. 2, with the cross section

\[\sigma_{N_{\mu}}\approx\frac{g_{W}^{4}}{16\pi m_{W}^{2}}\left(1-\frac{m_{N}^{ 2}}{s}\right)=\underbrace{\sigma_{\rm weak}}_{\sim 200\,\mathrm{pb}}\left(1- \frac{m_{N}^{2}}{s}\right). \tag{7}\]
The production in the collinear process comes at the cost that the produced particles move at small angles with respect to the beam axis, which may suppress the detection efficiency. Suppose that the detector system covers angles above \(\theta_{\rm det}\). In that case, the detection efficiency can be estimated as the fraction of HNLs whose decay products deviate by angles \(\theta>\theta_{\rm det}\) from the initial beam axis. We choose the reference value \(\theta_{\rm det}=10^{\circ}\) (corresponding to the pseudo-rapidity cut \(\eta=2.44\)). There are two limiting cases for the calculation of the efficiency:
1. If HNLs are not too light, they are produced with small boosts, and their decay products are not focused at the beam line. The typical opening angle between the decay products for a boosted particle may be estimated as \(\theta_{\rm dec.}\sim 1/\gamma_{N}\), where \(\gamma_{N}\) is the gamma factor of the HNL. The detection efficiency, i.e. the probability of emitting decay products in the detector coverage is simply: \[\epsilon_{\rm eff}\sim 1-\left(\frac{\theta_{\rm det}}{\theta_{\rm dec.}} \right)^{2}\sim 1-\theta_{\rm det}^{2}\frac{4sm_{N}^{2}}{(s+m_{N}^{2})^{2}}.\] (8)
2. If HNLs are sufficiently light, such that \(1/\gamma_{N}<\theta_{\rm det}\), the decay products do not deviate sufficiently from the initial direction of the HNL motion. It therefore becomes necessary to account for the nonzero angle of the HNL itself, which can be computed from the angular dependence of the cross section \(\frac{d\sigma}{dt}\propto(m_{W}^{2}-t)^{-2}\) with the requirement of deflection by an angle larger than \(\theta_{\rm det}\). \[\epsilon_{\rm eff}\sim\frac{\sigma_{N\nu}[\theta>\theta_{\rm det}]}{\sigma_{ N\nu}}=\left[1+\frac{s}{m_{W}^{2}}\left(1-\frac{m_{N}^{2}}{s}\right)\sin^{2} \left(\frac{\theta_{\rm det}}{2}\right)\right]^{-1}.\] (9) It should be noted that this effect becomes relevant in the regime \(m_{N}^{2}\ll s\); therefore, the detection efficiency approaches a constant value independent of the HNL mass. This approximation might be inaccurate for very light HNLs, where the separation of the decay products and the reconstruction of the kinematics become more challenging. Hence, we restrict ourselves to \(m_{N}>200\,{\rm GeV}\).
Figure 2: Feynman diagram for \(\mu^{-}\mu^{+}\to\nu N\) with \(W\)-boson exchange in the \(t\)-channel.
In our discussion, we omit direct cuts on the \(p_{T}\) of the decay products, because in both scenarios it is limited by \(p_{T}\gtrsim\frac{\sqrt{s}}{2}\theta_{\rm det}>100\,{\rm GeV}\). Therefore, we approximate the detection efficiency with the two simple formulas (8), (9), keeping the maximal of the two.
\[\epsilon_{\rm eff}(m_{N}|s,\theta_{\rm det})=\max\left[1-\theta_{\rm det}^{2} \frac{4sm_{N}^{2}}{(s+m_{N}^{2})^{2}},\left(1+\theta_{\rm det}^{2}\frac{s}{4m_ {W}^{2}}\right)^{-1}\right]. \tag{10}\]
To verify our estimates, we made a toy Monte-Carlo generator of the decay of an HNL into two massless particles. The HNL is produced at various angles \(\theta\) with the weights proportional to \(\frac{d\sigma}{d\cos\theta}\). The decay products are created in HNL's reference frame and are boosted to the lab frame. The fraction of decays where both decay products have \(\theta>\theta_{\rm det}\) is shown in Fig. 3, together with the estimate (10).
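A minimal version of such a cross-check is sketched below: `eff_formula` implements Eq. (10), and `eff_toy_mc` reproduces the spirit of the toy generator (t-channel angular weighting via inverse-CDF sampling, isotropic two-body decay in the HNL rest frame, boost to the lab frame); the details of the authors' generator may differ.

```python
import numpy as np

M_W = 80.4  # GeV

def eff_formula(m_n, sqrt_s, th_det):
    """Detection efficiency estimate of Eq. (10)."""
    s = sqrt_s**2
    boosted = 1.0 - th_det**2 * 4 * s * m_n**2 / (s + m_n**2)**2   # Eq. (8)
    light = 1.0 / (1.0 + th_det**2 * s / (4 * M_W**2))             # Eq. (9)
    return max(boosted, light)

def eff_toy_mc(m_n, sqrt_s, th_det, n=200_000, seed=1):
    """Toy MC: production angle weighted by dsigma/dt ~ (m_W^2 - t)^-2,
    followed by an isotropic decay into two massless daughters."""
    rng = np.random.default_rng(seed)
    s = sqrt_s**2
    e_n = (s + m_n**2) / (2 * sqrt_s)          # HNL energy in the lab frame
    p_n = (s - m_n**2) / (2 * sqrt_s)          # HNL momentum
    gam, beta = e_n / m_n, p_n / e_n

    # m_W^2 - t = A - B cos(theta_N), with A - B = m_W^2, A + B = m_W^2 + s - m_n^2;
    # sample cos(theta_N) by inverting the CDF of (A - B cos)^-2
    lo, hi = M_W**2, M_W**2 + s - m_n**2
    A, B = (hi + lo) / 2, (hi - lo) / 2
    inv = 1 / hi + rng.random(n) * (1 / lo - 1 / hi)
    cth = (A - 1.0 / inv) / B
    nhat = np.stack([np.sqrt(np.clip(1 - cth**2, 0, None)), np.zeros(n), cth], axis=1)

    # isotropic decay in the HNL rest frame; each daughter has E* = m_n / 2
    c = rng.uniform(-1, 1, n); ph = rng.uniform(0, 2 * np.pi, n)
    sdir = np.sqrt(1 - c**2)
    qstar = 0.5 * m_n * np.stack([sdir * np.cos(ph), sdir * np.sin(ph), c], axis=1)

    def lab_costheta(q):
        qpar = np.sum(q * nhat, axis=1)
        qlab = q + ((gam - 1) * qpar + gam * beta * 0.5 * m_n)[:, None] * nhat
        return qlab[:, 2] / np.linalg.norm(qlab, axis=1)

    ok = np.cos(th_det)   # both daughters outside the forward/backward cones
    both = (np.abs(lab_costheta(qstar)) < ok) & (np.abs(lab_costheta(-qstar)) < ok)
    return both.mean()

th = np.radians(10.0)
for m in (300.0, 1000.0, 2500.0):
    print(m, eff_formula(m, 3000.0, th), eff_toy_mc(m, 3000.0, th))
```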
To estimate the sensitivity of an experiment at the muon collider, it is necessary to know the number of expected SM background events \(N_{\rm bkg}\). Then, the exclusion region (at \(1\sigma\) confidence level) corresponds to the expected number of HNL events
\[N_{\rm ev}=\sqrt{N_{\rm bkg}}. \tag{11}\]
The number of background events lacks a simple analytical estimate; we therefore extract it from the previous simulation-based studies [51; 52; 53]. By fitting the background spectrum from [51] and choosing 100 GeV bins, equal to the size of a
Figure 3: Comparison of the detection efficiency, computed with a simple formula (10), and the results of a toy Monte-Carlo generator. The generator simulates HNL production and decay into two massless particles, and estimates the fraction of decay with both daughter particles moving into the detector.
peak for the reconstructed mass, we found that for a 3 TeV muon collider \(N_{\rm bkg}\) can be approximated as
\[N_{\rm bkg}=4.0\cdot 10^{4}\left(\exp\bigg{[}-1.6\cdot\frac{m_{N}}{1\,{\rm TeV}} \bigg{]}+0.03\right). \tag{12}\]
To validate our estimates, we compared (12) with the results from [52] and [51]. Our calculations indicate that there are roughly \(\sim 10^{4}\) events in the 100 GeV bin.
For the 10 TeV collider, the background is dominated by \(\mu^{+}\mu^{-}\to qqlll\nu\) for \(m_{N}\ \lesssim\ 5\) TeV and \(\mu^{+}\mu^{-}\to qql\nu\) for \(m_{N}\ \gtrsim\ 5\) TeV. Additionally, there is a contribution from \(\mu^{+}\mu^{-}\to qqll\), uniform in the invariant mass. To determine the background in this scenario, we used the ratios of the cross sections of those three processes (0.40 pb, 0.21 pb, and 0.46 pb, respectively) to the total cross section from [52], assuming a linear \(N_{\rm bkg}(m_{N})\) dependence. The resulting fit is
\[N_{\rm bkg}=1.5\cdot 10^{5}\,\left(1-0.07\cdot\frac{m_{N}}{1\,{\rm TeV}} \right). \tag{13}\]
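Combining the background fits (12)-(13) with the signal rate (5), the efficiency (10), and the criterion (11) gives the sensitivity directly. A rough sketch follows; the luminosities of 1 (10) ab\(^{-1}\) for the 3 (10) TeV options are assumptions consistent with \(\mathcal{L}\propto s\), and \(\mathrm{Br}(N\to lW)\approx 1/2\) is used:

```python
import numpy as np

SIGMA_WEAK, M_W, BR_W_QQ = 200.0, 80.4, 0.676   # pb, GeV, hadronic W branching

def n_bkg(m_n, sqrt_s):
    """Background fits of Eqs. (12) and (13); masses and energies in TeV."""
    if sqrt_s == 3.0:
        return 4.0e4 * (np.exp(-1.6 * m_n) + 0.03)
    return 1.5e5 * (1.0 - 0.07 * m_n)

def u2_reach(m_n, sqrt_s, lumi_ab):
    s, m2 = (sqrt_s * 1e3)**2, (m_n * 1e3)**2               # GeV^2
    sigma = SIGMA_WEAK * (1.0 - m2 / s)                      # Eq. (7), pb
    eff = max(1 - np.radians(10)**2 * 4 * s * m2 / (s + m2)**2,
              1.0 / (1 + np.radians(10)**2 * s / (4 * M_W**2)))   # Eq. (10)
    # Eq. (5) with Br(N -> lW) ~ 1/2 at TeV masses, so 2 * 1/2 = 1
    n_per_u2 = lumi_ab * 1e6 * sigma * BR_W_QQ * eff         # 1 ab^-1 = 1e6 pb^-1
    return np.sqrt(n_bkg(m_n, sqrt_s)) / n_per_u2            # N_ev = sqrt(N_bkg), Eq. (11)

for sqrt_s, lumi in ((3.0, 1.0), (10.0, 10.0)):
    for m_n in (0.5, 1.0, 2.0):
        print(f"sqrt(s)={sqrt_s} TeV, m_N={m_n} TeV: U^2 ~ {u2_reach(m_n, sqrt_s, lumi):.1e}")
```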
In addition, we performed a check with [51] and [53]. Here, we noticed a disagreement between the latter and previously mentioned papers, where the background is 10 times larger. Unfortunately, we are unable to explain this discrepancy.
The sensitivity to HNLs obtained by the presented simple estimates is shown in Fig. 4. Given the somewhat arbitrary estimates for the background, we plot the
Figure 4: The \(N_{\rm ev}\) map in the \((m_{N},U^{2})\) parameter space. The black band represents the sensitivity reach with the use of the background fits (12), (13), allowing them to vary by a factor of two. The gray area at the level \(\sim 10^{-3}\) is excluded by constraints from the electroweak precision observables [63; 64]. The FCC sensitivity is taken from [42]. The current charge lepton flavor violation constraints (cLFV) [65; 66; 67; 68; 69; 70] may in principle exclude mixing angles up to \(10^{-4}\). However, they rely on the simultaneous mixing of a HNL with electron and muon neutrinos. While not directly probing \(U_{\mu}^{2}\), these constraints are nevertheless relevant for realistic HNL scenarios.
sensitivity in the form of a band, allowing for a variation of the background by a factor of two. We note that the consistency of our estimates with the previous results is at the same level as the (dis)agreement between them.
### Feasibility of vector boson fusion for \(e\), \(\tau\)-mixings
A natural process that may experience a collinear enhancement of the cross section and produce HNL through mixing with \(\nu_{e}/\nu_{\tau}\) is vector boson fusion (VBF), see Fig. 5. This type of interaction has been identified as a promising process for the muon collider to probe new physics [71; 50; 72]. In this section, we demonstrate that this process does not provide an advantage compared to the \(s\)-channel annihilation process into \(N\nu\). Moreover, we highlight the caveats of the effective \(V\) approximation when considering HNL production.
The relevant reactions for HNL production are \(W^{-}W^{+}\to\nu N\), \(ZW^{\pm}\to l^{\pm}N\), \(\gamma W^{\pm}\to l^{\pm}N\), and \(ZZ\to\nu N\), shown in Fig 5. Their cross sections can be estimated with the use of the effective \(V\) approximation. The two necessary ingredients of the computations are the probability distribution \(f_{V}(\xi,Q^{2})\) to emit a vector boson with energy fraction \(\xi\equiv\frac{E_{V}}{E_{\mu}}\) and virtuality \(q^{2}=-Q^{2}<0\), and the fusion cross section \(\sigma(\hat{s},Q_{1}^{2},Q_{2}^{2})\), with the center of mass energy \(\hat{s}=\xi_{1}\xi_{2}s\). A detailed treatment of the emission of \(V\) may be found elsewhere, see e.g. [73; 74; 50; 75; 76].
For our purposes, we capture the general behavior of the emission probability by approximating
\[f_{V}(\xi,Q^{2})\sim\frac{g_{V}^{2}}{16\pi^{2}}\frac{1}{\xi}\frac{1}{Q^{2}+m_ {V}^{2}}, \tag{14}\]
which results in
\[\sigma_{\rm VBF}\sim\frac{g_{V}^{4}}{256\pi^{4}}\int\frac{d\xi_{1}}{\xi_{1}} \frac{d\xi_{2}}{\xi_{2}}\frac{dQ_{1}^{2}}{Q_{1}^{2}+m_{V}^{2}}\frac{dQ_{2}^{2 }}{Q_{2}^{2}+m_{V}^{2}}\times\theta(\hat{s}-m_{N}^{2})\sigma(\hat{s},Q_{1}^{2 },Q_{2}^{2}) \tag{15}\]
with \(\theta\) being the Heaviside step-function.
Figure 5: Feynman diagrams of HNL production through vector boson fusion.
The next step of the effective \(V\) approximation is to replace the off-shell cross section by its value for on-shell vector bosons \(\sigma(\hat{s},-m_{V}^{2},-m_{V}^{2})\). However, this step becomes invalid for the specific process of HNL production. For instance, consider a process \(W^{-}W^{+}\to\nu N\). The total cross section for real \(W\) bosons is divergent because of the \(t\)-channel singularity. Namely, the range of variable \(t\) in the limit \(m_{W}^{2}\ll m_{N}^{2},s\) is
\[m_{W}^{2}-s\left(1-\frac{m_{N}^{2}}{s}\right)\leq t\leq m_{W}^{2}\left[1-\frac {1}{4}\left(1-\frac{m_{N}^{2}}{s}\right)\right]. \tag{16}\]
Therefore, this variable crosses the zero value, i.e. contains an on-shell lepton in the \(t\)-channel propagator, for HNL masses
\[m_{N}^{2}<s-m_{W}^{2}, \tag{17}\]
i.e. the singularity appears for almost all HNL masses except for a narrow region near the kinematic threshold \(s\approx m_{N}^{2}\).
This type of singularity was first noticed in the context of hadron collisions [77; 78; 79] and has since received attention in applications to cosmology [80; 81] and to processes at a muon collider [82; 83; 84; 85]. It has been suggested to regularize the singularity by accounting for the decay width of the incoming particles or, in the case of the muon collider, for the transversal size of the beam. Both suggestions rely on the fact that the initial-state particles cannot be treated as well-defined plane waves. This violates exact energy-momentum conservation and prevents the particle in the propagator from being exactly on-shell.
It is important to recognize that in the context of HNL production, the singularity cannot be resolved by implying kinematic cuts on the final particles. As discussed in Sec. 2.1, sufficiently heavy HNLs may be produced with small boosts, and their decay products can be detected with \(\approx 1\) efficiency. The key parameter is the total number of HNLs created in the process.
Therefore, the cross section \(\sigma(\hat{s},Q_{1}^{2},Q_{2}^{2})\) should be evaluated strictly for off-shell vector bosons with negative invariant mass \(m_{W}^{2}\to-Q^{2}\). In this case, there is no zero crossing in the \(t\) range (16). Using the unitarity constraints that forbid \(\sigma\) from growing with energy, we can estimate the upper bounds on the cross section by:
\[\sigma(s,Q_{1}^{2},Q_{2}^{2})\lesssim\frac{g^{4}}{16\pi}\begin{cases}\frac{1}{ m_{W}\Gamma_{W}},&Q^{2}\lesssim m_{W}\Gamma_{W},\\ \frac{1}{Q^{2}},&m_{W}\Gamma_{W}\lesssim Q^{2}\lesssim m_{W}^{2},\\ \frac{1}{m_{W}^{2}},&m_{W}^{2}\lesssim Q^{2},\end{cases} \tag{18}\]
where \(Q^{2}\) represents any of the quantities \(Q_{1}^{2}\), \(Q_{2}^{2}\). The estimate for the case \(Q^{2}\lesssim m_{W}\Gamma_{W}\) accounts for the instability of the incoming particles, essentially employing the same argument that was used for the \(t\)-channel singularity [83].
The integration over \(Q_{1,2}^{2}\) in Eq. (15) can be split into three ranges, listed in Eq. (18). This results in the following upper bound
\[\hat{\sigma}(\hat{s})\equiv\int\frac{dQ_{1}^{2}}{Q_{1}^{2}+m_{W}^{2 }}\frac{dQ_{2}^{2}}{Q_{2}^{2}+m_{W}^{2}}\sigma(\hat{s},Q_{1}^{2},Q_{2}^{2}) \lesssim\\ \lesssim\frac{g^{4}}{16\pi}\left(\pi\frac{(m_{W}\Gamma_{W}/m_{W} ^{2})^{2}}{m_{W}\Gamma_{W}}+\pi\frac{1-\Gamma_{W}/m_{W}}{m_{W}^{2}}+\frac{1}{m _{W}^{2}}\left(\ln\frac{\hat{s}}{m_{W}^{2}}\right)^{2}\right), \tag{19}\]
where the first two terms are negligible compared to the third term, which is the usual \(\sigma_{\rm weak}\) cross section enhanced by the collinear emission of the vector bosons. Therefore, the final VBF cross section for HNL production (15) is bounded by
\[\sigma_{\rm VBF}\lesssim\sigma_{\rm weak}\times\frac{g^{4}}{256\pi^{4}} \times\left(\ln\frac{s}{m_{W}^{2}}\right)^{2}\times\ln\frac{s}{m_{N}^{2}} \approx 10^{-3}\times\sigma_{\rm weak}. \tag{20}\]
For comparison, the cross section of \(s\)-channel production is
\[\sigma_{s}\sim\frac{m_{W}^{2}}{s}\sigma_{\rm weak}=(10^{-4}\,\mbox{--}\,10^{ -3})\times\sigma_{\rm weak} \tag{21}\]
for the center-of-mass energies \(\sqrt{s}=3\) - \(10\,\mathrm{TeV}\).
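As a quick numerical comparison of the two estimates (a sketch with an illustrative HNL mass):

```python
import numpy as np

g2, m_w = 0.43, 80.4   # weak coupling squared and W mass (GeV)
m_n = 1000.0           # illustrative HNL mass, GeV
for sqrt_s in (3000.0, 10000.0):
    s = sqrt_s**2
    vbf = g2**2 / (256 * np.pi**4) * np.log(s / m_w**2)**2 * np.log(s / m_n**2)  # Eq. (20)
    schannel = m_w**2 / s                                                         # Eq. (21)
    print(f"sqrt(s) = {sqrt_s / 1e3:.0f} TeV: VBF bound ~ {vbf:.1e},"
          f" s-channel ~ {schannel:.1e} (in units of sigma_weak)")
```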
Therefore, we conclude that accounting for the VBF processes does not provide a significant improvement in sensitivity compared to the \(s\)-channel. Moreover, even in the most optimistic scenario, the sensitivity contour can only reach \(U_{\alpha}^{2}\sim 10^{-3}\), while the current constraints on \(e\) and \(\tau\) mixing are \(U_{e}^{2}\lesssim 10^{-3}\) and \(U_{\tau}^{2}\lesssim 10^{-2}\)[64].
At the same time, vector boson fusion may become the dominant process for the production of \(\nu_{e}\)- and \(\nu_{\tau}\)-mixing HNLs at muon colliders of much higher energy above \(10\,\mathrm{TeV}\). An accurate analysis of the effective boson approximation would be required in this case.
## 3 Probing lepton number violation and leptogenesis
The muon collider is a perfect instrument to measure the Majorana vs. Dirac nature of the HNL. Similar studies at various collider and beam-extracted experiments have been performed in [42; 86; 87; 88; 54]. The specific advantage of the muon collider stems from the fact that the HNL momentum tends to point in the same forward direction \(\theta<\pi/2\) as the muon it originated from.
Each reconstructed signal event can be classified by the initial lepton number, which is labeled according to the corresponding hemisphere containing the total momentum of the decay products. This lepton charge is then compared with the charge of the produced muon in the \(N\to\mu qq\) decay to eventually label the process as lepton number violating (LNV) or conserving (LNC).
### Sensitivity to lepton number violation
To quantify the measure of lepton number violation, we define the parameter \(A\) as twice the probability that an HNL, produced via mixing with a lepton, decays into the opposite-charge lepton:
\[A=2\times\text{Br}(N_{l}\to\bar{l}\dots).\]
With this definition, a Dirac HNL corresponds to \(A=0\), while for a purely Majorana HNL \(A=1\).
Given the total number of detectable HNL decays \(N_{\text{ev}}\), the counts of observed LNV-like and LNC-like events contain three contributions: from correctly and incorrectly classified signal events, and from background events:
\[N_{\text{LNV}} =\frac{A}{2}\cdot(1-f)N_{\text{ev}}+\left(1-\frac{A}{2}\right) \cdot fN_{\text{ev}}+N_{\text{LNV, bkg}},\] \[N_{\text{LNC}} =\left(1-\frac{A}{2}\right)\cdot(1-f)N_{\text{ev}}+\frac{A}{2} \cdot fN_{\text{ev}}+N_{\text{LNC, bkg}},\]
where \(f\) is the probability of incorrect classification of the reconstructed HNL decay, \(N_{\text{LNV/C, bkg}}\) are contributions of the SM background to each event class.
Incorrect classification of the signal may happen due to backward production of an HNL at large angles \(\theta>\pi/2\), and because of poor reconstruction of the total momentum. The probability \(f_{B}\) of backward production (\(\theta>\pi/2\)) can be approximated numerically as
\[f_{B}\approx 0.07\frac{m_{W}^{2}}{s}\times\frac{(1+m_{N}^{2}/s)^{2}}{1-m_{N}^{2}/s}\approx 5\cdot 10^{-5}\,\frac{(3\,\text{TeV})^{2}}{s},\quad s-m_{N}^{2}\gg m_{W}^{2}, \tag{10}\]
where the prefactor is computed by accounting for both \(W\)- and \(Z\)-boson-mediated processes. The total momentum of the HNL is \(p_{N}\sim\frac{1}{2}\sqrt{s}(1-m_{N}^{2}/s)\). Assuming that the uncertainty of the momentum reconstruction is not too large, i.e. \(\lesssim m_{W}\), both effects are negligible over the whole range of HNL masses except near the production threshold, \(s-m_{N}^{2}\lesssim m_{W}^{2}\). For the sake of simplicity, we ignore this specific case and set \(f=0\).
We assume that the contribution of the SM background is symmetric with respect to LNV/LNC-like events: \(N_{\text{LNV, bkg}}=N_{\text{LNC, bkg}}=N_{\text{bkg}}/2\). This assumption needs to be verified by background simulations, given that SM processes with initial muons may be more likely to produce a same-sign lepton in the forward direction. In that case, the background for the LNV-like events becomes lower, improving the overall sensitivity. Therefore, we consider our assumption conservative.
The sensitivity to \(A\) is estimated using the \(\chi^{2}\) test. For a model that is parametrized \((N_{\text{ev}},A)\), we may test an alternative model \((\tilde{N}_{\text{ev}},\tilde{A})\) by defining the quantity
\[\chi^{2}(\tilde{N}_{\text{ev}},\tilde{A}|N_{\text{ev}},A,N_{\text{bkg}})= \frac{(\tilde{N}_{\text{LNV}}-N_{\text{LNV}})^{2}}{\tilde{N}_{\text{LNV}}}+ \frac{(\tilde{N}_{\text{LNC}}-N_{\text{LNC}})^{2}}{\tilde{N}_{\text{LNC}}}. \tag{11}\]
We define the precision of the \(A\) determination as the interval \(\Delta A\) of variation of \(\tilde{A}\), in which \(\chi^{2}(\tilde{A})\) does not exceed the \(1\sigma\) quantile of the chi-squared distribution with 2 degrees of freedom \(\chi^{2}<2.30\), after marginalization over \(\tilde{N}_{\rm ev}\). Using the estimates for the number of signal and background events, we show this sensitivity in Fig. 6.
The numerical results for the precision \(\Delta A\) of the determination of \(A\) may be approximated by a simple relation
\[\Delta A\approx\frac{\sqrt{N_{\rm ev}\cdot A+N_{\rm bkg}}}{N_{\rm ev}}. \tag{3.3}\]
If the uncertainty is dominated by background fluctuations, the precision scales as \(\Delta A\sim 1/U^{2}\). This always holds for the Dirac case \(A=0\). Conversely, once \(N_{\rm ev}A>N_{\rm bkg}\), the precision improves more slowly with the mixing angle, \(\Delta A\propto 1/\sqrt{U^{2}}\).
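A direct implementation of Eq. (3.3) illustrates the two regimes (the event counts below are illustrative):

```python
import numpy as np

def delta_a(n_ev, a, n_bkg):
    """Approximate 1-sigma precision on the LNV parameter A, Eq. (3.3)."""
    return np.sqrt(n_ev * a + n_bkg) / n_ev

n_bkg = 1.0e4
for n_ev in (1e3, 1e4, 1e5):   # N_ev grows linearly with U^2
    print(f"N_ev={n_ev:.0e}: Dirac dA = {delta_a(n_ev, 0.0, n_bkg):.4f},"
          f" Majorana dA = {delta_a(n_ev, 1.0, n_bkg):.4f}")
```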
### Implications for leptogenesis
In the scenario with two degenerate HNLs \(N_{1}\), \(N_{2}\), the coupling matrix \(\theta_{\alpha I}\) and the mass matrix take the form
\[\theta_{\alpha I}=\frac{1}{\sqrt{2}}\begin{pmatrix}U_{e}&iU_{e}\\ U_{\mu}&iU_{\mu}\\ U_{\tau}&iU_{\tau}\end{pmatrix},\qquad\qquad M_{IJ}=\begin{pmatrix}m_{N}- \frac{\Delta M}{2}&0\\ 0&m_{N}+\frac{\Delta M}{2}\end{pmatrix}. \tag{3.4}\]
The two HNLs form a quasi-Dirac pair, \(\Psi=\frac{1}{\sqrt{2}}(|N_{1}\rangle+i|N_{2}\rangle)\), \(\bar{\Psi}=\frac{1}{\sqrt{2}}(|N_{1}\rangle-i|N_{2}\rangle)\). This pair violates the lepton charge only due to the relative oscillations between the HNLs arising from a slight difference in their mass \(\Delta M\).
The expression for the parameter \(A\) has the form:
\[A=2\frac{\int_{0}^{\infty}|g_{-}(t)|^{2}\,dt}{\int_{0}^{\infty}(|g_{+}(t)|^{2} +|g_{-}(t)|^{2})dt}, \tag{3.5}\]
where \(g_{\pm}\) are the matrix elements for LNC/LNV transitions, defined as
\[g_{+}(t)=\Psi^{\dagger}{\cal U}(t)\Psi,\qquad g_{-}(t)=\bar{\Psi}^{\dagger}{ \cal U}(t)\Psi, \tag{3.6}\]
where \(\mathcal{U}(t)\) is the evolution matrix. In the 2 HNL model, it has the form
\[{\cal U}(t)=\exp\left(-itM_{IJ}-\frac{\Gamma}{2}t\right), \tag{3.7}\]
and the lepton number violation parameter is given by the ratio [90; 91]:
\[A=\frac{\Delta M^{2}}{\Gamma^{2}+\Delta M^{2}}. \tag{3.8}\]
In the 3 HNL scenario, the coupling and mass matrix can be rewritten as
\[\theta_{\alpha I}=\frac{1}{\sqrt{2}}\begin{pmatrix}U_{e}&iU_{e}&0\\ U_{\mu}&iU_{\mu}&0\\ U_{\tau}&iU_{\tau}&0\end{pmatrix},\qquad\qquad M_{IJ}=m_{N}\cdot\mathbb{1}_{3 \times 3}+\Gamma\begin{pmatrix}0&0&\xi_{1}\\ 0&\mu_{2}&\xi_{2}\\ \xi_{1}&\xi_{2}&\mu_{3}\end{pmatrix}, \tag{3.9}\]
assuming that the couplings to the active neutrino sector explicitly respect the lepton
Figure 6: Constraints on lepton number violation \(\Delta A\) deviations for the pure Dirac \(A=0\) (left) and Majorana \(A=1\) (right) cases for the \(\sqrt{s}=3\,\mathrm{TeV}\) (top) and \(\sqrt{s}=10\,\mathrm{TeV}\) (bottom) muon collider. The gray area is the excluded region. The green shaded area represents the parameter space where successful leptogenesis with neither thermal nor vanishing initial conditions is possible [60]. The green dashed line represents the scale above which the 3 HNLs in the Majorana-like case would produce large loop corrections to active neutrino masses [89].
symmetry. All deviations in the mass matrix are defined as normalized to the decay width. In this case, the evolution matrix has the form
\[\mathcal{U}(t)=e^{-im_{N}t}\exp\left[-\Gamma t\begin{pmatrix}\frac{1}{2}&0&i\xi_ {1}\\ 0&\frac{1}{2}+i\mu_{2}&i\xi_{2}\\ i\xi_{1}&i\xi_{2}&i\mu_{3}\end{pmatrix}\right]. \tag{3.10}\]
The larger number of independent parameters adds two difficulties in relating the LNV parameter to the underlying model. First, a simple analytic expression of the form (3.8) is missing, and in general, \(A\) should be evaluated numerically. Second, the measurement of \(A\) adds only one constraint to the set of splitting parameters \(\{\mu_{i},\,\xi_{i}\}\), insufficient for reconstructing all the parameters.
To quantitatively explain the way LNV occurs, we need to separate the two main effects. If the oscillations between the interacting states \(|N_{1}\rangle\) and \(|N_{2}\rangle\) are rapid enough \(\mu_{2}\gtrsim 1\), we return to the case of 2 HNLs that exhibit Majorana-like behavior \(A\sim 1\). If, in contrast, these oscillations are suppressed \(\mu_{2}\ll 1\), the third HNL starts to play a role. If \(|\xi|\gg|\mu_{3}|\), the massive states of sterile neutrinos become completely misaligned with the initial interacting states, and the mass splittings become of order \(\sim\Gamma\xi\). Finally, if \(|\xi|\ll|\mu_{3}|\), a situation similar to the seesaw mechanism occurs: the interacting states \(\Psi\) acquire a small admixture of the third HNL with an amplitude of order \(\frac{\xi^{2}}{\mu_{3}}\). Then the production and decay processes with a Dirac-like HNL with mixing angles \(U^{2}\) are accompanied by the processes with a Majorana HNL \(|N_{3}\rangle\) with mixing angles suppressed to \(U^{2}\cdot\frac{\xi}{\mu_{3}}\).
If the parameters that define the scale of the probability of LNV to happen are sufficiently small, i.e. \(|\mu_{2}|\), \(|\xi|\ll 1\) (but arbitrary \(\mu_{3}\)), the leading term expansion may be written explicitly:
\[A=\mu_{2}^{2}+\frac{2(\xi_{1}^{2}+\xi_{2}^{2})}{1+4\mu_{3}^{2}}. \tag{3.11}\]
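The leading-order expression can be checked against a direct numerical evaluation of Eqs. (3.5)-(3.6) with the evolution matrix (3.10). A minimal sketch (setting \(\Gamma=1\), which drops out of the ratio; the time integrals are done exactly via the eigendecomposition of the exponent):

```python
import numpy as np

PSI  = np.array([1.0,  1.0j, 0.0]) / np.sqrt(2)   # interacting combination
PSIB = np.array([1.0, -1.0j, 0.0]) / np.sqrt(2)   # its CP conjugate

def lnv_fraction(mu2, mu3, xi1, xi2):
    """A of Eq. (3.5) for the 3 HNL evolution matrix (3.10); requires xi != 0
    so that all eigenvalues of the exponent have positive real parts."""
    K = np.array([[0.5,       0.0,            1j * xi1],
                  [0.0,       0.5 + 1j * mu2, 1j * xi2],
                  [1j * xi1,  1j * xi2,       1j * mu3]])
    lam, P = np.linalg.eig(K)
    P_inv = np.linalg.inv(P)

    def integral(v):
        # g(t) = v^dag U(t) Psi = sum_k a_k exp(-lam_k t)  (Gamma set to 1),
        # hence int_0^inf |g|^2 dt = sum_{k,l} a_k conj(a_l) / (lam_k + conj(lam_l))
        a = (v.conj() @ P) * (P_inv @ PSI)
        return np.sum(np.outer(a, a.conj()) / (lam[:, None] + lam.conj()[None, :])).real

    i_lnv, i_lnc = integral(PSIB), integral(PSI)
    return 2 * i_lnv / (i_lnc + i_lnv)

mu2, mu3, xi1, xi2 = 0.05, 0.3, 0.04, 0.06
print("numerical A      :", lnv_fraction(mu2, mu3, xi1, xi2))
print("Eq. (3.11) approx:", mu2**2 + 2 * (xi1**2 + xi2**2) / (1 + 4 * mu3**2))
```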
In general, this expression cannot be directly linked to the mass splittings. To fix the convention, we define the "third" massive state with mass \(M_{3}\) as the eigenstate of the mass matrix that is closest to the initial third state in the basis of Eq. (3.9), and the first two as \(M_{2}>M_{1}\). Then, there are two mass splittings \(\Delta M_{12}\geq 0\) and \(\Delta M_{3}=M_{3}-\frac{M_{1}+M_{2}}{2}\). To show the variation in \(A\) for various combinations of masses, we performed a scan in the parameters \(\{\mu_{i},\,\xi_{i}\}\). The minimal and maximal values of \(A\) that can be achieved for given \(\Delta M_{12}\), \(|\Delta M_{3}|\) are shown in Fig. 7.
While \(A\) is not fixed for given mass splittings, one can nevertheless estimate the order of magnitude of the mass splittings \(\Delta M\sim\Delta M_{12},|\Delta M_{3}|\).
\[A\sim\frac{\Delta M^{2}}{\Gamma^{2}}:\qquad\frac{\Delta M}{m_{N}}\sim\frac{ \Gamma(m_{N})\sqrt{A}}{m_{N}}\approx\sqrt{A}\,U^{2}\left(\frac{m_{N}}{1\,{\rm TeV }}\right)^{2},\quad m_{N}\gtrsim 200\,{\rm GeV}. \tag{3.12}\]
This relation has been confirmed by the scan, which shows that the approximate variation range of the parameter \(A\) is \([0.5\Delta M_{12}^{2},\max(\Delta M_{12}^{2},\Delta M_{3}^{2})]/\Gamma^{2}\) for mass splittings that differ by less than an order of magnitude.
Given these relations, for a specific measurement of \(A\), one can potentially reduce the parameter space of the models with 3 HNL which may explain the baryon asymmetry of the Universe. A scan of the values of \(A\) with a mapping onto the sensitivity map of the muon collider experiments may reveal some potential benefits of this measurement. The lower bounds of \(\Delta M/m_{N}\) defined by Eq. (3.12) that may be probed by a muon collider (replacing \(A\) by \(\Delta A\)) are shown in Fig. 8.
## 4 Conclusions
In this paper, we examined the potential of future muon colliders to measure lepton number violation in searches for heavy neutral leptons. This probe may address the role played by the newfound particles in explaining the matter-antimatter imbalance. The sensitivity of the muon collider down to \(U_{\mu}^{2}\sim 10^{-6}\) provides a unique probe of TeV-scale HNLs, superior to the competing FCC, CEPC, CLIC, and ILC projects, and enters the parameter space where these sterile neutrinos might account for the observed value of the baryon asymmetry.
We discussed the procedure for measuring lepton number violation in HNL decays and estimated the sensitivity to such searches. With the potential to observe up to \(10^{5}\) events in the unexplored region, an experiment at the muon collider affords a sensitivity to the ratio of lepton number violating decays that can reach a percent level, as shown in Fig. 6. This sensitivity is crucial for deciphering the true nature of
Figure 7: The minimal and maximal values of \(A\) for given pair of mass splittings \(\Delta M_{12}\) and \(\Delta M_{3}\), see text for conventions.
the HNL -- be it Majorana, Dirac, or situated in a non-minimal framework involving multiple nearly degenerate heavy neutral leptons.
In the sensitivity range of the muon collider, the minimal scenario of leptogenesis with two HNLs fails to produce the observed matter-antimatter imbalance, and three degenerate species are needed. We relate the properties of these HNLs to the lepton number violation parameter \(A\). We show that in this case there is no direct link between the LNV ratio and the mass splittings, contrary to the minimal case of two HNLs. This distinction is attributed to a greater number of parameters responsible for breaking the approximate lepton symmetry. Nevertheless, the overall scale of the largest mass splitting between the HNLs is bounded by the measurements of LNV processes. The most stringent lower bounds on the \((\Delta M/m_{N})\) ratio that could be attained at the muon collider are shown in Fig. 8.
Lastly, we complemented the previous simulation-based estimates of the sensitivity reach of an experiment at the muon collider to sterile neutrinos. Moreover, we demonstrated the failure of the vector boson fusion processes to probe couplings to electron- and tau-neutrinos beyond the excluded bounds, as well as highlighted theoretical nuances accompanying the description of these processes.
Figure 8: The lower bounds on the scale of mass splitting \(\Delta M/m_{N}\) (defined as in Eq. (3.12)) that may be probed by the LNV measurements at a muon collider, assuming an approximately Dirac-like scenario \(A\ll 1\). The black dashed line represents the sensitivity \(\Delta A=0.1\), for which a measurement of the mass splitting via (3.12) is possible. Below it, down to \(\Delta A=0.9\) (black solid line), the precision of the \(A\) measurement is insufficient, and only a preference towards the Majorana/Dirac nature can be established. In this region, \((\Delta M/m_{N})\) represents the transition point between the two cases.
## Acknowledgements
We thank Juraj Klaric for the discussions on the leptogenesis in the model of 3 HNLs and for proofreading the manuscript. The evaluation of the cross sections has been done with the use of FeynCalc [92, 93]. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (GA 694896), from the NWO Physics Vrij Programme "The Hidden Universe of Weakly Interacting Particles", No. 680.92.18.03, which is partly financed by the Dutch Research Council NWO, and from the Den Adel Fund in the form of a scholarship.
## Appendix A HNL decay width
There are 3 decay channels for HNLs. Their partial decay widths are [94, 95]:
\[\Gamma(N\to Wl_{\alpha})=\frac{g^{2}U_{\alpha}^{2}}{64\pi}\frac{m_{N}^{3}}{M_{W}^{2}}\bigg{(}1-\frac{M_{W}^{2}}{m_{N}^{2}}\bigg{)}^{2}\bigg{(}1+2\frac{M_{W}^{2}}{m_{N}^{2}}\bigg{)}, \tag{10}\]

\[\Gamma(N\to Z\nu_{\alpha})=\frac{g^{2}U_{\alpha}^{2}}{128\pi}\frac{m_{N}^{3}}{M_{W}^{2}}\bigg{(}1-\frac{M_{Z}^{2}}{m_{N}^{2}}\bigg{)}^{2}\bigg{(}1+2\frac{M_{Z}^{2}}{m_{N}^{2}}\bigg{)}, \tag{11}\]
\[\Gamma(N\to H\nu_{\alpha})=\frac{g^{2}U_{\alpha}^{2}}{128\pi}\frac{m_{N}^{3}} {M_{W}^{2}}\bigg{(}1-\frac{M_{H}^{2}}{m_{N}^{2}}\bigg{)}^{2}, \tag{12}\]
and approach the ratio \(\Gamma(N\to Wl):\Gamma(N\to Z\nu):\Gamma(N\to H\nu)=2:1:1\) in the limit \(m_{N}\gg M_{Z,W,H}\).
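A short numerical check of the partial widths above and of the quoted asymptotic ratio (a sketch; the coupling and boson masses are standard values in GeV):

```python
import numpy as np

G2, M_W, M_Z, M_H = 0.42, 80.4, 91.2, 125.1   # g^2 and boson masses (GeV)

def widths(m_n, u2=1.0):
    """Partial widths above (common prefactor g^2 U^2 m_N^3 / (64 pi M_W^2))."""
    pre = G2 * u2 * m_n**3 / (64 * np.pi * M_W**2)
    w = pre * (1 - M_W**2 / m_n**2)**2 * (1 + 2 * M_W**2 / m_n**2)
    z = 0.5 * pre * (1 - M_Z**2 / m_n**2)**2 * (1 + 2 * M_Z**2 / m_n**2)
    h = 0.5 * pre * (1 - M_H**2 / m_n**2)**2
    return w, z, h

for m_n in (300.0, 1000.0, 5000.0):
    w, z, h = widths(m_n)
    print(f"m_N = {m_n:6.0f} GeV: W:Z:H = {w / h:5.2f} : {z / h:5.2f} : 1")  # -> 2 : 1 : 1
```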
|
2309.09917 | Evaluation of Human-Understandability of Global Model Explanations using
Decision Tree | In explainable artificial intelligence (XAI) research, the predominant focus
has been on interpreting models for experts and practitioners. Model agnostic
and local explanation approaches are deemed interpretable and sufficient in
many applications. However, in domains like healthcare, where end users are
patients without AI or domain expertise, there is an urgent need for model
explanations that are more comprehensible and instil trust in the model's
operations. We hypothesise that generating model explanations that are
narrative, patient-specific and global(holistic of the model) would enable
better understandability and enable decision-making. We test this using a
decision tree model to generate both local and global explanations for patients
identified as having a high risk of coronary heart disease. These explanations
are presented to non-expert users. We find a strong individual preference for a
specific type of explanation. The majority of participants prefer global
explanations, while a smaller group prefers local explanations. A task based
evaluation of mental models of these participants provide valuable feedback to
enhance narrative global explanations. This, in turn, guides the design of
health informatics systems that are both trustworthy and actionable. | Adarsa Sivaprasad, Ehud Reiter, Nava Tintarev, Nir Oren | 2023-09-18T16:30:14Z | http://arxiv.org/abs/2309.09917v1 | # Evaluation of Human-Understandability of Global Model Explanations using Decision Tree
###### Abstract
In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model's operations. We hypothesise that generating model explanations that are narrative, patient-specific and _global_ (holistic of the model) would enable better understandability and decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease. These explanations are presented to non-expert users. We find a strong individual preference for a specific type of explanation. The majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of the mental models of these participants provides valuable feedback to enhance narrative global explanations. This, in turn, guides the design of health informatics systems that are both trustworthy and actionable.
Keywords: Global Explanation, End-user Understandability, Health Informatics
## 1 Introduction
The field of explainable artificial intelligence (XAI) has witnessed significant advancements, primarily focusing on the interpretability of models. However, the interpretability of an AI model for developers does not seamlessly translate into end-user interpretability [3]. Even inherently interpretable models like decision trees (DT) and decision lists are challenging to use in applications due to the complexity and scale of data. Hence, popular explanation techniques interpret black-box models by considering an individual input and the corresponding prediction - _local explanations_. Model-agnostic explanations such as Shapley values and Local Interpretable Model-Agnostic Explanations (LIME) offer insights into the features contributing to an individual prediction, revealing the importance of specific characteristics in decision-making. Nevertheless, they do not capture the complete model functioning, the comprehensive utilization of data, and, most importantly, the interactions among features. They lack the ability to facilitate generalization or provide a complete mental model of the system's workings.
In critical domains such as healthcare and financial predictions, the interpretability of AI models by end-users holds significant importance. The understandability of the underlying AI model and the trust in its predictions can have life-altering implications for stakeholders. Enabling user intervention and action to modify predicted outcomes require explanations
that address the _How_ and _Why_ questions, as well as convey causal relationships [18, 21]. Achieving this necessitates an overall comprehension of the model. Further, the explanation should not only align the user's mental model with the AI system's model but also be perceived as understandable and trustworthy. We propose that a global model explanation holds greater potential for providing understandability and building trust compared to local model explanations. This study is a preliminary step towards testing this.
What qualifies as a global explanation, and which methodologies provide overall understandability, remains relatively under-researched. The comparison between global and local model explanations for end users, along with presentation aspects such as narrative and visualization, bears significance when building explanation-centric applications. This study delves into the understandability of local and global explanations, specifically in the context of a coronary heart disease prediction model. We address the following research questions:
1. For non-expert users, do global explanations provide a better understanding of the AI's reasoning in comparison to (only) local explanations?
2. As the complexity of the explanation increases is there a difference in understandability and user preference for local and global explanations?
We use decision tree (DT) models, which are interpretable by design, and construct local and global explanations with varying levels of complexity. We gauge the perceived understandability of these explanations and evaluate their effectiveness based on predefined tasks. We also measure the changes in users' mental models following exposure to the explanations. Figure 1 shows the different evaluation parameters. The experiment identifies preferences in explanation types among different participant groups. It is found that while complexity does not have a significant effect on the perceived understandability and completeness of an explanation, errors in understanding increase with complexity. The obtained results offer valuable insights
Figure 1: A comparison of Local SHAP, Local and Global tree explanation of CHD risk prediction using decision tree model. Different evaluation parameters are computed based on end-user feedback of the explanation.
for designing narrative explanations for end-users and highlight the majority of participant preference for global explanations in healthcare risk models.
## 2 Related Work
In healthcare, a risk score is a quantifiable measure used to predict aspects of a patient's care, such as morbidity, the chance of response to treatment, or the cost of hospitalisation. Risk scoring is utilised for its predictive capabilities and for managing healthcare at scale. A predicted risk score represents the probability of an adverse outcome and the magnitude of its consequence. Article 22 of the General Data Protection Regulation (GDPR) mandates human involvement in automated decision-making and, in turn, the understandability of a risk prediction model. Hence, the use of risk scores requires the effective communication of these scores to all stakeholders - doctors, patients, hospital management, health regulators, insurance providers, etc. With statistical and black-box AI models used in risk score computations, it is an added responsibility of the AI model developer to ensure the interpretability of these systems to all stakeholders.
Current regulations such as model fact tables [25] are useful for clinicians, and approaches of local model interpretation [15, 24] for model developers. For a non-expert end-user, who has limited domain knowledge and who is not trained to understand a fact table, these approaches will not explain a recommendation given to them. Further, explaining a risk prediction model to the end user should address the risk the user perceives from numeric values, their previous knowledge, and any preferences and biases. In other words, the explanation presentation should address the socio-linguistic aspects involved [18].
Researchers have recognized that a good explanation should aim to align the user's mental model with the mental model of the system, promoting faithful comprehension and reducing cognitive dissonance [18]. Achieving such effectiveness is very context-dependent [1]. However, aspects of explanation presentation generalise across a broad spectrum of applications. The significance of narrative-style explanations is emphasised by [23], while [26] highlights the effectiveness of a combined visual and narrative explanation. Recent studies have evaluated existing systems in use [6, 16] and call for a focus on the design choices for explanation presentation in health informatics. Further, with tools available in the public domain, such as QRisk3 from the National Health Service (NHS), evaluating the impact and actionability of the explanation approaches in use would enable improving them and ensuring their safe usage.
Footnote 3: [https://qrisk.org/index.php](https://qrisk.org/index.php)
Before looking into evaluating black-box models, it is worthwhile to explore what constitutes a good explanation in interpretable models such as DTs and decision lists [13]. DT algorithms are methods of approximating a discrete-valued target by recursively splitting the data into smaller subsets based on the features that are most _informative_ for predicting the target. DTs can be interpreted as a tree or as a set of if-else rules, which is a useful representation for human understanding. The most successful DT models, like Classification and Regression Trees (CART) [5] and C4.5 [22], are greedy search algorithms. Finding DTs by optimising for, say, a fixed size is NP-hard, with no polynomial-time approximation [9]. Modern algorithms have attempted this by enforcing constraints such as the independence of variables [10] or by using all-purpose optimization toolboxes [2, 4, 27].
In [12], the authors attempt to optimise the algorithm for model interpretability to derive decision lists. The reduced size of the rules opens up the option of interpreting the decisions in their entirety, and not only in the context of a specific input/output - a global explanation. The authors highlight the influence of complexity on the understandability of
end-users. However, decision list algorithms still do not scale well for larger datasets. The Optimal Sparse Decision Trees (OSDT) [8] algorithm, later improved as Generalized and Scalable Optimal Sparse Decision Trees (GOSDT) [14], produces optimal decision trees over a variety of objectives, including F-score, AUC, and partial area under the ROC convex hull. GOSDT generates trees with a smaller number of nodes while maintaining accuracy on par with state-of-the-art models.
On explaining DTs to end-users, current studies have investigated local explanations using approaches such as counterfactuals [28] and the integration of contextual information, and have identified narrative-style textual explanations [17]. All of these attempt to answer the _why_ question based on a few input features, specific to a particular input. Extending these insights to global explanations should help end-users better understand the model and allow generalisation of the interpretations, driving actionability.
## 3 Experiment Design
Our main research question is to determine which type of explanation is most relevant for non-expert end-users to be able to understand the underlying risk model. We evaluate local and global explanations by measuring users' perceived understanding and completeness. We also measure whether the user's mental model has changed after reading an explanation.
### Dataset and Modeling
For the experiment, we used the Busselton dataset [11], which consists of 2874 patient records and information about whether they developed coronary heart disease (CHD) within ten years of the initial data collection. This study is similar to the data collected by the NHS to develop QRISK3 [7]. Computing a risk score demands that we also explain the risk score, the data used, and the probability measures of the scoring algorithm in addition to the model prediction. We limit the scope of this study to only explaining the model prediction and use the CHD observation from the dataset as the target variable for prediction. Using the GOSDT [14] algorithm, we fit the data to obtain decision trees. GOSDT handles both categorical and continuous variables. While the optimum model may have multiple nearby splits for numeric values, such splits can reduce the readability of the tree. Hence, we preprocess the data by converting most of the features into categorical variables. We follow the categories as mandated by the National Health Service (NHS). The data is pre-processed as described in Appendix 0.A, with 2669 records and 11 features.
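As an illustration of this modeling step, the sketch below fits a depth-limited tree on the preprocessed data. Scikit-learn's CART implementation is used here as a stand-in for GOSDT (whose Python interface varies across versions), and the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("busselton_preprocessed.csv")   # hypothetical file name
# NHS-style categorical bins for BMI (assumed column names)
df["bmi_cat"] = pd.cut(df["bmi"], bins=[0, 18.5, 25, 30, 100],
                       labels=["Underweight", "Healthy", "Overweight", "Obese"])
df = df.drop(columns=["bmi"])
X = pd.get_dummies(df.drop(columns=["chd_10yr"]))   # one-hot encode categories
y = df["chd_10yr"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {tree.score(X_te, y_te):.3f}")
print(export_text(tree, feature_names=list(X.columns)))
```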
The GOSDT algorithm generated a comprehensive decision tree for the dataset, comprising 19 leaf nodes at a depth of 5, achieving an accuracy of 90.9% (Figure 4 in Appendix 0.A). For the purpose of human evaluation and comparison of local and global explanations, it was necessary to have multiple DTs with comparable structures. Hence, we created subsets of the data by varying the ranges and combinations of _Age_ and _Gender_. By working with reduced data points, the size of the constructed trees was significantly reduced. To ensure larger trees for evaluation purposes, we enforced a consistent depth of 4. Ultimately, we selected four trees for the evaluation task as shown in Table 1.
As mentioned in [20], a higher complexity of explanation rules in a clinical setting leads to longer response times and decreased satisfaction with the explanations for end-users. The authors refer to unique variables within the rules as cognitive chunks, which contribute to complexity in understanding. In our experiment, global explanations naturally contain more cognitive chunks. To prevent bias in the results, we incorporated two levels of difficulty for each explanation type. The easy level consisted of trees with similar structures, both local
and global, featuring 5 nodes and decision paths of equal length with an identical number of cognitive chunks. For ease of understanding, we henceforth refer to a particular combination of explanation type and difficulty level as a specific scenario, namely - local-easy, global-easy, local-hard, and global-hard. A local-SHAP explanation was generated utilizing the same tree as the local-easy scenario. We use kernel SHAP [15] to obtain feature importance for the local-easy tree for specific patient input. The SHAP explanation is treated as a baseline for evaluation.
The hard scenarios for both explanation types consist of larger trees with similar structures. The tree had 8 nodes in the local-hard scenario and 9 nodes in the global-hard scenario. For global explanations, the explanation presentation involves more cognitive chunks, potentially introducing bias by making the global-hard scenario challenging to comprehend. Nevertheless, we proceeded with evaluating this scenario in our experiment.
Another factor to consider when generating explanations is a possible contradiction between the model explanation and general assumptions. For instance, a node _BMI = Normal_ appearing in decision rules for low CHD risk is expected, but not in those for high risk. Communicating such contradictions is important for the understandability of an explanation. We also include this in our experiment. Explanation scenarios categorized as hard involved contradictory explanations, which could prove more challenging to comprehend. We addressed these cases using semifactual [19] explanations, employing the phrase _even if_. We assess the impact of such risk narrations on understandability. Table 1 provides a summary of the four trees used for explanation generation.
### Generation of Explanation
For a given CHD prediction model and a corresponding patient input, the local explanation is a set of necessary conditions and predicted decisions of high/low risk. For the decision tree model in Figure 1(a), given particular patient info as input, the decision rule that is triggered to predict high risk is highlighted in blue. The path followed for the decision can be represented textually as shown in Figure 1(b). This is one possible representation. A more natural language expression of the rule is treated as a local explanation for the experiment. The language generation is rule-based. Details of the generation algorithm and an example of the evaluated explanation are given in Appendix B.
The global tree explanation is a list of all the decision rules of the tree. For a particular patient, a combination of the global explanation and the specific rule triggered corresponding to the given patient input is treated as the global prediction explanation. Once again, this is a choice we make for this experiment. A list of all decision nodes similar to feature importance in SHAP could also be a possible global tree explanation. For the patient in Figure 1(a), the corresponding global explanation is shown in Figure 1(c). As the tree size becomes large, the
| **Age** | **Gender** | **Leaf count** | **Accuracy (%)** | **Explanation Type** |
| --- | --- | --- | --- | --- |
| 70 - 79 | Female | 6 | 78.4 | Local Easy |
| 60 - 84 | Female | 6 | 82.5 | Global Easy |
| 60 - 70 | Male | 9 | 77.3 | Local Hard |
| 65 - 70 | Male | 10 | 85.4 | Global Hard |

Table 1: Description of DTs and type of explanation generated.
Figure 2: Example explanation presentations. (b) A local explanation of the decision in 2a; (c) the global explanation, listing the rules under which "A patient has High risk of CHD", e.g. "(Is Heavy Smoker)", "(HDL cholesterol is High) even if (not Heavy Smoker)", and "(BMI is Healthy) and (daily alcohol consumption is greater than 124.5ml) even if (not Heavy Smoker) and (Cholesterol is not High)".
number of rules and the number of features in each rule increase. This means the explanation size and the number of cognitive chunks in the explanation increase. The best way to frame natural language explanations for these different cases is a separate research problem that we do not address here. Further, we restrict the rules in the global explanation to those corresponding to a single risk category - high risk. Since this particular case involves only two categories, this still provides coverage of possible predictions while keeping the explanation less verbose. The narration generation involves the same algorithm as in the case of the local explanation.
In addition to the model accuracy, note that each leaf node has a probability and a confidence associated with that particular decision. For a particular node, the probability is the ratio of training data points that fit the criteria of that node to the number of data points in its parent node. A low-probability node denotes that the particular decision was rare in the training data. The statistical significance of this prediction denotes its confidence. Both measures are used for generating the decision narration. Appendix 0.B shows examples of their usage. To express the probabilities, we use the verbal mapping proposed by [26]. An additional usage of _possibly_ is introduced to accommodate cases involving low confidence and high probability.
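A sketch of this generation step follows, continuing the scikit-learn stand-in above. It is not the study's exact generation algorithm (Appendix 0.B), and the verbal probability map is illustrative of [26]'s approach rather than the precise wording used:

```python
def verbalise(p):
    """Hypothetical verbal mapping from leaf probability to wording."""
    if p > 0.9: return "almost certainly"
    if p > 0.7: return "very likely"
    if p > 0.5: return "likely"
    return "possibly"

def high_risk_rules(tree, feature_names, target_class=1):
    """All root-to-leaf decision paths that predict `target_class`."""
    t, rules = tree.tree_, []
    def walk(node, conds):
        if t.children_left[node] == -1:                  # leaf node
            counts = t.value[node][0]
            if counts.argmax() == target_class:
                rules.append((conds, counts[target_class] / counts.sum()))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node],  conds + [f"({name} <= {thr:.3g})"])
        walk(t.children_right[node], conds + [f"({name} > {thr:.3g})"])
    walk(0, [])
    return rules

def global_explanation(tree, feature_names):
    lines = ["A patient has High risk of CHD if:"]
    for conds, p in high_risk_rules(tree, feature_names):
        lines.append(f"- {verbalise(p)}, when " + " and ".join(conds))
    return "\n".join(lines)

def local_explanation(tree, feature_names, x):
    """Narrate only the decision path triggered by one patient row `x`."""
    t, conds = tree.tree_, []
    for node in tree.decision_path(x).indices[:-1]:      # skip the leaf itself
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        op = "<=" if x[0, t.feature[node]] <= thr else ">"
        conds.append(f"({name} {op} {thr:.3g})")
    return "Predicted because " + " and ".join(conds)

print(global_explanation(tree, list(X.columns)))
print(local_explanation(tree, list(X.columns), X_te.iloc[[0]].to_numpy()))
```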
The SHAP explanation does not have an associated confidence. We filter features with a SHAP score greater than 0 and present them as bulleted points in descending order of importance.
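The baseline can be produced along these lines, continuing the same fitted model. Note that, depending on the `shap` version, `shap_values` may return a list of per-class arrays (assumed here) or a single 3-D array:

```python
import shap

background = shap.sample(X_tr, 100)                        # summarise the training data
explainer = shap.KernelExplainer(tree.predict_proba, background)
phi = explainer.shap_values(X_te.iloc[[0]])[1][0]          # attributions for class 1 (high risk)

# keep only positive contributions, in decreasing order of importance
for feature, value in sorted(((f, v) for f, v in zip(X.columns, phi) if v > 0),
                             key=lambda fv: -fv[1]):
    print(f"- {feature} (SHAP value {value:.3f})")
```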
### Evaluation
For the evaluation, a within-subject survey was conducted with participants recruited on the Prolific platform. We conducted a pilot study among peers, and the feedback was used to improve the readability of the explanations and to assess the time taken for the tasks.
The survey involves 5 patient scenarios, namely local-SHAP, local-easy, local-hard, global-easy and global-hard. Each scenario consists of 2 pages. On the first page, the participant is provided with information about a patient. This consists of their features: age, gender, height, weight, BMI, blood pressure, different cholesterol values, and smoking and drinking habits. They are asked to enter their assumptions on which patient features may contribute to the AI model's prediction. This captures the participant's mental model regarding CHD. Appendix 0.C shows examples of the pages used in the survey.
On the next page, participants are presented with the same patient, the risk of CHD (high or low) as predicted by the AI system along with an explanation. They are asked to enter feature importance once again based on their understanding of the explanation. They are also asked to rate the explanation on three parameters: completeness, understandability, and verboseness, using a 5-level Likert scale. Text feedback on each explanation and overall feedback at the end of the survey is collected.
The evaluation of each explanation has 3 parameters from a Likert rating based on participant perceptions. In addition, based on the task of choosing feature importance, we compute two additional parameters: change in mental model and correctness of understanding. Change in mental model is defined as the update of perceived feature importance before and after the explanation. Let \(U=(u_{1},u_{2},...,u_{N})\), where \(u_{i}\in\{0,1\},1\leq i\leq N\), be the selected feature importance before the explanation, where N is the total number of features. Let \(V=(v_{1},v_{2},...,v_{N})\), where \(v_{i}\in\{0,1\},1\leq i\leq N\), be the selected feature importance after the explanation. The _change in mental model_ is computed as
\[D_{m}=\frac{d(U,V)}{N}\]
where \(d\) is the _Hamming distance_ between \(U\) and \(V\).
For each explanation, based on the features that are shown in the narration, we also know the _correct_ feature importance. In the case of SHAP, these are the features with a SHAP
score greater than 0. For local explanations, these are the features in the decision path, and for global explanations, it is all the features in the tree. If the correct feature importance is \(C=(c_{1},c_{2},...,c_{N})\), where \(c_{i}\in\{0,1\},1\leq i\leq N\), we compute the _error in understanding_ w.r.t. the system mental model as
\[D_{c}=\frac{d(V,C)}{N}\]
Since, for each feature, the participant selects yes/no for importance, these measures do not capture the relative importance among features. Table 2 summarises all the evaluation parameters.
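Both measures reduce to a normalised Hamming distance over binary vectors; a minimal computation with hypothetical responses for the N = 11 features:

```python
import numpy as np

def normalised_hamming(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a != b))   # d(a, b) / N

U = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0]   # perceived importance before the explanation
V = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # perceived importance after the explanation
C = [1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0]   # features actually used by the explanation

print("change in mental model D_m =", normalised_hamming(U, V))
print("error in understanding D_c =", normalised_hamming(V, C))
```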
## 4 Results and Discussion
Fifty participants were recruited from the Prolific platform for the experiment, ensuring a balanced gender profile. All participants were presented with five patient-explanation scenarios and were requested to evaluate each of them. The survey took an average of 26 minutes to complete, and participants received a compensation of £6 each, as per the minimum pay requirement. However, one participant was excluded from the analysis due to indications of low-effort responses, spending less than 1 minute on multiple scenarios. The demographic details of the selected participants are summarized in Table 3.
Based on the responses, we computed the evaluation parameters mentioned in the previous section. The Likert scale ratings for _Completeness_, _Understandability_, and _Verboseness_ are assigned values from 0 to 1, with 0 corresponding to 'Strongly Disagree' and 1 to 'Strongly Agree'. We also calculate the _Change in the mental model_ and the _Error in understanding_ from the selection of feature importance.
\begin{table}
\begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline
Measure & Definition \\ \hline \hline
_Completeness rating (CR)_ & User rating for the prompt: This explanation helps me completely understand why the AI system made the prediction \\ \hline
_Understandability rating (UR)_ & User rating for the prompt: Based on the explanation I understand how the model would behave for another patient \\ \hline
_Verboseness rating (VR)_ & User rating for the prompt: This explanation is long and uses more words than required \\ \hline
_Change in mental model (CMM)_ & Difference in perceived feature importance before and after viewing the model explanation \\ \hline
_Error in Understanding (EU)_ & Difference between model feature importance and perceived feature importance after viewing the explanation \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Evaluation criteria for comparison of different explanation types.
\begin{table}
\begin{tabular}{p{142.3pt}|p{284.5pt}} \hline \hline
**Feature** & **Category: Proportion** \\ \hline \hline
Age & 18-30 : 81.63\%, 30-40 : 16.33\%, 40-65 : 2.04\% \\ \hline
Gender & Male : 51.02\%, Female : 48.98\% \\ \hline
First language & English : 38.8\%, Others : 61.2\% \\ \hline \hline
\end{table}
Table 3: Demographic distribution of survey participants.
The calculated scores are also normalised to range from 0 to 1. The mean values across all participants are presented in Table 4.
While the local-easy scenario has the lowest error in understanding (EU), participants rated all the models comparably in terms of understandability (UR) and completeness (CR). The change in the mental model (CMM) was uniform across all types of explanations, except for local-SHAP and local-easy. To assess the significance of these results, we performed the Wilcoxon test for all combinations of explanation types. Since multiple comparisons are performed, we apply a Bonferroni correction to the p-values and choose a threshold of 0.01. In comparing local and global explanations, local-SHAP is excluded and the ratings for the two levels of difficulty in each case are averaged. The results are shown in Table 5. The observations that hold for a stricter threshold of 0.001 are highlighted with \(*\).
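A minimal sketch of this testing procedure (the scores below are illustrative, not the study data), assuming the per-participant scores for two explanation types are stored as paired arrays:

```python
from scipy.stats import wilcoxon

# Hypothetical paired per-participant EU scores for two explanation types.
eu_local  = [0.12, 0.00, 0.25, 0.12, 0.12, 0.00, 0.38, 0.12]
eu_global = [0.25, 0.12, 0.38, 0.25, 0.25, 0.12, 0.50, 0.25]

stat, p = wilcoxon(eu_local, eu_global)    # paired, non-parametric test
n_comparisons = 9                          # pairs of explanation types tested
p_corrected = min(1.0, p * n_comparisons)  # Bonferroni correction
print(f"p = {p:.4f}, corrected p = {p_corrected:.4f},",
      "significant" if p_corrected <= 0.01 else "not significant")
```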
Global explanations resulted in lower average understanding as measured by feature selection (EU), and it was observed that harder scenarios resulted in higher errors for both local and global explanations. For each type of explanation, the patient features wrongly selected
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & Local SHAP & Local Easy & Local Hard & Global Easy & Global Hard \\ \hline \hline
**CR** & 0.64 & 0.69 & _0.63_ & 0.68 & **0.69** \\ \hline
**UR** & _0.66_ & 0.71 & 0.67 & 0.72 & **0.74** \\ \hline
**VR** & _0.16_ & 0.26 & 0.23 & **0.56** & 0.52 \\ \hline
**CMM** & **0.42** & _0.28_ & 0.38 & 0.34 & 0.35 \\ \hline
**EU** & 0.12 & _0.07_ & 0.13 & 0.19 & **0.30** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation parameters across all the scenarios. Maximum is highlighted in bold and minimum in italics. CR - Completeness rating, UR - Understandability rating, VR - Verboseness rating, CMM - Change in mental model, EU - Error in Understanding.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
 & CR & UR & VR & CMM & EU \\ \hline \hline
Local vs Global & 0.42 & 0.44 & \(\mathbf{0.00^{*}}\) & 0.53 & \(\mathbf{0.00^{*}}\) \\ \hline
Local Easy vs Global Easy & 0.84 & 0.85 & \(\mathbf{0.00^{*}}\) & 0.05 & \(\mathbf{0.00^{*}}\) \\ \hline
Local Hard vs Global Hard & 0.35 & 0.42 & \(\mathbf{0.00^{*}}\) & 0.36 & \(\mathbf{0.00^{*}}\) \\ \hline
Local Easy vs Local Hard & 0.38 & 0.24 & 0.76 & \(\mathbf{0.00^{*}}\) & \(\mathbf{0.00^{*}}\) \\ \hline
Global Easy vs Global Hard & 0.50 & 0.53 & 0.56 & 0.43 & \(\mathbf{0.00^{*}}\) \\ \hline
Local SHAP vs Local Hard & 0.63 & 0.76 & 0.10 & 0.23 & 0.42 \\ \hline
Local SHAP vs Local Easy & 0.18 & 0.28 & 0.03 & \(\mathbf{0.00^{*}}\) & 0.11 \\ \hline
Local SHAP vs Global Hard & 0.02 & 0.30 & \(\mathbf{0.00^{*}}\) & 0.09 & \(\mathbf{0.00^{*}}\) \\ \hline
Local SHAP vs Global Easy & 0.16 & 0.28 & \(\mathbf{0.00^{*}}\) & \(\mathbf{0.01}\) & 0.02 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Significance of difference between types of explanation. CR - Completeness rating, UR - Understandability rating, VR - Verboseness rating, CMM - Change in mental model, EU - Error in Understanding. The values which are significant (Bonferroni Corrected p-value threshold of 0.01) are highlighted in **bold**. P-value \(\leq 0.001\) are highlighted with \(*\).
were investigated (Tables 11, 12). Incorrect feature selection related to _cholesterol_ caused the majority of errors. Participants chose the wrong cholesterol-related feature, possibly due to a lack of attention or limited understanding of medical terminology. Improving the presentation of explanations and providing more contextual information could potentially address this issue. Importantly, when presented with semifactual explanations in the hard scenarios, both local and global explanations led almost half or more of the participants to exclude the corresponding feature. This clearly points to the ambiguity of such narration.
The error analysis does not explain the contradiction between the understandability ratings and the correctness of feature selection. Interestingly, a considerable number of participants expressed a preference for longer, global explanations, even if they did not fully comprehend them. That global explanations were also rated as significantly more verbose adds to this contradiction. To delve deeper into this phenomenon, participant clustering was performed based on the ratings and computed scores. Using the k-means algorithm, three distinct groups of participants were identified and manually validated; a sketch of this clustering step is given below. Figure 3 displays the average rating across different parameters for each group.
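A minimal sketch of this clustering step, using scikit-learn's k-means; the data layout (one row per participant, one column per parameter and scenario) is our assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical matrix: 49 participants x 25 columns
# (CR, UR, VR, CMM, EU for each of the 5 scenarios), all scores in [0, 1].
X = rng.uniform(0, 1, size=(49, 25))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for g in range(3):
    print(f"Group {g + 1}: {np.sum(kmeans.labels_ == g)} participants")
```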
* Group 1: Strongly prefer and understand local explanations. The cluster consists of 11 participants who rate patient-specific local tree explanations highest on completeness and understandability.
* Group 2: Majority group that rates global explanations as most understandable. This cluster consists of 22 people who show the weakest preference between global and local explanations and between difficulty levels. They rate global explanations highest on completeness and understandability.
* Group 3: This cluster consists of 16 people who strongly prefer global explanations but are critical of the narration. More detail-oriented, they rate global explanations as more understandable and complete, while criticizing the narration and presentation of the explanations in the feedback form. The average error in feature selection for global explanations in this group is lower than in Group 2.
It is evident that within the clusters, the ratings on each parameter show a clear preferential pattern between the explanation types. Groups 1 and 3 show strong polarity in their preferences
Figure 3: Average rating for different explanation type across the participant groups
and their ratings tend towards _Strongly agree_ or _Strongly disagree_. Both these groups identify global explanations as verbose. This shows that, in a healthcare setting, the effectiveness of an explanation for an end-user depends strongly on their individual preferences.
### Local vs. Global
While there is no significant difference between local and global explanations overall, strong differences emerge at the group level. Group 1 rates the local explanation as complete, while both Groups 2 and 3 favour the global explanation for completeness. Similar preferences are observed in participants' perception of understandability within each group. When a stricter p-value threshold of 0.001 is applied, the significance of the difference in user ratings for understandability and correctness holds only in Group 1. The results of the Wilcoxon test for combinations of explanation types within groups are given in Appendix 0.D.
* _The results indicate that certain people strongly prefer a specific type of explanation. This preference does not necessarily translate into understanding._
* _In all groups, a higher error in feature selection is observed for global explanations, mainly due to the semifactual explanation and the misinterpretation of features related to cholesterol._
Among participants belonging to Group 2, the factors driving their preference for global explanations remain unclear. Examination of the demographic data (Table 6) reveals no apparent patterns, leading us to suspect the influence of distinct cognitive styles within the groups. Further investigations are warranted to unveil the underlying reasons for these preferences and errors. While users may perceive explanations as understandable, it is vital to recognize that this perception may not necessarily translate into accurate decision-making. The lack of significant changes in mental models substantiates this, indicating the need for continued exploration to optimize explanation presentations for healthcare AI models.
### Tree Explanation vs. SHAP
The overall ratings of SHAP explanations are comparable to those of local-hard explanations but lower than those of local-easy explanations generated from the same underlying decision tree. This suggests that the comprehensibility and interpretability of SHAP explanations are slightly lower than those of the local-easy explanations. However, this may be attributed to a presentation bias, as all participants were exposed to the SHAP explanation first. It is noted that the presentation style of SHAP explanations, using bullet points, is generally considered less verbose, even though it does not impact the error in understanding or the perceived understandability and completeness. Hence the simpler readability of the SHAP explanation does not seem to have impacted its overall understandability.
\begin{table}
\begin{tabular}{l|l l l} \hline \hline
 & **Group 1** & **Group 2** & **Group 3** \\ \hline
Number of participants & 11 & 22 & 16 \\ \hline
Male to female ratio & 4:7 & 9:13 & 12:4 \\ \hline
Count of full-time employed & 2 & 8 & 5 \\ \hline
Student to non-student ratio & 8:2 & 10:9 & 8:7 \\ \hline
Number of native English speakers & 4 & 11 & 4 \\ \hline
Ethnicity, white to black ratio & 9:2 & 11:10 & 11:3 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Demographic distribution of participants within each group. All the features are not available for all participants. Missing data are excluded in the counts.
### Easy vs. Hard
The ratings provided by the participants on the Likert scale did not reveal any significant distinction between the explanation scenarios characterized as easy and hard. However, an examination of the impact of difficulty levels on the error in feature selection uncovered significant results. Hard scenarios, with either global or local explanations, exhibited significantly higher error rates, even within participant groups.
* _Explanation understanding depends strongly on the complexity of the feature interaction being explained._
When participants encountered explanations that deviated from their preexisting notions of feature dependence, the resulting confusion became a major contributor to error in the hard scenarios. We observed that harder scenarios, on average, caused larger changes in the participants' mental models. However, this alone was insufficient to mitigate the observed errors. Furthermore, the consistent error patterns across different participant groups present an opportunity to enhance the current framework for narrating and presenting explanations, benefiting all participants.
## 5 Limitations and Future Work
The experiment provides evidence for the usefulness of global explanations in health informatics. Identifying the cognitive styles that lead to particular explanation preferences and errors in comprehension is pivotal to applying global explanations in real-life applications. The current experiment has been carried out on a small dataset. Evaluating these findings on a larger dataset with more data points and a larger feature set will be undertaken in future studies. We recognise that regression models are commonly used in risk prediction. Expanding the scope of the narrative global explanation to the regression setting and assessing its comparative utility against local explanations will enable the integration of our findings into established risk-prediction tools.
Further, the evaluation in this study was crowdsourced, and hence the participants are not representative of real-life patients. Most of the participants fall in an age category for which the model does not predict a risk of heart disease, which may have biased their ratings. We aim to rectify this by conducting the evaluation on a representative patient population, which would also require addressing ethical concerns.
The current study has not focussed on generating effective global explanations. The use of semifactuals has not addressed the mismatch with users' mental models. Further, the presentation of explanation features was seen to have introduced errors. Effective communication and presentation techniques will be vital in reducing them. Though we have used a linguistic representation of probability and confidence, evaluations in this regard remain to be done. For risk communication at scale, this is a crucial component. Further research is warranted to delve deeper into these aspects and refine the design and implementation of explanation systems.
## Acknowledgement
We would like to thank Dr. Sameen Maruf and Prof. Ingrid Zukerman for generously sharing their expertise in utilizing the dataset, continuous support and valuable feedback in designing the experiment. We thank Nikolay Babakov and Prof. Alberto Jose Bugarin Diz for their
feedback throughout the development of this research. We also thank the anonymous reviewers for their feedback which has significantly improved this work. A. Sivaprasad is ESR in the NL4XAI project which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 860621.
|
2309.17297 | Structurally complete finitary extensions of positive Łukasiewicz
logic | In this paper we study $\mathcal{MV}^+$, i.e. the positive fragment of
Łukasiewicz Multi-Valued Logic $\mathcal{MV}$. In particular we describe all
the finitary extensions of $\mathcal{MV}^+$ that are structurally complete and
all the axiomatic extensions of $\mathcal{MV}^+$ that are hereditarily
structurally complete. Examples of hereditarily structurally complete finitary
extensions and non hereditarily structurally complete finitary extensions are
provided. | Paolo Aglianò, Francesco Manfucci | 2023-09-29T14:54:16Z | http://arxiv.org/abs/2309.17297v1 | # Structurally complete finitary extensions of positive Lukasiewicz logic
###### Abstract.
In this paper we study \(\mathcal{M}\mathcal{V}^{+}\), i.e. the positive fragment of Łukasiewicz Multi-Valued Logic \(\mathcal{M}\mathcal{V}\). In particular we describe all the finitary extensions of \(\mathcal{M}\mathcal{V}^{+}\) that are structurally complete and all the axiomatic extensions of \(\mathcal{M}\mathcal{V}^{+}\) that are hereditarily structurally complete. Examples of hereditarily structurally complete finitary extensions and non-hereditarily structurally complete finitary extensions are provided.
## 1. Introduction
A large class of substructural logics (i.e. logics that lack some structural rules) is given by the extensions of the _Full Lambek Calculus_ \(\mathcal{FL}\) (see [27] p. 76). All extensions of \(\mathcal{FL}\) have a primitive connective \(\mathbf{0}\) that denotes _falsum_; so if \(\mathcal{L}\) is an extension of \(\mathcal{FL}\) it makes sense to study its _positive_ fragment \(\mathcal{L}^{+}\). The language of \(\mathcal{L}^{+}\) is obtained from the language of \(\mathcal{L}\) by deleting \(\mathbf{0}\) from the signature; the valid derivations in \(\mathcal{L}^{+}\) are the valid derivations in \(\mathcal{L}\) that contain only \(\mathbf{0}\)-free formulas.
An important subfamily of substructural logics over \(\mathcal{FL}\) consists of logics that satisfy both exchange and weakening but lack contraction; in this case the primitive connectives can be taken as \(\vee,\wedge,\rightarrow,\mathbf{0},\mathbf{1}\), where of course \(\mathbf{1}\) represents _truth_. The minimal substructural logic of this kind is denoted by \(\mathcal{FL}_{ew}\), and hence all the substructural logics lacking contraction are extensions of it. Probably Intuitionistic Logic \(\mathcal{IL}\) is the most famous of them, but there are many others, such as the monoidal logic \(\mathcal{M}\mathcal{IL}\) [25], Hajek's Basic Logic \(\mathcal{BL}\) [31], Dummett's Logic \(\mathcal{SL}\) [31], Product Logic \(\mathcal{PL}\) [31] and, last but not least, Łukasiewicz's Many-Valued Logic \(\mathcal{M}\mathcal{V}\) [37].
All extensions of \(\mathcal{FL}\) are _algebraizable_ with an _equivalent algebraic semantics_ (in the sense of Blok-Pigozzi [17]) that is at least a quasivariety of algebras; by the same token all positive fragments are algebraizable as well, and moreover the machinery of algebraization takes a very transparent form in both cases. The equivalent algebraic semantics of \(\mathcal{FL}\) and \(\mathcal{FL}^{+}\) are the variety of \(\mathsf{FL}\)-algebras [27] and the variety of residuated lattices [33] respectively. Algebraizability in Blok-Pigozzi's fashion is recalled in Section 2 below.

## 2. Preliminaries
The following facts were essentially discovered by A. Tarski, J. Łoś and R. Lyndon in the pioneering phase of model theory; for proofs of this and similar statements the reader can consult [21].
**Lemma 2.1**.: _Let \(\mathsf{K}\) be any class of algebras. Then:_
1. \(\mathsf{K}\) _is a universal class if and only if_ \(\mathbf{ISP}_{u}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of all algebras in which a set of universal sentences is valid;_
2. \(\mathsf{K}\) _is a quasivariety if and only if_ \(\mathbf{ISPP}_{u}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of all algebras in which a set of quasiequations is valid;_
3. \(\mathsf{K}\) _is a variety if and only if_ \(\mathbf{HSP}(\mathsf{K})=\mathsf{K}\) _if and only if it is the class of all algebras in which a set of equations is valid._
We will often write \(\mathbf{V}\) for \(\mathbf{HSP}\) and \(\mathbf{Q}\) for \(\mathbf{ISPP}_{u}\). It is clear that both \(\mathbf{V}\) and \(\mathbf{Q}\) are closure operators on classes of algebras of the same type; this implies among other things that for a given variety \(\mathsf{V}\) the class of subvarieties of \(\mathsf{V}\) is a complete lattice which we denote by \(\Lambda(\mathsf{V})\). If \(\mathsf{Q}\) is a quasivariety that is not a variety then \(\Lambda(\mathsf{Q})\) is still a lattice but it is not necessarily complete (in particular does not need to have maximum). The class of subquasivarieties of a quasivariety \(\mathsf{Q}\) is a complete lattice denoted by \(\Lambda_{q}(\mathsf{Q})\).
For the definition of free algebras in a class \(\mathsf{K}\) on a set \(X\) of generators in symbols \(\mathbf{F}_{\mathsf{K}}(X)\), we refer again to [20]. We merely observe that every free algebra on a class \(\mathsf{K}\) belongs to \(\mathbf{ISP}(\mathsf{K})\). It follows that every free algebra in \(\mathsf{K}\) is free in \(\mathbf{ISP}(\mathsf{K})\) and therefore for any quasivariety \(\mathsf{Q}\), \(\mathbf{F}_{\mathsf{Q}}(X)=\mathbf{F}_{\mathsf{V}(\mathsf{Q})}(X)\).
Let \(\mathbf{B},(\mathbf{A}_{i})_{i\in I}\) be algebras in the same signature; we say that \(\mathbf{B}\)**embeds** in \(\prod_{i\in I}\mathbf{A}_{i}\) if \(\mathbf{B}\in\mathbf{IS}(\prod_{i\in I}\mathbf{A}_{i})\). Let \(p_{i}\) be the \(i\)-th projection, or better, the composition of the isomorphism and the \(i\)-th projection, from \(\mathbf{B}\) to \(\mathbf{A}_{i}\); the embedding is **subdirect** if for all \(i\in I\), \(p_{i}(\mathbf{B})=\mathbf{A}_{i}\) and in this case we will write
\[\mathbf{B}\leq_{sd}\prod_{i\in I}\mathbf{A}_{i}.\]
An algebra \(\mathbf{B}\) is **subdirectly irreducible** if it is nontrivial and for any subdirect embedding
\[\mathbf{B}\leq_{sd}\prod_{i\in I}\mathbf{A}_{i}.\]
there is an \(i\in I\) such that the projection \(p_{i}\) is an isomorphism between \(\mathbf{B}\) and \(\mathbf{A}_{i}\). It can be shown that \(\mathbf{A}\) is **subdirectly irreducible** if and only if the congruence lattice \(\operatorname{Con}(\mathbf{A})\) of \(\mathbf{A}\) has a unique minimal nontrivial congruence. If \(\mathsf{V}\) is a variety we denote by \(\mathsf{V}_{si}\) the class of subdirectly irreducible algebras in \(\mathsf{V}\).
**Theorem 2.2**.:
1. _(Birkhoff_ _[_15_]_) Every algebra can be subdirectly embedded in a product of subdirectly irreducible algebras. So if_ \(\mathbf{A}\in\mathsf{V}\)_, then_ \(\mathbf{A}\) _can be subdirectly embedded in a product of members of_ \(\mathsf{V}_{si}\)_._
2. _(Jonsson's Lemma_ _[_34_]__) Suppose that_ \(\mathsf{K}\) _is a class of algebras such that_ \(\mathsf{V}(\mathsf{K})\) _is congruence distributive; then_ \(\mathsf{V}_{si}\subseteq\mathbf{HSP}_{u}(\mathsf{K})\)_._
If \(\mathsf{Q}\) is a quasivariety and \(\mathbf{A}\in\mathsf{Q}\), a \(\mathsf{Q}\)**-congruence** of \(\mathbf{A}\) is a congruence \(\theta\) such that \(\mathbf{A}/\theta\in\mathsf{Q}\); clearly the \(\mathsf{Q}\)-congruences form an algebraic lattice \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\). An algebra \(\mathbf{A}\in\mathsf{Q}\) is \(\mathsf{Q}\)**-irreducible** if \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) has a unique minimal nontrivial element; since clearly \(\operatorname{Con}_{\mathsf{Q}}(\mathbf{A})\) is a meet subsemilattice of \(\operatorname{Con}(\mathbf{A})\), any subdirectly irreducible algebra is \(\mathsf{Q}\)-irreducible in any quasivariety \(\mathsf{Q}\) to which it belongs. For a quasivariety
\(\mathsf{Q}\) we denote by \(\mathsf{Q}_{ir}\) the class of \(\mathsf{Q}\)-irreducible algebras in \(\mathsf{Q}\). We have the equivalent of Birkhoff's and Jonsson's results for quasivarieties:
**Theorem 2.3**.: _Let \(\mathsf{Q}\) be any quasivariety._
1. _(Mal'cev_ _[_38_]__) Every_ \(\mathbf{A}\in\mathsf{Q}\) _is subdirectly embeddable in a product of algebras in_ \(\mathsf{Q}_{ir}\)_._
2. _(Czelakowski-Dziobiak_ _[_23_]__) If_ \(\mathsf{Q}=\mathbf{Q}(\mathsf{K})\)_, then_ \(\mathsf{Q}_{ir}\subseteq\mathbf{ISP}_{u}(\mathsf{K})\)_._
### The Blok-Pigozzi connection
In what follows, by a **logic** \(\mathcal{L}\) we mean a substitution invariant consequence relation \(\vdash_{\mathcal{L}}\) on the set of terms \(\mathbf{T}_{\tau}(\omega)\) (also called the _algebra of formulas_) of some algebraic language \(\tau\). A **clause** in \(\mathcal{L}\) is a formal expression \(\Sigma\Rightarrow\Delta\) where \(\Sigma,\Delta\) are finite sets of formulas of \(\mathcal{L}\); a clause is a **rule** if \(\Delta=\{\delta\}\). A rule is an **axiom** if \(\Sigma=\emptyset\).
If the equivalence between formulas (i.e. provable equivalence according to \(\vdash\)) is a congruence on the algebra of formulas, then we can form the _Lindenbaum-Tarski algebra_ as the quotient of the algebra of formulas modulo provable equivalence. If the quasivariety \(\mathsf{Q}_{\mathcal{L}}\) generated by the Lindenbaum-Tarski algebra satisfies further conditions, then it is the class of algebraic models of \(\mathcal{L}\), and \(\mathcal{L}\) is **algebraizable** with **equivalent algebraic semantics** \(\mathsf{Q}_{\mathcal{L}}\); Blok and Pigozzi [17] formalized this connection and gave necessary and sufficient conditions for a logic to be algebraizable. Essentially, one needs a finite set of equations \(\tau=\{\delta_{i}\approx\varepsilon_{i}:i=1,\ldots,n\}\) in the language of \(\mathsf{Q}_{\mathcal{L}}\) and a finite set of formulas of \(\mathcal{L}\) in two variables \(\Delta(x,y)=\{\varphi_{1}(x,y),\ldots,\varphi_{m}(x,y)\}\) that allow one to transform equations, quasiequations and universal sentences of \(\mathsf{Q}_{\mathcal{L}}\) into formulas, rules and clauses of \(\mathcal{L}\) and vice versa; moreover this transformation must respect both the consequence relation and the semantical consequence. That is to say, for all sets of formulas \(\Gamma\) of \(\mathcal{L}\) and formulas \(\varphi\in\mathbf{T}_{\tau}(\omega)\)
\[\Gamma\vdash\varphi\quad\text{if and only if}\quad\{\delta_{i}(\Gamma)\approx \varepsilon_{i}(\Gamma):i=1,\ldots,n\}\models_{\mathsf{Q}_{\mathcal{L}}}\{ \delta_{i}(\varphi)\approx\varepsilon_{i}(\varphi):i=1,\ldots,n\}\]
where of course \(\delta_{i}(\Gamma)\approx\varepsilon_{i}(\Gamma)\) is a shorthand for \(\delta_{i}(\psi)\approx\varepsilon_{i}(\psi)\) for all \(\psi\in\Gamma\) and also
\[\mathsf{Q}_{\mathcal{L}}\models(x\approx y)\Leftrightarrow(\bigwedge_{i=1}^{n }\bigwedge_{j=1}^{m}(\delta_{i}(\varphi_{j}(x,y))\approx\varepsilon_{i}( \varphi_{j}(x,y))).\]
A quasivariety \(\mathsf{Q}\) is a _quasivariety of logic_ if it is the equivalent quasivariety semantics for some logic \(\mathcal{L}_{\mathsf{Q}}\); the Galois connection between algebraizable logics and quasivarieties of logic is given by
\[\mathcal{L}_{\mathsf{Q}_{\mathcal{L}}}=\mathcal{L}\qquad\qquad\mathsf{Q}_{ \mathcal{L}_{\mathsf{Q}}}=\mathsf{Q}.\]
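To fix ideas with a standard instance (not spelled out above, but classical for extensions of \(\mathcal{FL}_{ew}\) and their positive fragments, hence in particular for \(\mathcal{MV}^{+}\)): one may take

\[\tau=\{x\approx 1\},\qquad\Delta(x,y)=\{x\to y,\ y\to x\},\]

so that a rule \(\Gamma\vdash\varphi\) translates into the quasiequation \(\bigwedge_{\gamma\in\Gamma}\gamma\approx 1\Rightarrow\varphi\approx 1\), and an equation \(s\approx t\) translates into the pair of formulas \(s\to t\) and \(t\to s\).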
### Structural completeness in algebra and logic
An **extension** of \(\mathcal{L}\) over the language \(\tau\) is a logic \(\mathcal{L}^{\prime}\) over the same language such that \(\Sigma\vdash_{\mathcal{L}}\delta\) implies \(\Sigma\vdash_{\mathcal{L}^{\prime}}\delta\). A **finitary extension** of \(\mathcal{L}\) is an extension of \(\mathcal{L}\) obtained by adding a set of rules to the calculus of \(\mathcal{L}\); an **axiomatic extension** of \(\mathcal{L}\) is an extension of \(\mathcal{L}\) obtained by adding a set of axioms to the calculus of \(\mathcal{L}\). The class of the (finitary, axiomatic) extensions of the logic \(\mathcal{L}\) is never empty, since \(\mathcal{L}\) clearly belongs to it. Therefore we can define closure operators in a standard way and hence complete lattices; so \(\mathrm{Th}(\mathcal{L})\), \(\mathrm{Th}_{f}(\mathcal{L})\) and \(\mathrm{Th}_{a}(\mathcal{L})\) will be the lattices of all the extensions, finitary extensions and axiomatic extensions of \(\mathcal{L}\) respectively. Via the Blok-Pigozzi connection we get:
**Theorem 2.4**.: _Let \(\mathcal{L}\) be an algebraizable logic with equivalent algebraic semantics \(\mathsf{Q}_{\mathcal{L}}\). Then_
1. _the lattice_ \(\mathrm{Th}_{a}(\mathcal{L})\) _of the axiomatic extensions of_ \(\mathcal{L}\) _is dually isomorphic with_ \(\Lambda(\mathsf{Q})\)_;_
2. _the lattice_ \(\mathrm{Th}_{f}(\mathcal{L})\) _of the finitary extensions of_ \(\mathcal{L}\) _is dually isomorphic with_ \(\Lambda_{q}(\mathsf{Q})\)_._
An algebraizable logic \(\mathcal{L}\) is **tabular** if it is the logic of a finite algebra; in other words \(\mathsf{Q}_{\mathcal{L}}\) is a finitely generated quasivariety, i.e. \(\mathsf{Q}_{\mathcal{L}}=\mathbf{Q}(\mathbf{A})\) for some finite algebra \(\mathbf{A}\). A logic is **locally tabular** if, for any finite \(k\), there exist only finitely many pairwise nonequivalent formulas in \(\mathcal{L}\) built from the variables \(x_{1},\ldots,x_{k}\). It is clear that an algebraizable logic is locally tabular if and only if \(\mathsf{Q}_{\mathcal{L}}\) is a locally finite quasivariety.
A clause \(\Sigma\Rightarrow\Delta\) is **admissible** in \(\mathcal{L}\) if every substitution that makes the premises into a theorem of \(\mathcal{L}\), also makes at least one of the conclusions in \(\Delta\) a theorem of \(\mathcal{L}\). In particular a rule is admissible in \(\mathcal{L}\) if, when added to its calculus, it does not produce any new theorem. A clause \(\Sigma\Rightarrow\Delta\) is **derivable** in \(\mathcal{L}\) if \(\Sigma\vdash\delta\) for some \(\delta\in\Delta\). A logic \(\mathcal{L}\) is **structurally complete** if every admissible rule of \(\mathcal{L}\) is derivable in \(\mathcal{L}\); a logic is **hereditarily structurally complete** if every finitary extension of \(\mathcal{L}\) is structurally complete. Determining structural completeness of a logic is in general a very deep and challenging problem; here we will use only the parts of the theory that are necessary but for an extensive treatment of the subject we direct the reader to [10].
A quasivariety \(\mathsf{Q}\) is **structural** if for every subquasivariety \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\), \(\mathbf{H}(\mathsf{Q}^{\prime})=\mathbf{H}(\mathsf{Q})\) implies \(\mathsf{Q}^{\prime}=\mathsf{Q}\).
**Theorem 2.5**.: _[13] For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is structural;_
2. \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))=\mathsf{Q}\)_._
For any quasivariety \(\mathsf{Q}\), we define the **structural core of \(\mathsf{Q}\)** as the smallest \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) such that \(\mathbf{H}(\mathsf{Q})=\mathbf{H}(\mathsf{Q}^{\prime})\). The structural core of a quasivariety always exists:
**Corollary 2.6**.: _For any quasivariety \(\mathsf{Q}\), \(\mathbf{Q}(\mathbf{F}_{\mathsf{Q}}(\omega))\) is structural and it is the structural core of \(\mathsf{Q}\)._
Proof.: \(\mathbf{Q}(\mathsf{F}_{\mathsf{Q}}(\omega))\) is structural by Theorem 2.5; if \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\) is such that \(\mathbf{H}(\mathsf{Q}^{\prime})=\mathbf{H}(\mathsf{Q})\), then clearly \(\mathbf{F}_{\mathsf{Q}}(\omega)\in\mathsf{Q}^{\prime}\) from which the thesis follows.
It follows at once that a quasivariety \(\mathsf{Q}\) is structural if and only if it coincides with its structural core. As a consequence the structural subquasivarieties of a quasivariety \(\mathsf{Q}\) are exactly those that coincide with the structural cores of \(\mathsf{Q}^{\prime}\) for some \(\mathsf{Q}^{\prime}\subseteq\mathsf{Q}\); even more, since \(\mathbf{H}(\mathsf{Q})\) is a variety, the structural subquasivarieties of a variety \(\mathsf{V}\) are exactly the structural cores of \(\mathsf{V}^{\prime}\) for some subvariety \(\mathsf{V}^{\prime}\) of \(\mathsf{V}\). As we will see in the sequel this observation is particularly useful when the free countably generated algebra in \(\mathsf{V}\) has a reasonable description.
If \(\mathsf{Q}\) is a quasivariety and \(\mathsf{Q}^{\prime}\) is a subquasivariety of \(\mathsf{Q}\) we say that \(\mathsf{Q}^{\prime}\) is **equational** in \(\mathsf{Q}\) if \(\mathsf{Q}^{\prime}=\mathbf{H}(\mathsf{Q}^{\prime})\cap\mathsf{Q}\); this is clearly equivalent to saying that \(\mathsf{Q}^{\prime}\) is axiomatized modulo \(\mathsf{Q}\) by a set of equations. A quasivariety \(\mathsf{Q}\) is **primitive** if each subquasivariety \(\mathsf{Q}^{\prime}\) of \(\mathsf{Q}\) is equational in \(\mathsf{Q}\). The following lemma is straightforward:
**Lemma 2.7**.: _For a quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is primitive;_
2. _every subquasivariety of_ \(\mathsf{Q}\) _is structural._
For locally finite quasivarieties we have a necessary and sufficient condition due essentially to Gorbunov; an algebra \(\mathbf{A}\) is **weakly projective** in a class \(\mathsf{K}\) if for all \(\mathbf{B}\in\mathsf{K}\), \(\mathbf{A}\in\mathbf{H}(\mathbf{B})\) implies \(\mathbf{A}\in\mathbf{IS}(\mathbf{B})\). An algebra is **projective** in \(\mathsf{K}\) if it is weakly projective in \(\mathsf{K}\) and the epimorphism and the embedding witnessing weak projectivity compose to the identity on \(\mathbf{A}\).
**Theorem 2.8**.: _(see [30]) For a locally finite quasivariety \(\mathsf{Q}\) the following are equivalent:_
1. \(\mathsf{Q}\) _is primitive;_
2. _every finite_ \(\mathsf{Q}\)_-irreducible algebra is weakly projective in_ \(\mathsf{Q}\)_;_
3. _every finite_ \(\mathsf{Q}\)_-irreducible algebra is weakly projective in the class of finite algebras in_ \(\mathsf{Q}\)_._
From the Blok-Pigozzi connection we get at once:
**Theorem 2.9**.: _Let \(\mathcal{L}\) be an algebraizable logic with equivalent algebraic semantics \(\mathsf{Q}_{\mathcal{L}}\). Then_
1. \(\mathcal{L}\) _is structurally complete if and only if_ \(\mathsf{Q}_{\mathcal{L}}\) _is structural;_
2. \(\mathcal{L}\) _is hereditarily structurally complete if and only if_ \(\mathsf{Q}_{\mathcal{L}}\) _is primitive._
Combining Theorems 2.4 and 2.9 it follows at once that if \(\mathcal{L}\) is algebraizable and \(\mathsf{Q}_{\mathcal{L}}\) is primitive, then every finitary extension of \(\mathcal{L}\) is axiomatic.
Part of the machinery we have described in this section has already been used to investigate structural completeness in (quasi)varieties of fuzzy logics (see for instance [35] or [22]). In this note however we will get more into the details for a specific variety, and this will allow us to characterize all the structurally complete finitary extensions of positive Łukasiewicz logic.
### Hoops as quasivarieties of logics
A commutative integral residuated lattice is an algebra
\(\langle A,\vee,\wedge,\cdot,\rightarrow,1\rangle\) such that
1. \(\langle A,\vee,\wedge,1\rangle\) is a lattice with largest element \(1\);
2. \(\langle A,\cdot,1\rangle\) is a commutative monoid;
3. \((\cdot,\rightarrow)\) form a residuated pair w.r.t. the lattice ordering, i.e. for all \(a,b,c\in A\) \[a\cdot b\leq c\qquad\text{if and only if}\qquad a\leq b\to c.\]
In what follows, we will often write \(xy\) for \(x\cdot y\). If we augment the signature with an extra constant \(0\) that is the least element in the lattice order, then we get \(\mathsf{FL}_{ew}\)-algebras. Commutative integral residuated lattices and \(\mathsf{FL}_{ew}\)-algebras form varieties that have a very rich structure; for an equational axiomatization and a list of valid identities we refer the reader to [18].
Varieties of \(\mathsf{FL}_{ew}\)-algebras and commutative integral residuated lattices are _ideal determined_ (w.r.t. 1) in the sense of [11]; this means that there is a one-to-one correspondence (which is in fact a lattice isomorphism) between the congruences of an algebra \(\mathbf{A}\) and certain special subsets of \(\mathbf{A}\). In the present case if \(\mathbf{A}\) is an \(\mathsf{FL}_{ew}\)-algebra or a commutative integral residuated lattice a **filter** of \(\mathbf{A}\) is a filter \(F\) of the lattice structure which is also closed under the monoidal operation. If \(\theta\in\operatorname{Con}(\mathbf{A})\) then \(1/\theta\) is clearly a filter of \(\mathbf{A}\) and it is easily checked that if \(F\) is a filter then \(\theta_{F}=\{(a,b):a\to b,b\to a\in F\}\in\operatorname{Con}( \mathbf{A})\) and the correspondence is
\[\theta\longmapsto 1/\theta\qquad F\longmapsto\theta_{F}.\]
If \(F\) is a filter we will write \(\mathbf{A}/F\) for \(\mathbf{A}/\theta_{F}\).
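As a toy illustration of this correspondence (ours, not from the paper), one can verify it by brute force on a small \(\mathsf{FL}_{ew}\)-chain, e.g. the Gödel chain on \(\{0,\dots,n\}\):

```python
# A minimal sketch (ours): the filter/congruence correspondence on the
# Goedel chain {0,...,n}, an FL_ew-chain with x*y = min(x,y) and
# x -> y = n if x <= y, else y (n plays the role of 1).
n = 4
imp = lambda a, b: n if a <= b else b

def theta(F):
    """theta_F = {(a, b) : a -> b in F and b -> a in F}."""
    return {(a, b) for a in range(n + 1) for b in range(n + 1)
            if imp(a, b) in F and imp(b, a) in F}

F = {2, 3, 4}                    # an upset closed under min: a filter
th = theta(F)
assert {b for (a, b) in th if a == n} == F      # 1/theta_F = F
classes = {a: sorted(b for (x, b) in th if x == a) for a in range(n + 1)}
print(classes)  # {0: [0], 1: [1], 2: [2, 3, 4], 3: [2, 3, 4], 4: [2, 3, 4]}
```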
Let's focus on two equations that bear interesting consequences, i.e., prelinearity and divisibility:
(prel) \[(x\to y)\vee(y\to x)\approx 1;\]
(div) \[x(x\to y)\approx y(y\to x).\]
It can be shown (see [18] and [33]) that a subvariety of \(\mathsf{FL}_{\mathsf{ew}}\) satisfies the prelinearity equation (prel) if and only if any algebra therein is a subdirect product of totally ordered algebras, and this implies via Birkhoff's Theorem that all the subdirectly irreducible algebras are totally ordered. Such varieties are called _representable_ (or _semilinear_) and the subvariety axiomatized by (prel) is the largest subvariety of \(\mathsf{FL}_{\mathsf{ew}}\) that is representable; such variety is usually denoted by \(\mathsf{MTL}\), since it is the equivalent algebraic semantics of Esteva-Godo's _Monoidal t-norm based logic_[25].
If an algebra in \(\mathsf{FL}_{\mathsf{ew}}\) satisfies both (prel) and (div) then it is called a BL-algebra and the variety of all BL-algebras is denoted by \(\mathsf{BL}\). Again the name comes from logic: the variety of BL-algebras is the equivalent algebraic semantics of _Hajek's Basic Logic_\(\mathcal{BL}\)[31]. A systematic investigation of varieties of BL-algebras started with [7] and it is still ongoing (see [2] and the bibliography therein).
It follows from the definition that given a variety of bounded commutative integral residuated lattices, the class of its _\(0\)-free subreducts_ is a class of residuated lattices; we have a very general result.
**Lemma 2.10**.: _Let \(\mathsf{V}\) be any subvariety of \(\mathsf{FL}_{\mathsf{ew}}\); then the class \(\mathbf{S}^{0}(\mathsf{V})\) of the zero-free subreducts of algebras in \(\mathsf{V}\) is a variety._
Proof.: The proof is as in Proposition 1.10 of [6]; it is stated for varieties of BL-algebras but it uses only the description of the congruence filters, that can be used in any subvariety of \(\mathsf{FL}_{\mathsf{ew}}\) (as the reader can easily check).
This implies at once that if a variety of \(\mathsf{FL}_{ew}\)-algebras is the equivalent algebraic semantics of a logic \(\mathcal{L}\), then the variety of its zero-free subreducts is the equivalent algebraic semantics of the positive fragment \(\mathcal{L}^{+}\). A **basic hoop** is a zero-free subreduct of a divisible and prelinear \(\mathsf{FL}_{ew}\)-algebra. Note that in any prelinear \(\mathsf{FL}_{ew}\)-algebra the prelinearity equation makes the join definable using \(\wedge\) and \(\to\) (see for instance [3]):
\[((x\to y)\to y)\wedge((y\to x)\to x)\approx x\lor y.\]
So basic hoops are often presented in the signature \(\wedge,\to,1\); as noted in [6] the variety \(\mathsf{BH}\) of basic hoops is the equivalent algebraic semantics of the positive fragment of the logic \(\mathcal{BL}\).
A **Wajsberg hoop** is a basic hoop satisfying the so-called _Tanaka's equation_
\[(x\to y)\to y\approx(y\to x)\to x.\]
If we add a constant \(0\) to the signature that is the least element in the lattice order then we have **Wajsberg algebras**; Wajsberg algebras are term equivalent to MV-algebras (see [8] p. 354 for a detailed explanation) and the variety of MV-algebras is usually presented as the equivalent algebraic semantics of Lukasiewicz logic MV. It follows that the variety \(\mathsf{WH}\) of Wajsberg hoops, which is the variety of zero-free subreducts of Wajsberg algebras, is the equivalent algebraic semantics of \(\mathsf{MV}^{+}\), i.e. the positive fragment of Lukasiewicz logic.
In \(\mathsf{FL}_{ew}\)-algebras it is customary to introduce the derived operation \(\neg x:=x\to 0\); now in a bounded (i.e. with a minimum element \(a\)) Wajsberg hoop we can still introduce a negation \(\neg x=x\to a\) that is of course not a term but rather a polynomial. It is easy to verify that any bounded Wajsberg hoop is polynomially equivalent to a Wajsberg algebra, and we will freely use the expression \(\neg x\), letting the context clarify the meaning.
A commutative integral residuated lattice is **cancellative** if the underlying monoid is cancellative in the usual sense.
**Lemma 2.11**.: _[_16_]_ _Every cancellative basic hoop is a Wajsberg hoop. A totally ordered Wajsberg hoop is either cancellative or bounded._
This allows us to show that the connection between Wajsberg hoops and Wajsberg algebras is even stricter. For instance the operator \(\mathbf{ISP}_{u}\) on Wajsberg hoops has been studied in [7] using the results about Wajsberg algebras that appeared in [28]; while we maintain that it should be clear why we can do this (in [7] no explanation was given), some clarification may be useful. Wajsberg algebras are polynomially equivalent to bounded Wajsberg hoops; it is easy to see that if \(\mathbf{O}\) is a class operator that is a composition of \(\mathbf{I},\mathbf{H},\mathbf{S},\mathbf{P},\mathbf{P}_{u}\), \(\mathbf{A},\mathbf{B}\) are Wajsberg algebras and \(\mathbf{A}_{0},\mathbf{B}_{0}\) are their Wajsberg hoop reducts, then \(\mathbf{O}(\mathbf{A})\subseteq\mathbf{O}(\mathbf{B})\) if and only if \(\mathbf{O}(\mathbf{A}_{0})\subseteq\mathbf{O}(\mathbf{B}_{0})\). This allows us to consider bounded Wajsberg hoops _as if they were_ Wajsberg algebras. Since a totally ordered Wajsberg hoop is either bounded or cancellative, we can use results about Wajsberg algebras and integrate them with the cancellative case.
## 3. Some useful tools
### Wajsberg chains
Bounded Wajsberg hoops have a _canonical representation_. Let \(\mathbf{G}\) be a lattice ordered abelian group; by [39], if \(u\) is a strong unit of \(\mathbf{G}\) we can construct a bounded Wajsberg hoop \(\Gamma(\mathbf{G},u)=\langle[0,u],\rightarrow,\cdot,0,u\rangle\) where \(ab=\max\{a+b-u,0\}\) and \(a\to b=\min\{u-a+b,u\}\). The main result of [39] is that any bounded Wajsberg hoop can be presented in this way (in fact there is a categorical equivalence between the category of abelian \(\ell\)-groups with strong unit and the category of bounded Wajsberg hoops). Let now \(\mathbb{Z}\times_{l}\mathbb{Z}\) denote the lexicographic product of two copies of \(\mathbb{Z}\). In other words, the universe is the Cartesian product, the group operations are defined componentwise and the ordering is the lexicographic ordering (w.r.t. the natural ordering of \(\mathbb{Z}\)); then \(\mathbb{Z}\times_{l}\mathbb{Z}\) is a totally ordered abelian group and we can apply \(\Gamma\) to it. A **Wajsberg chain** is a totally ordered Wajsberg hoop. Let's define some useful Wajsberg chains:
* the finite Wajsberg chain with \(n+1\) elements \(\mathbf{L}_{n}=\Gamma(\mathbb{Z},n)\);
* the infinite finitely generated Wajsberg chain \(\mathbf{L}_{n}^{\infty}=\Gamma(\mathbb{Z}\times_{l}\mathbb{Z},(n,0))\);
* the infinite finitely generated Wajsberg chain \(\mathbf{L}_{n,k}=\Gamma(\mathbb{Z}\times_{l}\mathbb{Z},(n,k))\);
* the unbounded Wajsberg chain \(\mathbf{C}_{\omega}\); this can be regarded either as the free monoid on one generator, where the product is the monoid product and \(a^{l}\to a^{m}=a^{\max(m-l,0)}\), or, equivalently, as the negative cone of \(\mathbb{Z}\) with the operations defined in the obvious way (a computational sketch of these chains is given after Lemma 3.1).
We observe that \(\mathbf{L}_{n}^{\infty}=\mathbf{L}_{n,0}\); moreover the proof of the following is a simple verification:
**Lemma 3.1**.:
1. _For_ \(n,m\in\mathbb{N}\)_,_ \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{L}_{m})\) _if and only if_ \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{L}_{m}^{\infty})\) _if and only if_ \(n\mid m\)
_._
2. _For_ \(n,r,j\in\mathbb{N}\)_,_ \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{L}_{r,j})\) _if and only if_ \(n\mid\gcd\{r,j\}\)_._
3. _If_ \(\mathbf{A}\) _is a cancellative Wajsberg chain and_ \(a\in A\setminus\{1\}\)_, then_ \(a\) _generates a subalgebra of_ \(\mathbf{A}\) _isomorphic with_ \(\mathbf{C}_{\omega}\)_._
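Since all these chains have an explicit arithmetic description, facts like item 1 of Lemma 3.1 are easy to check computationally. Below is a minimal sketch (ours, following the \(\Gamma\) construction above) of \(\mathbf{L}_{n}=\Gamma(\mathbb{Z},n)\) together with a brute-force computation of its one-generated subalgebras:

```python
from itertools import product

def Ln_ops(n):
    """Multiplication and residuum of L_n = Gamma(Z, n) on {0, ..., n}."""
    mult = lambda a, b: max(a + b - n, 0)
    imp = lambda a, b: min(n - a + b, n)
    return mult, imp

def subuniverse(n, h):
    """Subuniverse of L_n generated by h (it always contains the unit n)."""
    mult, imp = Ln_ops(n)
    S = {h, n}
    while True:
        new = {f(a, b) for f in (mult, imp, min, max) for a, b in product(S, S)}
        if new <= S:
            return S
        S |= new

# The one-generated subalgebras of L_12 are exactly the L_k with k | 12,
# in accordance with Lemma 3.1(1).
print(sorted({len(subuniverse(12, h)) - 1 for h in range(12)}))
# -> [1, 2, 3, 4, 6, 12]
```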
In [8] it has been shown that every proper variety of Wajsberg hoops is generated by a finite number of chains of the type described above. More precisely, a **presentation** \(P\) is a triple \((I,J,K)\) where \(I,J\) are finite subsets of \(\mathbb{N}\setminus\{0\}\) and \(K\subseteq\{\omega\}\). We say that the presentation \(P=(I,J,K)\) is **reduced** if
* \(I\cup J\cup K\neq\emptyset\);
* if \(J\neq\emptyset\) then \(K=\emptyset\);
* no \(m\in I\) divides any \(m^{\prime}\in(I\setminus\{m\})\cup J\);
* no \(n\in J\) divides any \(n^{\prime}\in J\setminus\{n\}\).
For any reduced presentation \(P=(I,J,K)\) we define a set of Wajsberg hoops \(\mathsf{K}_{P}\) in the following way:
* if \(P=(I,J,\emptyset)\) then \(\mathsf{K}_{P}=\{\mathbf{L}_{i}:i\in I\}\cup\{\mathbf{L}_{j}^{\infty}:j\in J\}\),
* if \(P=(I,\emptyset,\{\omega\})\) then \(\mathsf{K}_{P}=\{\mathbf{L}_{i}:i\in I\}\cup\{\mathbf{C}_{\omega}\}\).
Then we set \(\mathsf{V}(P)=\mathbf{V}(\mathsf{K}_{P})\) and \(\mathsf{Q}(P)=\mathbf{Q}(\mathsf{K}_{P})\).
**Theorem 3.2**.: _[_8_]_ _The proper subvarieties of Wajsberg hoops are in one-to-one correspondence with the reduced presentations via_
\[P\longmapsto\mathsf{V}(P).\]
### The construction of \(\mathbf{B}_{\Delta}\)
Any proper subvariety of Wajsberg hoops is axiomatizable (modulo Wajsberg hoops) by an equation in a single variable; this is essentially Theorem 4.4 in [8] and its proof uses the functional characterization of free Wajsberg hoops which we will be using later in this paper. From that, it follows that for any proper subvariety \(\mathsf{V}\) of Wajsberg hoops \(\mathbf{Q}(\mathbf{F}_{\mathsf{V}}(\omega))=\mathbf{Q}(\mathbf{F}_{\mathsf{ V}}(x))\). Indeed, \(\mathsf{V}=\mathbf{HQ}(\mathbf{F}_{\mathsf{V}}(\omega))=\mathbf{HQ}(\mathbf{F}_{ \mathsf{V}}(x))\) because \(\mathsf{V}\) can be axiomatized in one variable and, since \(\mathbf{Q}(\mathbf{F}_{\mathsf{V}}(\omega))\) is structural, this means that it has to be the smallest quasivariety that generates \(\mathsf{V}\), so \(\mathbf{Q}(\mathbf{F}_{\mathsf{V}}(\omega))\subseteq\mathbf{Q}(\mathbf{F}_{ \mathsf{V}}(x))\); the other inclusion is trivial.
In this section we will give an alternative description of \(\mathbf{F}_{\mathsf{V}}(x)\) where \(\mathsf{V}\) is a proper subvariety of Wajsberg hoops.
**Lemma 3.3**.: _Let \(\mathbf{A}\) be a totally ordered Wajsberg hoop, assume that \(\mathbf{A}\) is one-generated, then \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{n}^{\infty})\) if and only if one of the following holds:_
1. _there exists_ \(1\leq k|n\) _such that_ \(\mathbf{A}\cong\mathbf{L}_{k}\)_;_
2. _there exist_ \(1\leq k|n\) _and_ \(0\leq h<k\) _with_ \(k,h\) _relatively prime such that_ \(\mathbf{A}\cong\mathbf{L}_{k,h}\)_;_
3. \(\mathbf{A}\cong C_{\omega}\)_._
Proof.: As \(\mathbf{A}\) is totally ordered, it is either bounded or cancellative. If it is cancellative then since it is one-generated, it must be \(\mathbf{A}\cong C_{\omega}\); if it is bounded, then it can be seen as an \(\mathsf{MV}\)-algebra, so, by [24] (Theorem 1.8), either 1. or 2. holds.
Conversely if \(\mathbf{A}\cong C_{\omega}\), then clearly \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{n}^{\infty})\); in the other cases we appeal again to Theorem 1.8 in [24].
From now on, given a finite subset \(X\) of \(\mathbb{N}\), we will denote by \(X\mathord{\downarrow}\) the set of all the divisors of elements of \(X\).
**Lemma 3.4**.: _Let \(\mathbf{A}\) be a totally ordered Wajsberg hoop, assume that \(\mathbf{A}\) is one generated and let \(P=(I,J,\emptyset)\) be a reduced presentation with \(J\neq\emptyset\). Then \(\mathbf{A}\in\mathsf{V}(P)\) if and only if one of the following holds:_
1. _there exists_ \(k\in I\!\!\downarrow\cup J\!\!\downarrow\) _such that_ \(\mathbf{A}\cong\mathbf{L}_{k}\)_;_
2. _there exist_ \(k\in J\!\!\downarrow\) _and_ \(0\leq h<k\) _with_ \(k,h\) _relatively prime such that_ \(\mathbf{A}\cong\mathbf{L}_{k,h}\)_;_
3. \(\mathbf{A}\cong C_{\omega}\)_._
Proof.: The "only if" part is exactly as in Lemma 3.3.
If \(\mathbf{A}\cong\mathbf{L}_{k}\) for some \(k\in I\!\!\downarrow\cup J\!\!\downarrow\), then, if \(k\in I\!\!\downarrow\) there exists \(i\in I\) such that \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{i})\subseteq\mathsf{V}(I,J,\emptyset)\); if \(k\in J\!\!\downarrow\), then by Lemma 3.3 there exists a \(j\in J\) such that \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{j}^{\infty})\subseteq\mathsf{V}(I,J,\emptyset)\).
If \(\mathbf{A}\cong\mathbf{L}_{k,h}\) for some \(k\in J\!\!\downarrow\) and \(0\leq h<k\) with \(k,h\) relatively prime, then by Lemma 3.3 there exists \(j\in J\) such that \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{j}^{\infty})\subseteq\mathsf{V}(I,J,\emptyset)\). Finally, if \(\mathbf{A}\cong\mathbf{C}_{\omega}\), clearly \(\mathbf{A}\in\mathbf{V}(\mathbf{L}_{j}^{\infty})\) for every \(j\in J\!\downarrow\), so, since \(J\neq\emptyset\), \(\mathbf{A}\in\mathsf{V}(I,J,\emptyset)\).
**Remark 3.5**.: If \(\mathbf{A}\cong\mathbf{L}_{k}\), then \(h\) generates \(\mathbf{A}\) if and only if \(h,k\) are relatively prime.
If \(\mathbf{A}\cong\mathbf{L}_{k,h}\) with \(k\neq 1\) and \(h<k\) then there exists a unique \(g_{k,h}\in\mathbf{A}\) with \(g_{k,h}\leq\neg g_{k,h}\) such that \(a\) generates \(\mathbf{A}\) if and only if \(a=g_{k,h}\) or \(a=\neg g_{k,h}\). Moreover \(g_{k,1}=(1,0),g_{k,k-1}=(1,1)\) and, if \(h\neq 1,h\neq k-1\), then \(g_{k,h}=(r,s)\) with \(1<r<\frac{k}{2}\). If \(k=1\) we get that \(\mathbf{L}_{1,0}\) is generated by \(g_{1,0}=(0,1)\), but this time \(\neg g_{1,0}=(1,-1)\) generates a subalgebra of \(\mathbf{L}_{1,0}\) isomorphic to \(C_{\omega}\).
Note that in all these cases we can use the operation \(\neg\) because all the algebras are bounded; in particular, since the generator of \(\mathbf{L}_{k,h}\) always has order \(2\), we can write \(0\) as \(g_{k,h}^{2}\), so for every \(a\in\mathbf{L}_{k,h}\) we get \(\neg a=a\to g_{k,h}^{2}\).
Now we fix a reduced triple \(P=(I,J,K)\) and let
\[\Delta_{I} =\{(k,h,2):0\leq h<k\in I\!\!\downarrow,k,h\text{ relatively prime}\}\] \[\Delta_{J} =\{(k,h,i):i\in\{0,1\},h<k\in J\!\!\downarrow,k,h\text{ relatively prime}\}\] \[\Delta_{K} =\left\{\begin{array}{ll}\emptyset,&\text{if }J\neq\emptyset; \\ \{(0,0,3)\},&\text{otherwise.}\end{array}\right.\]
Moreover for any \(k,h\) we define
\[\mathbf{A}_{k,h}^{0} =\mathbf{A}_{k,h}^{1}:=\mathbf{L}_{k,h}\] \[\mathbf{A}_{k,h}^{2} =\mathbf{L}_{k}\] \[\mathbf{A}_{k,h}^{3} =\mathbf{C}_{\omega}.\]
If \(\Delta=\Delta_{I}\cup\Delta_{J}\cup\Delta_{K}\) we let \(\mathbf{A}_{\Delta}=\prod_{(k,h,i)\in\Delta}\mathbf{A}_{k,h}^{i}\); moreover we denote by \(c\) the generator of \(\mathbf{C}_{\omega}\). We want to define a \(\overline{g}\in\mathbf{A}_{\Delta}\) by cases; for any \((k,h,i)\in\Delta\)
\[\text{if }J=\emptyset,\ \overline{g}(k,h,i)=\left\{\begin{array}{ll}h,&\text{if }i=2;\\ c,&\text{if }i=3.\end{array}\right.\] \[\text{if }J\neq\emptyset,\ \overline{g}(k,h,i)=\left\{\begin{array}{ll}g _{k,h},&\text{if }i=0;\\ \neg g_{k,h},&\text{if }i=1;\\ h,&\text{if }i=2.\end{array}\right.\]
**Theorem 3.6**.: _Let \(P=(I,J,K)\) be any reduced triple and let \(\overline{g}\) and \(\mathbf{A}_{\Delta}\) be as above. If \(\mathbf{B}_{\Delta}\) is the subalgebra of \(\mathbf{A}_{\Delta}\) generated by \(\overline{g}\), then \(\mathbf{B}_{\Delta}\cong\mathbf{F}_{\mathsf{V}(P)}(x)\)._
Proof.: First we observe that \(\mathbf{A}_{\Delta}\in\mathsf{V}(P)\) and so does \(\mathbf{B}_{\Delta}\). Suppose that \(p(x)\approx q(x)\) is an equation that fails in \(\mathsf{V}(P)\); then it must fail in some one-generated totally ordered algebra \(\mathbf{C}\in\mathsf{V}(P)\) and such algebra is either bounded or cancellative.
First, let us show that we only need to discuss the case in which \(p(x)\approx q(x)\) fails in a generator of \(\mathbf{C}\). Suppose that the equation fails in some \(x\in\mathbf{C}\), then, if we call \(\mathbf{C}^{\prime}\) the subalgebra of \(\mathbf{C}\) generated by \(x\), we have that \(p(x)\approx q(x)\) fails in the generator of \(\mathbf{C}^{\prime}\), that is still an algebra in \(\mathsf{V}(P)\).
Suppose that \(J=\emptyset\). If \(\mathbf{C}\) is bounded, then it cannot be infinite, as \(\mathbf{L}_{n}^{\infty}\notin\mathsf{V}(P)\) for any \(n\in\mathbb{N}\). Hence it must be equal to \(\mathbf{L}_{k}\) for some \(k\in I\mathord{\downarrow}\); this implies that \(p(x)\approx q(x)\) fails in \(\overline{g}(k,h,2)\) for some \(h\) and, as above, fails in \(\mathbf{B}_{\Delta}\). This covers the case \(K=\emptyset\) and half of the case \(K\neq\emptyset\). To conclude, if \(\mathbf{C}\) is cancellative, then the equation must fail in \(\mathbf{C}_{\omega}\) and hence (if \(c\) is the generator of \(\mathbf{C}_{\omega}\)) \(p(c)\neq q(c)\); this implies that \(p(\overline{g}(0,0,3))\neq q(\overline{g}(0,0,3))\), so \(p(\overline{g})\neq q(\overline{g})\) and \(p(x)\approx q(x)\) fails in \(\mathbf{B}_{\Delta}\).
Suppose now that \(J\neq\emptyset\) (and thus \(K=\emptyset\)). By Lemma 3.4 we have only three possibilities.
Suppose \(\mathbf{C}\cong\mathbf{L}_{k}\) for some \(k\in I\mathord{\downarrow}\cup J\mathord{\downarrow}\); if \(k\in I\mathord{\downarrow}\) then \(p(x)\approx q(x)\) fails in some generator of \(\mathbf{L}_{k}\), so it fails in some \(\overline{g}(k,h,2)\) for some \(h\) and eventually fails in \(\mathbf{B}_{\Delta}\). If \(k\in J\mathord{\downarrow}\), again \(p(x)\approx q(x)\) fails in some generator \(h\) of \(\mathbf{L}_{k}\); now \(h\) and \(k\) must be relatively prime and thus \(p(x)\approx q(x)\) fails in \(\mathbf{L}_{k,h}\). By Remark 3.5 it fails either in \(g_{k,h}\) or \(\neg g_{k,h}\), hence \(p(x)\approx q(x)\) fails either in \(\overline{g}(k,h,0)\) or \(\overline{g}(k,h,1)\). In any case \(p(x)\approx q(x)\) fails in \(\mathbf{B}_{\Delta}\).
If \(\mathbf{C}\cong\mathbf{L}_{k,h}\) the argument is similar to the one above, but easier. Finally, if \(\mathbf{C}\cong\mathbf{C}_{\omega}\), since \(\overline{g}(1,0,1)=\neg g_{1,0}=(1,-1)\) generates a subalgebra of \(\mathbf{L}_{1,0}\) isomorphic to \(\mathbf{C}_{\omega}\), for sure \(p(\overline{g}(1,0,1))\neq q(\overline{g}(1,0,1))\) and so again \(p(x)\approx q(x)\) fails in \(\mathbf{B}_{\Delta}\).
We have thus proved that every equation in one variable that fails in \(\mathsf{V}(P)\) fails in \(\mathbf{B}_{\Delta}\). Finally, let us show that this is sufficient to conclude that \(\mathbf{B}_{\Delta}\cong\mathbf{F}_{\mathsf{V}(P)}(x)\). Indeed, since \(\mathbf{B}_{\Delta}\) is in \(\mathsf{V}(P)\) and it is one-generated, there is a surjective homomorphism \(\varphi\) from \(\mathbf{F}_{\mathsf{V}(P)}(x)\) to \(\mathbf{B}_{\Delta}\); suppose now that \(\varphi\) is not injective. This means that \(\operatorname{Ker}(\varphi)\) is non-trivial, so there exist two terms \(p,q\in\mathbf{F}_{\mathsf{V}(P)}(x)\) such that \(p\neq q\) but \(\varphi(p)=\varphi(q)\); thus \(p(x)\approx q(x)\) fails in \(\mathbf{F}_{\mathsf{V}(P)}(x)\) (and hence in \(\mathsf{V}(P)\)) but holds in \(\mathbf{B}_{\Delta}\), contradicting what we have just proved. Hence \(\varphi\) must be an isomorphism.
### Wajsberg functions
A **McNaughton function** over the \(n\)-cube is a continuous function \(f:[0,1]^{n}\to[0,1]\) such that there exist finitely many linear functions \(f_{1},\dots,f_{k}\), where each \(f_{i}\) is of the form \(f_{i}=a_{i}^{1}x_{1}+a_{i}^{2}x_{2}+\dots+a_{i}^{n}x_{n}+b_{i}\) with \(a_{i}^{1},\dots,a_{i}^{n},b_{i}\) integers, and such that for any \(v\in[0,1]^{n}\) there exists \(i\in\{1,\dots,k\}\) with \(f(v)=f_{i}(v)\). A McNaughton function \(f(x_{1},\dots,x_{n})\) is a **Wajsberg function** if \(f(1,1,\dots,1)=1\).
**Theorem 3.7**.: _[_8_]_ _For each \(n\), the free \(n\)-generated Wajsberg hoop \(\mathbf{F}_{\mathsf{WH}}(n)\) is isomorphic to the algebra of all Wajsberg functions over the \(n\)-cube, where the operations are defined pointwise._
So we can always identify an \(n\)-ary term in the language of Wajsberg hoops with a Wajsberg function over the \(n\)-cube. Conversely, given a Wajsberg function over the \(n\)-cube, we can associate to it an equivalence class of Wajsberg terms (where the equivalence is of course mutual provability in the theory). With the usual abuse of notation we will identify the class with any of its representatives, i.e. given a Wajsberg function \(f\) we will denote by \(\widehat{f}\) a Wajsberg term representing the equivalence class corresponding to \(f\).
In [8] the authors used this representation to give an easy way to axiomatize all proper subvarieties of Wajsberg hoops. Let \((I,J,K)\) be a reduced triple, we define two finite subsets \(\mathcal{I},\mathcal{J}\) of rational points of \([0,1]\) as
* if \(K\neq\emptyset\) then \(\mathcal{J}=\{1\}\);
* if \(K=\emptyset\) then \(\mathcal{J}=\{v\in[0,1]:\operatorname{den}(v)\in J\!\downarrow\}\);
* \(\mathcal{I}=\{u\in[0,1]:\operatorname{den}(u)\in I\!\downarrow\}\backslash \mathcal{J}\).
Given a reduced triple \((I,J,K)\) an \((I,J,K)\)**-comb** is any \(\alpha\in\mathbf{F}_{\mathsf{WH}}(x)\) such that
1. for every \(v\in\mathcal{J}\), there exists a neighborhood \(V\) of \(v\) such that \(\alpha=1\) on \(V\);
2. for every \(u\in\mathcal{I}\), \(\alpha(u)=1\);
3. for every \(u\in\mathcal{I}\) there exists \(v\in\mathcal{I}\) such that \(\operatorname{den}(v)|\operatorname{den}(u)\) and \(\alpha\) is not identically \(1\) on any neighborhood of \(v\);
4. if \(d\notin(I\cup J)\!\downarrow\), then there exists \(0\leq h<d\) with \(\alpha(h/d)\neq 1\).
**Theorem 3.8**.: _[8] Let \(P=(I,J,K)\) be a reduced triple and let \(\alpha(x)\in\mathbf{F}_{\mathsf{WH}}(x)\). Then the identity \(\alpha(x)\approx 1\) axiomatizes \(\mathsf{V}(P)\) relative to \(\mathsf{WH}\) if and only if \(\alpha\) is an \((I,J,K)\)-comb._
This is a very powerful result, in that it gives a procedure that allows one to axiomatize every proper subvariety of Wajsberg hoops, and combs are quite easy to construct. Next, we have a very useful lemma.
**Lemma 3.9**.: _Let \(p(x)\approx q(x)\) be an identity in the language of Wajsberg hoops and let \(f,g\) be Wajsberg functions such that \(p=\widehat{f}\) and \(q=\widehat{g}\). Then for any \(n,k\in\mathbb{N}\) with \(k\leq n\)_
1. _if_ \(f(\frac{k}{n})=g(\frac{k}{n})\)_, then_ \(p(k)=q(k)\) _where_ \(k\in\mathbf{L}_{n}\)_;_
2. _if_ \(f(x)=g(x)\) _in a neighborhood of_ \(1\)_, then_ \(\mathbf{C}_{\omega}\vDash p(x)\approx q(x)\)_;_
3. _if_ \(f(x)=g(x)\) _in a neighborhood of_ \(\frac{k}{n}\)_, then_ \(p(c)=q(c)\) _for any_ \(c\in\mathbf{L}_{n,h}\) _such that_ \(c/\mathrm{Rad}(\mathbf{L}_{n,h})=k\)_._
The proof of Lemma 3.9 can be extracted from the proof of Theorem 3.3 in [8], by setting \(\kappa=1\).
Next, we want to have an easy way to construct Wajsberg functions and force them to have certain fixed values (see [32]). If \(0=t_{0}<t_{1}<\dots<t_{k}=1\) and \(x_{0},\dots,x_{k}\in[0,1]\), then we denote by \(f=L(t_{0},x_{0};\dots;t_{k},x_{k})\) the continuous piecewise linear function \(f:[0,1]\to[0,1]\) such that \(f(t_{i})=x_{i}\) and \(f\) is linear on each interval \([t_{i},t_{i+1}]\); in other words, \(f\) is the linear interpolation between the nodes \((t_{0},x_{0}),\dots,(t_{k},x_{k})\). Clearly it is possible to choose the nodes so as to make \(f\) a Wajsberg function.
Let's see an example. If we want to find a \((2,\emptyset,\emptyset)\)-comb we need to take a function \(f\) that has value \(1\) only in \(\{0,\frac{1}{2},1\}\), so we can take
\[f=L(0,1;\frac{1}{4},0;\frac{1}{2},1;\frac{3}{4},0;1,1).\]
This function has integer coefficients in every interval and \(f(1)=1\), so it is a Wajsberg function and the identity \(f(x)\approx 1\) axiomatizes the subvariety of Wajsberg hoops generated by \(\mathbf{L}_{2}\).
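Although nothing below depends on it, the comb conditions are concrete enough to be machine-checked. The following sketch is ours and not part of the paper (the helper names, the finite test points and the tolerance are arbitrary choices); it builds \(L(\dots)\) with exact rational arithmetic and verifies that the \((2,\emptyset,\emptyset)\)-comb above has integer coefficients on every piece, takes the value \(1\) exactly on \(\mathcal{I}=\{0,\frac{1}{2},1\}\), and satisfies condition 4 for \(d=3\).

```python
from fractions import Fraction as F

def L(*nodes):
    """Piecewise-linear interpolation through (t_0, x_0), ..., (t_k, x_k)."""
    def f(x):
        for (t0, y0), (t1, y1) in zip(nodes, nodes[1:]):
            if t0 <= x <= t1:
                return y0 + (y1 - y0) * (x - t0) / (t1 - t0)
        raise ValueError("argument outside [0, 1]")
    return f

# The (2, {}, {})-comb from the text.
nodes = [(F(0), F(1)), (F(1, 4), F(0)), (F(1, 2), F(1)),
         (F(3, 4), F(0)), (F(1), F(1))]
f = L(*nodes)

# Integer slope and intercept on every piece, and f(1) = 1,
# so f is indeed a Wajsberg function.
for (t0, y0), (t1, y1) in zip(nodes, nodes[1:]):
    slope = (y1 - y0) / (t1 - t0)
    assert slope.denominator == 1 and (y0 - slope * t0).denominator == 1
assert f(F(1)) == 1

# I-down = {1, 2}, so the points with denominator in I-down are
# {0, 1/2, 1}; f equals 1 exactly there.
points = {F(p, q) for q in (1, 2) for p in range(q + 1)}
assert all(f(u) == 1 for u in points)

# f is not identically 1 near any of these points ...
eps = F(1, 100)
assert all((u + eps <= 1 and f(u + eps) < 1) or
           (u - eps >= 0 and f(u - eps) < 1) for u in points)
# ... and condition 4 holds for d = 3: f(1/3) = 1/3 != 1.
assert f(F(1, 3)) != 1
```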
Now we will make heavy use of Wajsberg functions and Lemma 3.9 to prove a couple of fundamental results. From now on, for any reduced triple \(P=(I,J,K)\), \(\mathbf{B}_{\Delta}\) will be the algebra constructed from \(P\) following the directions in Section 3.2.
**Theorem 3.10**.: _Let \(P=(I,J,K)\) be a reduced triple and let \(a\in I\); then \(\mathbf{L}_{a}\) is embeddable in \(\mathbf{B}_{\Delta}\)._
Proof.: If \(1\in I\), then \(P=(\{1\},\emptyset,K)\) and if \(K=\emptyset\), then \(\mathbf{B}_{\Delta}\cong\mathbf{L}_{1}\). So suppose that \(K\neq\emptyset\); then by Theorem 3.6, \(\mathbf{B}_{\Delta}\) is isomorphic with the subalgebra of \(\mathbf{L}_{1}\times\mathbf{C}_{\omega}\) generated by \((0,c)\). Consider the Wajsberg function
\[f(x)=L(0,0;\frac{1}{2},1;1,1);\]
then it is easy to check that \(\widehat{f}=(x\to x^{2})\to x\). Now \(f(0)=0\) and \(f(x)=1\) in a neighborhood of \(1\); hence by Lemma 3.9, \(f(\overline{g})=(0,1)\), and thus it generates a subalgebra of \(\mathbf{B}_{\Delta}\) isomorphic with \(\mathbf{L}_{1}\).
Now suppose that \(1\notin I\); we fix an \(a\in I\) and we let \(m\) be the product of all the elements of \(I\cup J\). Then we consider the Wajsberg function
\[f(x)=L(0,1;\frac{1}{a}-\frac{1}{2m},1;\frac{1}{a},0;\frac{1}{a}+\frac{1}{2m}, 1;1,1).\]
Now clearly \(f(x)=1\) in a neighborhood of each \(\frac{n}{m}\) with \(n\leq m\) and \(\frac{n}{m}\neq\frac{1}{a}\), and moreover \(f(\frac{1}{a})=0\). Since \((I,J,K)\) is reduced, \(a\) does not properly divide any element of \(I\cup J\), and so by Lemma 3.9
\[f(\overline{g}(k,h,i))=\left\{\begin{array}{ll}0,&\mbox{if $(k,h,i)=(a,1,2)$};\\ 1,&\mbox{otherwise}.\end{array}\right.\]
Hence \(\overline{g}\lor f(\overline{g})\) generates a subalgebra of \(\mathbf{B}_{\Delta}\) isomorphic to the one generated by \(\overline{g}(a,1,2)\). But the latter is a generator of \(\mathbf{L}_{a}\), so \(\mathbf{L}_{a}\) is embeddable in \(\mathbf{B}_{\Delta}\), as desired.
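As a quick numerical sanity check of the identity \(\widehat{f}=(x\to x^{2})\to x\) used in the first case of the proof (this check is ours, not the paper's), one can evaluate the term with the standard Lukasiewicz operations \(x\cdot y=\max(0,x+y-1)\) and \(x\to y=\min(1,1-x+y)\) on the real unit interval and compare it with \(L(0,0;\frac{1}{2},1;1,1)\):

```python
prod = lambda x, y: max(0.0, x + y - 1.0)    # Lukasiewicz product
imp = lambda x, y: min(1.0, 1.0 - x + y)     # Lukasiewicz implication

term = lambda x: imp(imp(x, prod(x, x)), x)  # (x -> x^2) -> x
interp = lambda x: min(1.0, 2.0 * x)         # L(0,0; 1/2,1; 1,1)

assert all(abs(term(k / 1000) - interp(k / 1000)) < 1e-12
           for k in range(1001))
```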
**Theorem 3.11**.: _Let \(P=(I,\emptyset,K)\) be a reduced triple with \(K\neq\emptyset\); then \(\mathbf{C}_{\omega}\) is embeddable in \(\mathbf{B}_{\Delta}\)._
Proof.: If \(I=\emptyset\), then \(\mathbf{B}_{\Delta}\cong\mathbf{C}_{\omega}\). Otherwise let \(m\) be the product of all elements of \(I\) and consider the Wajsberg function
\[f(x)=L(0,1;\frac{m-1}{m},1;\frac{m}{m+1},\frac{m}{m+1};1,1).\]
Again it is easy to see that \(\widehat{f}(x)=x^{m}\to x^{m+1}\) and that \(f(\frac{n}{m})=1\) for \(\frac{n}{m}\neq 1\), so that \(f(\overline{g}(k,h,i))=1\) whenever \(i=2\). In a neighborhood of \(1\) we have that \(f(x)=x\), so \(f(\overline{g}(0,0,3))=c\), which generates \(\mathbf{C}_{\omega}\); thus \(f(\overline{g})\) generates a subalgebra of \(\mathbf{B}_{\Delta}\) isomorphic with \(\mathbf{C}_{\omega}\), and hence \(\mathbf{C}_{\omega}\) is embeddable in \(\mathbf{B}_{\Delta}\).
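The same kind of check works here (again ours, not the paper's; `prod` and `imp` are the Lukasiewicz operations from the previous sketch): for small values of \(m\), the term \(x^{m}\to x^{m+1}\) coincides with \(L(0,1;\frac{m-1}{m},1;\frac{m}{m+1},\frac{m}{m+1};1,1)\) on the unit interval.

```python
prod = lambda x, y: max(0.0, x + y - 1.0)
imp = lambda x, y: min(1.0, 1.0 - x + y)

def power(x, n):  # x^n in the hoop, i.e. the iterated product
    p = 1.0
    for _ in range(n):
        p = prod(p, x)
    return p

def interp(m, x):  # L(0,1; (m-1)/m,1; m/(m+1),m/(m+1); 1,1)
    if x <= (m - 1) / m:
        return 1.0
    if x <= m / (m + 1):
        return m * (1.0 - x)  # the middle piece has slope -m
    return x                  # f(x) = x in a neighborhood of 1

for m in (1, 2, 3, 5):
    for k in range(1001):
        x = k / 1000
        assert abs(imp(power(x, m), power(x, m + 1)) - interp(m, x)) < 1e-12
```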
## 4. Structurally complete extensions
By Theorems 2.4 and 2.9, classifying all the structurally complete finitary (axiomatic) extensions of positive Lukasiewicz's Logic amounts to describing all the structural quasivarieties (varieties) of Wajsberg hoops.
### Structural subvarieties
The **radical** of a Wajsberg chain \(\mathbf{A}\), in symbols \(\mathrm{Rad}(\mathbf{A})\), is the intersection of the maximal filters of \(\mathbf{A}\); it is easy to see that \(\mathrm{Rad}(\mathbf{A})\) is cancellative, and that \(\mathbf{A}\) is cancellative if and only if \(\mathrm{Rad}(\mathbf{A})=\mathbf{A}\). We say that a bounded Wajsberg hoop \(\mathbf{A}\) **has rank** \(n\) if \(\mathbf{A}/\mathrm{Rad}(\mathbf{A})\cong\mathbf{L}_{n}\). For any bounded Wajsberg hoop \(\mathbf{A}\), the **divisibility index** \(d_{\mathbf{A}}\) of \(\mathbf{A}\) is the maximum \(k\) such that \(\mathbf{L}_{k}\) is embeddable in \(\mathbf{A}\), if such a maximum exists; otherwise \(d_{\mathbf{A}}=\infty\).
Here is a summary of the main results about the rank and the divisibility index; the proofs are either trivial or can be found in [7] or [28].
**Lemma 4.1**.: _For any \(n,k\geq 1\)_
1. \(\mathbf{L}_{n}\) _is simple and_ \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{L}_{k})\) _if and only if_ \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{L}_{k}^{\infty})\) _if and only if_ \(n\mid k\)_._
2. \(\mathbf{L}_{n}\) _has rank_ \(n\) _and divisibility index_ \(n\)_._
3. _For any_ \(k\geq 0\)_,_ \(\mathbf{L}_{n,k}\) _is subdirectly irreducible,_ \(\mathbf{L}_{n,k}\) _has rank_ \(n\) _and_ \(d_{\mathbf{L}_{n,k}}=\gcd(n,k)\)_; in particular_ \(d_{\mathbf{L}_{n}^{\infty}}=n\)_._
4. _If_ \(\mathbf{A}\) _has rank_ \(n\)_, then_ \(\mathbf{A}\in\mathbf{ISP}_{u}(\mathbf{L}_{n,k})\) _if and only if_ \(d_{\mathbf{A}}\) _divides_ \(\gcd(n,k)\)_._
5. _If_ \(\mathbf{A}\) _has rank_ \(n\)_, then_ \(\mathbf{L}_{n,k}\in\mathbf{ISP}_{u}(\mathbf{A})\) _if and only if_ \(\gcd(n,k)\) _divides_ \(d_{\mathbf{A}}\)_._
6. _If_ \(\mathbf{A}\) _is a nontrivial totally ordered cancellative hoop then_ \(\mathbf{ISP}_{u}(\mathbf{A})=\mathbf{ISP}_{u}(\mathbf{C}_{\omega})\)_._
7. _If_ \(\mathbf{A}\) _is a bounded Wajsberg chain of finite rank_ \(k\)_, then_ \(d_{\mathbf{A}}\) _divides_ \(k\)_, and_ \(\mathbf{ISP}_{u}(\mathbf{A})=\mathbf{ISP}_{u}(\mathbf{L}_{k,d_{\mathbf{A}}})\)_._
8. _If_ \(\mathbf{A}\) _is a bounded Wajsberg chain of finite rank_ \(n\)_, then_ \(\mathbf{ISP}_{u}(\mathbf{A})=\mathbf{ISP}_{u}(\mathbf{L}_{n}^{\infty})\) _if and only if_ \(d_{\mathbf{A}}=n\)_._
We have:
**Theorem 4.2**.: _[_5_]_ _Let \(\mathbf{A}_{1},\ldots,\mathbf{A}_{n}\) be Wajsberg chains; if for all \(i\leq n\)_
* \(\mathbf{A}_{i}\) _is finite, or_
* \(\mathbf{A}_{i}\) _is cancellative, or_
* \(\mathbf{L}_{n}\in\mathbf{IS}(\mathbf{A}_{i})\) _for all_ \(n\)_, or_
* \(\mathbf{A}_{i}\) _is infinite, bounded and the rank of_ \(\mathbf{A}_{i}\) _is equal to_ \(d_{\mathbf{A}_{i}}\)_,_
_then \(\mathbf{Q}(\mathbf{A}_{1},\ldots,\mathbf{A}_{n})=\mathbf{V}(\mathbf{A}_{1}, \ldots,\mathbf{A}_{n})\). Moreover if \(n=1\) then the converse holds as well._
This shows that for every reduced presentation \(P\), \(\mathsf{V}(P)=\mathsf{Q}(P)\), and henceforth a proper subvariety \(\mathsf{V}(P)\) of Wajsberg hoops is structural if and only if \(\mathsf{V}(P)=\mathbf{Q}(\mathbf{F}_{\mathsf{V}(P)}(x))\). First we will consider locally finite varieties.
It is clear from the definition that both \(\mathbf{C}_{\omega}\) and \(\mathbf{L}_{j}^{\infty}\) contain finitely generated subalgebras that fail to be finite. Hence from Theorem 3.2 we get that a variety \(\mathsf{V}\) of Wajsberg hoops is locally finite if and only if it is \(\mathsf{V}(P)\) where \(P=(I,\emptyset,\emptyset)\) if and only if it is finitely generated.
Now Wajsberg hoops are basic hoops and:
**Theorem 4.3**.: _[_9_]_ _Every finite basic hoop is projective in the class of finite basic hoops. So if \(\mathsf{V}\) is a locally finite variety of basic hoops, every finite algebra in \(\mathsf{V}\) is projective in \(\mathsf{V}\)._
Therefore:
**Theorem 4.4**.: _Every locally finite quasivariety of Wajsberg hoops is a primitive variety._
Proof.: Let \(\mathsf{Q}\) be a locally finite quasivariety of Wajsberg hoops; then it is easy to check that \(\mathsf{V}=\mathbf{H}(\mathsf{Q})\) is locally finite and hence every finite algebra of \(\mathsf{V}\) is projective in \(\mathsf{V}\). By Theorem 2.8, \(\mathsf{V}\) is primitive and thus every subquasivariety of \(\mathsf{V}\) is equational, i.e. it is a variety. This implies that \(\mathsf{Q}\) is a variety and in fact \(\mathsf{Q}=\mathsf{V}\).
Via the Blok-Pigozzi connection we get:
**Corollary 4.5**.: _Every locally tabular extension of \(\mathcal{MV}^{+}\) is tabular, axiomatic and hereditarily structurally complete._
Proof.: By Theorem 3.2 every proper subvariety of Wajsberg hoops is of the form \(\mathsf{V}(I,J,K)\) for some reduced presentation. It is obvious that \(\mathbf{C}_{\omega}\) is not locally finite, and it is easy to see that the same holds for \(\mathbf{L}_{n}^{\infty}\) for \(n\geq 1\). Therefore if \(\mathsf{V}(I,J,K)\) is locally finite, then \(J=K=\emptyset\). But \(I\) is a finite set, so \(\mathsf{V}(I,\emptyset,\emptyset)\) is finitely generated. Hence every locally finite variety of Wajsberg hoops is finitely generated; therefore every locally tabular extension of \(\mathcal{MV}^{+}\) is tabular, and the rest follows from Theorem 4.4.
The variety \(\mathsf{C}=\mathsf{V}(\emptyset,\emptyset,\{0\})\) is the variety of **cancellative hoops**; now \(\mathsf{C}\) can be shown to be primitive by a variety of means. The simplest one is probably to observe first that it is an atom in the lattice of subvarieties \(\Lambda(\mathsf{WH})\)[12] hence it is equationally complete. Then one can quote [14] where it is stated that any equationally complete congruence modular variety has no proper subquasivarieties. As \(\mathsf{C}\) is congruence distributive (having a lattice reduct) it has no proper subquasivarieties and hence it is primitive.
Now we can characterize all the structural varieties of Wajsberg hoops. First we observe that \(\mathsf{WH}\) itself is not structural; this is well-known and can be shown in several ways. The most direct one is probably to observe that the set \(\{\mathbf{L}_{p}:p\text{ prime}\}\) consists of simple algebras with no proper subalgebras and then invoke Corollary 1 in [1].
**Theorem 4.6**.: _Let \(P=(I,J,K)\) be a reduced triple such that \(\mathsf{V}=\mathsf{V}(I,J,K)\) is a proper subvariety of Wajsberg hoops. Then \(\mathsf{V}\) is structural if and only if either \(J=\emptyset\), or \(J=\{1\}\)._
Proof.: Suppose that \(J\neq\emptyset\) and \(J\neq\{1\}\). Then there is an \(n\in J\) with \(n>1\). Let \(\mathsf{K}=\{\mathbf{L}_{i}:i\in I\}\cup\{\mathbf{L}_{j}^{\infty}:j\in J,\,j\neq n\}\cup\{\mathbf{L}_{n,1}\}\), adding \(\mathbf{C}_{\omega}\) if \(K\neq\emptyset\); clearly \(\mathsf{Q}(\mathsf{K})\subseteq\mathsf{Q}(P)=\mathsf{V}(P)\). If \(\mathbf{L}_{n}^{\infty}\in\mathsf{Q}(\mathsf{K})\) then, by Theorem 2.2, \(\mathbf{L}_{n}^{\infty}\in\mathbf{ISP}_{u}(\mathsf{K})\), since it is subdirectly irreducible. But all the bounded chains in \(\mathsf{K}\) are either finite or their divisibility index is not divisible by \(n\); hence, by Lemma 4.1 (3) and (4), \(\mathbf{L}_{n}^{\infty}\notin\mathbf{ISP}_{u}(\mathsf{K})\). Therefore \(\mathsf{Q}(\mathsf{K})\subsetneq\mathsf{Q}(P)\); however, as \(\mathbf{L}_{n,1}\in\mathbf{V}(\mathbf{L}_{n}^{\infty})\), \(\mathbf{H}(\mathsf{Q}(\mathsf{K}))=\mathsf{V}(P)\). So \(\mathsf{V}(P)\) is not structural.
For the converse, modulo the results on locally finite varieties above, we need only to prove that \(\mathsf{V}(P)\) is structurally complete whenever \(P=(I,\emptyset,\{\omega\})\) or \(P=(I,\{1\},\emptyset)\). In either case, by Theorems 3.10 and 3.11, every generator of \(\mathsf{V}(P)=\mathsf{Q}(P)\) is embeddable in \(\mathbf{B}_{\Delta}\). So
\[\mathsf{V}(P)=\mathsf{Q}(P)\subseteq\mathbf{Q}(\mathbf{B}_{\Delta})=\mathbf{ Q}(\mathbf{F}_{\mathsf{V}(P)}(x))\]
and this proves that \(\mathsf{V}(P)\) is structural.
By the description of the proper subvarieties of \(\mathsf{WH}\) by reduced triples, we get at once:
**Corollary 4.7**.: _A variety of Wajsberg hoops is structural if and only if it is primitive._
And thus, via the Blok-Pigozzi connection:
**Corollary 4.8**.: _An axiomatic extension of \(\mathcal{MV}^{+}\) is structurally complete if and only if it is hereditarily structurally complete._
### Structural subquasivarieties
For quasivarieties we need to work a little bit more. First we need a lemma that appears in [29] (Lemma 4.5):
**Lemma 4.9**.: _Let \(n>1\) and let \(\mathbf{D}_{n}\) be the subalgebra of \(\mathbf{L}_{n,1}\times\mathbf{L}_{n,n-1}\) generated by \(((1,0),(1,1))\). Then \(\mathbf{L}_{n,1}\) is embeddable in \(\mathbf{D}_{n}\)._
Using Lemma 4.9 we can prove:
**Lemma 4.10**.: _Let \(P=(I,J,\emptyset)\) be a reduced triple. Then for any \(j\in J\), \(\mathbf{L}_{j,1}\) is embeddable in \(\mathbf{B}_{\Delta}\)._
Proof.: If \(J=\emptyset\) we have nothing to prove.
If \(1\in J\), then since \(P\) is a reduced triple, \(P=(I,\{1\},\emptyset)\). Let \(m\) be the product of all the elements of \(I\) (if \(I=\emptyset\) take \(m=1\)) and consider the Wajsberg function
\[f(x)=L(0,0;\frac{1}{3m},0;\frac{2}{3m},1;1,1).\]
This function has value \(0\) in a neighborhood of \(0\) and has value \(1\) in a neighborhood of every \(\frac{n}{m}\neq 0\), so, by Lemma 3.9, \(f(\overline{g}(k,h,i))=0\) if \((k,h,i)=(1,0,0)\), otherwise \(f(\overline{g}(k,h,i))=1\). Since \(\overline{g}(1,0,0)\) is a generator of \(\mathbf{L}_{1}^{\infty}\), \(\overline{g}\lor f(\overline{g})\) generates a subalgebra of \(\mathbf{B}_{\Delta}\) isomorphic to \(\mathbf{L}_{1}^{\infty}\). Now, by Lemma 4.1, \(\mathsf{Q}(\mathbf{L}_{1}^{\infty})=\mathsf{Q}(\mathbf{L}_{1,1})\); hence \(\mathbf{L}_{1,1}\) is embeddable into \(\mathbf{L}_{1}^{\infty}\) and thus into \(\mathbf{B}_{\Delta}\).
Now suppose \(1\notin J\) and fix \(j\in J\). Let \(m\) be the product of every element of \(I\cup J\) and consider the Wajsberg function
\[f(x)=L(0,1;\frac{1}{j}-\frac{2}{3m},1;\frac{1}{j}-\frac{1}{3m},0;\frac{1}{j}+ \frac{1}{3m},0;\frac{1}{j}+\frac{2}{3m},1;1,1).\]
This function has value \(0\) in a neighborhood of \(\frac{1}{j}\) and has value \(1\) in a neighborhood of \(\frac{n}{m}\) when \(\frac{n}{m}\neq\frac{1}{j}\). By Lemma 3.9, \(f(\overline{g}(k,h,i))=0\) if \((k,h,i)=(j,1,0)\) or \((k,h,i)=(j,j-1,0)\). Now let us show that \(f(\overline{g}(k,h,i))=1\) in all the other cases.
Take \((k,h,i)\neq(j,1,0),(j,j-1,0)\). If \(i\in\{0,1\}\), then \(\overline{g}(k,h,i)=(r,s)\) for some \((r,s)\in\mathbf{L}_{k,h}\); notice that, by construction of \(f\), \(f(\overline{g}(k,h,i))\neq 1\) only if \(\frac{r}{k}=\frac{1}{j}\), that is only if \(jr=k\); but this cannot happen because the triple is reduced, so \(k\) cannot be a multiple of \(j\). If \(i=2\), then \(\overline{g}(k,h,i)=h\) for some \(0\leq h<k\) with \(k,h\) relatively prime; this time \(f(\overline{g}(k,h,i))\neq 1\) only if \(\frac{h}{k}=\frac{1}{j}\), that is only if \(j=\frac{k}{h}\); but this means that \(h\) divides \(k\), which is not possible because \(k,h\) are relatively prime (and if \(h=1\), then \(j=k\in I\), which again contradicts the reducedness of the triple).
Thus, if we consider \(\overline{g}\lor f(\overline{g})\), this generates a subalgebra of \(\mathbf{B}_{\Delta}\) isomorphic to \(\mathbf{D}_{j}\) as defined in Lemma 4.9; moreover, by the same lemma, we get that \(\mathbf{L}_{j,1}\) is embeddable into \(\mathbf{D}_{j}\) and hence into \(\mathbf{B}_{\Delta}\).
Let \(P=(I,J,K)\) be a triple (not necessarily reduced) and let \(\mathsf{Q}[I,J,K]\) be defined in the following way
\[\mathsf{Q}[I,\emptyset,K]=\mathsf{Q}(I,\emptyset,K)\] \[\mathsf{Q}[I,J,\emptyset]=\mathsf{Q}(\{\mathbf{L}_{i}:i\in I\} \cup\{\mathbf{L}_{j,1}:j\in J\})\quad\text{if }J\neq\emptyset.\]
**Theorem 4.11**.: _Let \(\mathsf{Q}\) be a quasivariety of Wajsberg hoops; then \(\mathsf{Q}\) is structural if and only if either \(\mathsf{Q}=\mathsf{Q}(\mathbf{F}_{\mathsf{WH}}(x))\) or else \(\mathsf{Q}=\mathsf{Q}[P]\) for some reduced triple \(P\)._
Proof.: As the structurally complete subquasivarieties are exactly \(\mathsf{Q}(\mathbf{F}_{\mathsf{V}}(x))\) for \(\mathsf{V}\subseteq\mathsf{WH}\), it is enough to show that for every reduced triple \(P=(I,J,K)\), \(\mathsf{Q}[P]=\mathbf{Q}(\mathbf{F}_{\mathsf{V}(P)}(x))\). Moreover if \(\mathsf{V}\) is a subvariety of \(\mathsf{WH}\), then \(\mathbf{Q}(\mathbf{F}_{\mathsf{V}}(x))\) is the structural core of \(\mathsf{V}\), i.e. the smallest subquasivariety \(\mathsf{Q}\) of \(\mathsf{V}\) such that \(\mathsf{V}(\mathsf{Q})=\mathsf{V}\). Now for any reduced presentation \(P\), we clearly have that \(\mathsf{V}(\mathsf{Q}[P])=\mathsf{V}(P)\); so to get the conclusion it is enough to prove that \(\mathsf{Q}[P]\subseteq\mathbf{Q}(\mathbf{F}_{\mathsf{V}(P)}(x))\).
But Theorems 3.10 and 3.11 and Lemma 4.10 show that any generator of \(\mathsf{Q}[P]\) is embeddable in \(\mathbf{B}_{\Delta}\); so
\[\mathsf{Q}[P]\subseteq\mathsf{Q}(\mathbf{B}_{\Delta})=\mathbf{Q}(\mathbf{F}_ {\mathsf{V}(P)}(x))\]
and the conclusion holds.
Observe that \(\mathbf{Q}(\mathbf{L}_{1,1})=\mathbf{Q}(\mathbf{L}_{1}^{\infty})\); so if \(P=(I,J,K)\) is such that either \(J=\emptyset\) or \(J=\{1\}\), then \(Q[P]=Q(P)\) and they are all in fact primitive varieties, by Corollary 4.7. We can make another observation of some relevance based on the results in [23]. It is well known (and easy to prove) that the Wajsberg chains coincide with the finitely subdirectly irreducible Wajsberg hoops and the variety of Wajsberg hoops is congruence distributive. Then:
* every structural subquasivariety \(\mathsf{Q}\) is generated by finitely subdirectly irreducible Wajsberg hoops and hence it is relatively congruence distributive;
* in any structural subquasivariety \(\mathsf{Q}\) the finitely \(\mathsf{Q}\)-irreducible algebras are finitely subdirectly irreducible in the absolute sense, i.e. they are all Wajsberg chains;
* hence any algebra \(\mathbf{A}\in\mathsf{Q}\) is subdirectly embeddable in a product of Wajsberg chains that belong to \(\mathsf{Q}\).
This information might be useful for characterizing the structural subquasivarieties \(\mathsf{Q}[P]\) that are primitive (besides the ones that are already known). We will consider the quasivarieties \(\mathsf{Q}[I,J,K]\) where \(K=\emptyset\), \(J\neq\emptyset\) and \(J\neq\{1\}\); in this case we will write simply \(\mathsf{Q}[I,J]\).
**Lemma 4.12**.: _Let \((I,J,\emptyset)\) and \((I^{\prime},J^{\prime},\emptyset)\) be two triples (not necessarily reduced), then \(\mathsf{Q}[I,J]\subseteq\mathsf{Q}[I^{\prime},J^{\prime}]\) if and only if for every \(i\in I\) s.t. \(i\neq 1\) and for every \(j\in J\), there are \(i^{\prime}\in I^{\prime}\) and \(j^{\prime}\in J^{\prime}\) with \(i|i^{\prime}\) and \(j|j^{\prime}\)._
Proof.: If \(i=1\), then \(\mathbf{L}_{1}\in\mathbf{IS}(\mathbf{L}_{n})\) and \(\mathbf{L}_{1}\in\mathbf{IS}(\mathbf{L}_{n,1})\) for every \(n\). Take now \(i\neq 1\) and suppose that \(\mathbf{L}_{i}\in\mathbf{ISP}_{u}(\{\mathbf{L}_{i^{\prime}}:i^{\prime}\in I^{\prime}\}\cup\{\mathbf{L}_{j^{\prime},1}:j^{\prime}\in J^{\prime}\})\); then \(\mathbf{L}_{i}\notin\mathbf{ISP}_{u}(\{\mathbf{L}_{j^{\prime},1}:j^{\prime}\in J^{\prime}\})\), as \(\mathbf{L}_{i}\) has divisibility index \(i\neq 1\) and each \(\mathbf{L}_{j^{\prime},1}\) has divisibility index equal to \(1\). On the other hand no \(\mathbf{L}_{j,1}\) can belong to \(\mathbf{ISP}_{u}(\{\mathbf{L}_{i^{\prime}}:i^{\prime}\in I^{\prime}\})\) for obvious reasons. From here it is straightforward to check that the conclusion holds.
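Since the criterion of Lemma 4.12 is purely arithmetical, it can be checked mechanically; the following small sketch is ours (not part of the paper) and decides the inclusion \(\mathsf{Q}[I,J]\subseteq\mathsf{Q}[I^{\prime},J^{\prime}]\) for finite sets of positive integers.

```python
def contained(I, J, I2, J2):
    """Lemma 4.12: Q[I,J] is included in Q[I2,J2] iff every i in I other
    than 1 divides some i' in I2 and every j in J divides some j' in J2."""
    return (all(any(i2 % i == 0 for i2 in I2) for i in I if i != 1)
            and all(any(j2 % j == 0 for j2 in J2) for j in J))

# Example (cf. Proposition 4.13 below): Q[{},{n}] sits strictly
# below Q[{n},{n}] for n = 3.
assert contained(set(), {3}, {3}, {3})
assert not contained({3}, {3}, set(), {3})
```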
Now we can give an example of a class of quasivarieties that are not primitive.
**Proposition 4.13**.: _Let \(\mathsf{Q}=\mathsf{Q}[I,J]\) with \((I,J,\emptyset)\) a reduced triple; if \(I\mathord{\downarrow}\cap J\mathord{\downarrow}\supsetneq\{1\}\), then \(\mathsf{Q}\) is not primitive._
Proof.: If \(I\mathord{\downarrow}\cap J\mathord{\downarrow}\supsetneq\{1\}\), then there exists \(n\neq 1\) in \(I\mathord{\downarrow}\cap J\mathord{\downarrow}\) and, by the previous lemma, \(\mathsf{Q}[n,n]\subseteq\mathsf{Q}[I,J]\). Now, notice that \(\mathsf{V}(\mathsf{Q}[n,n])=\mathsf{V}(\emptyset,n,\emptyset)=\mathsf{V}(\mathsf{Q}[\emptyset,n])\), so, by Theorem 4.11, the structural core of \(\mathsf{Q}[n,n]\) is \(\mathsf{Q}[\emptyset,n]\). But, by the previous lemma, \(\mathsf{Q}[\emptyset,n]\subsetneq\mathsf{Q}[n,n]\); so, in particular, \(\mathsf{Q}[n,n]\) is different from its structural core, hence it is not structural. Therefore \(\mathsf{Q}\) contains a quasivariety that is not structural, and this means that it is not primitive.
We can also give an example of a class of primitive quasivarieties.
**Proposition 4.14**.: _Let \(Q=Q[\emptyset,p]\), where \(p\) is a prime number; then \(Q\) is primitive._
Proof.: We know that \(V(Q)=V(\emptyset,p,\emptyset)\) and, since \(Q\) is structural, this means that every quasivariety strictly contained in \(Q\) generates a variety that is strictly contained in \(V(Q)\). So now let's consider all the subvarieties of \(V(\emptyset,p,\emptyset)\).
If we consider a quasivariety \(Q^{\prime}\) that generates a variety strictly contained in \(V(\emptyset,p,\emptyset)\), then it has to be contained in the coatoms of the lattice, that are \(V(p,\emptyset,\omega)\) and \(V(\emptyset,1,\emptyset)\). By theorem 4.6, we know that these two varieties are structural, hence primitive, so every quasivariety contained in them is structural.
Unfortunately, we fall short of characterizing all the primitive quasivarieties, due to our lack of understanding of the lattice of all the subquasivarieties. The proof of Proposition 4.14 is based on the fact that all the quasivarieties strictly contained in \(Q[\emptyset,p]\) are actually varieties; so we know that the lattice of the subquasivarieties of \(Q\) is contained in the lattice of the subvarieties of \(V(Q)\), which clearly is not always the case.
Consider for example the quasivariety \(Q=Q[\emptyset,\{p,q\}]\), where \(p\) and \(q\) are distinct primes. Using Lemma 4.12, we may sketch what the lattice of the subquasivarieties of \(Q\) looks like.
Now, \(Q[\emptyset,p]\) and \(Q[\emptyset,q]\) are primitive by Proposition 4.14, but we do not know whether there is any quasivariety in the intervals \([Q[\emptyset,p],Q[\emptyset,\{p,q\}]]\) and \([Q[\emptyset,q],Q[\emptyset,\{p,q\}]]\). Note that, if there is such a quasivariety, then it cannot be generated by chains,
and that would immediately imply that \(Q\) is not primitive, because by Theorem 4.11 this quasivariety would not be structural.
Another problem is that, given a quasivariety \(Q\), the lattice of all the varieties \(V(Q^{\prime})\), with \(Q^{\prime}\subseteq Q\), is almost always strictly contained in the lattice of the subvarieties of \(V(Q)\). For example consider \(Q=Q[\emptyset,pq]\); then \(V(Q)=V(\emptyset,pq,\emptyset)\) and we know that \(V(pq,p,\emptyset)\) is a subvariety of \(V(\emptyset,pq,\emptyset)\). Now, if a quasivariety generates \(V(pq,p,\emptyset)\), then it is not primitive, because it contains \(Q[pq,p]\), which is not primitive by Proposition 4.13; but by Lemma 4.12 we know that no subquasivariety of \(Q\) can contain \(\mathbf{L}_{pq}\), so there is no subquasivariety of \(Q\) that can generate \(V(pq,p,\emptyset)\).
## 5. Conclusions and future work
What can we say about the fragments of \(\mathcal{MV}^{+}\)? We will consider only the fragments containing \(\to\), as they are the algebraizable ones. The \(\{\to\}\)-fragment was first studied in [36]; its equivalent algebraic semantics is the variety \(\mathsf{LBCK}\) of **Lukasiewicz \(\mathsf{BCK}\) algebras**. We have that:
* every locally finite quasivariety of \(\mathsf{LBCK}\)-algebras is a primitive variety [9];
* the only non-locally finite subvariety is the entire variety \(\mathsf{LBCK}\)[36];
* \(\mathsf{LBCK}\) is generated as a quasivariety by its finite chains [6];
* every infinite chain contains all the finite chains as subalgebras [36];
* so if \(Q\) is a quasivariety which contains only finitely many chains, then \(V(Q)\) is locally finite, hence primitive;
* otherwise \(Q\) contains infinitely many chains and so \(V(Q)=Q=\mathsf{LBCK}\).
Hence every subquasivariety of \(\mathsf{LBCK}\) is a variety and \(\mathsf{LBCK}\) is primitive.
For the other algebraizable fragments, observe that if \(\to\) is present then \(\vee\) is definable and if \(\to\) and \(\cdot\) are present, then \(\wedge\) is definable. So the only remaining interesting fragment is the \(\{\to,\wedge,1\}\)-fragment that has been considered in [4]. Its equivalent algebraic semantics is the variety \(\mathsf{LBCK}^{\wedge}\) of \(\mathsf{LBCK}\)-semilattices; from the results in [4] (and some straightforward calculations) one can prove that \(\mathsf{LBCK}^{\wedge}\) is primitive.
As far as the future work is concerned, there is a very natural path to follow. In this paper we have characterized all the structurally complete finitary extensions of \(\mathcal{MV}^{+}\) and in [29] the same has been basically done for finitary extensions of \(\mathcal{MV}\). Hajek's Basic Logic \(\mathcal{BL}\)[31] has been investigated from the algebraic point of view in many papers through its equivalent algebraic semantics, that is the variety of \(\mathsf{BL}\)-algebras (see for instance [2] and the bibliography therein). In particular, in [7] (Theorem 3.7) it has been shown that there is a very deep algebraic connection between \(\mathsf{BL}\)-algebras (and their positive subreducts), \(\mathsf{MV}\)-algebras and Wajsberg hoops. With the knowledge we have accumulated so far (and using also more general techniques introduced in [9]) we believe we can tackle the problem of describing the finitary structurally complete extensions of \(\mathcal{BL}\) with some degree of success.
|
2306.17412 | Revealing the spatial extent of patent citations in the UK: How far does
knowledge really spillover? | Access to external knowledge sources through localized knowledge spillovers
is an important determinant of the innovative capabilities of firms. However,
the geographical extent of knowledge spillovers is not well understood. In this
article we use patent citations in the UK as a proxy of knowledge flows and
analyze the spatial extent of knowledge spillovers relative to the distribution
of existing knowledge creation. We find that local, regional and country
specific institutional factors play an important role in influencing the
probability of knowledge spillovers and that most knowledge spillovers are
exhausted within an extended commuting boundary. It is also shown that these
effects have increased over time and that the spatial extent of knowledge
spillovers varies by industry. | Philip Wilkinson, Elsa Arcaute | 2023-06-30T06:01:00Z | http://arxiv.org/abs/2306.17412v1 | # Revealing the spatial extent of patent citations in the UK:
###### Abstract
Access to external knowledge sources through localized knowledge spillovers is an important determinant of the innovative capabilities of firms. However, the geographical extent of knowledge spillovers is not well understood. In this article we use patent citations in the UK as a proxy of knowledge flows and analyze the spatial extent of knowledge spillovers relative to the distribution of existing knowledge creation. We find that local, regional and country specific institutional factors play an important role in influencing the probability of knowledge spillovers and that most knowledge spillovers are exhausted within an extended commuting boundary. It is also shown that these effects have increased over time and that the spatial extent of knowledge spillovers varies by industry.
**Keywords**: Agglomeration, knowledge spillovers, Innovation, International Patents
## 1 Introduction
Innovation depends on the combination of existing ideas, concepts, and theories to create something new which can be utilised for societal or commercial benefit (Carlino & Kerr, 2015). To this end it is increasingly recognised that the innovative capacity of a firm depends not only on its ability to combine internal stocks of knowledge and resources but also its ability to acquire and implement external sources of knowledge (Paci, et al., 2014; Crescenzi, et al., 2016). Utilising these external sources reduces the internal costs of innovation and allows firms to develop novel combinations which they may not have developed on their own (Rammer, et al., 2020). In this sense, geographical proximity has been identified as a factor that can influence the access to these external resources through knowledge spillovers, affecting the innovative performance of firms. However, while this has been acknowledged, little is known about the exact geographic scope of these spillovers (Cappelli, et al., 2014).
The idea of knowledge spillovers was originally conceived by Alfred Marshall as 'The mysteries of the trade become no mysteries; but are as it were in the air' (Marshall, 1890, p. 251), and it is similarly reflected in subsequent theories from a variety of academic disciplines (McCann, 2013). The importance of this, and its dynamic understanding, can be readily seen in recent developments of the literature that focus on the agglomeration of firms, such as Endogenous Growth Theory and Porter's Cluster Concept (Fritsch & Franke, 2004). Different factors have been suggested to influence these knowledge spillovers, such as the physical distance between firms, institutional differences and the degree of knowledge crossover; however, these three have not been considered in combination. In this piece we examine these factors concurrently, considering the physical distance between inventors/applicants, the institutional constraints imposed by local, regional and country level boundaries, and knowledge similarity related to the technological domain.
2307.16425 | All-In-One Metrical And Functional Structure Analysis With Neighborhood
Attentions on Demixed Audio | Music is characterized by complex hierarchical structures. Developing a
comprehensive model to capture these structures has been a significant
challenge in the field of Music Information Retrieval (MIR). Prior research has
mainly focused on addressing individual tasks for specific hierarchical levels,
rather than providing a unified approach. In this paper, we introduce a
versatile, all-in-one model that jointly performs beat and downbeat tracking as
well as functional structure segmentation and labeling. The model leverages
source-separated spectrograms as inputs and employs dilated neighborhood
attentions to capture temporal long-term dependencies, along with non-dilated
attentions for local instrumental dependencies. Consequently, the proposed
model achieves state-of-the-art performance in all four tasks on the Harmonix
Set while maintaining a relatively lower number of parameters compared to
recent state-of-the-art models. Furthermore, our ablation study demonstrates
that the concurrent learning of beats, downbeats, and segments can lead to
enhanced performance, with each task mutually benefiting from the others. | Taejun Kim, Juhan Nam | 2023-07-31T06:20:01Z | http://arxiv.org/abs/2307.16425v1 | # All-in-one metrical and functional structure analysis
###### Abstract
Music is characterized by complex hierarchical structures. Developing a comprehensive model to capture these structures has been a significant challenge in the field of Music Information Retrieval (MIR). Prior research has mainly focused on addressing individual tasks for specific hierarchical levels, rather than providing a unified approach. In this paper, we introduce a versatile, all-in-one model that jointly performs beat and downbeat tracking as well as functional structure segmentation and labeling. The model leverages source-separated spectrograms as inputs and employs dilated neighborhood attentions to capture temporal long-term dependencies, along with non-dilated attentions for local instrumental dependencies. Consequently, the proposed model achieves state-of-the-art performance in all four tasks on the Harmonix Set while maintaining a relatively lower number of parameters compared to recent state-of-the-art models. Furthermore, our ablation study demonstrates that the concurrent learning of beats, downbeats, and segments can lead to enhanced performance, with each task mutually benefiting from the others.
Taejun Kim and Juhan Nam†, KAIST, Graduate School of Culture Technology, Daejeon, Republic of Korea. Index terms: beat tracking, downbeat tracking, structure analysis, multi-task learning, transformers.
Footnote †: This research was supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2023 (Project Name: Development of high-speed music search technology using deep learning, Project Number: CR202104004)
## 1 Introduction
Music has a hierarchical organization characterized by distinct levels of structural units. The foundational level comprises metrical elements, including beats, bars, and segments, which form the basic rhythmic structure. Ascending the hierarchy, these metrical components are assembled into functional units, such as verses and choruses, that collectively shape the overall architecture of the piece. Despite the inherent interdependence of the hierarchical levels, research in the field of MIR has primarily been conducted as isolated tasks such as beat/downbeat tracking [1, 2, 3, 4, 5], segmentation [6, 7], and functional structure labeling [8, 9, 10], missing the potential benefits of interdependence gained from all the metrical and functional structure information. However, joint learning of the hierarchical information levels in a unified model presents considerable challenges due to the substantial length and high dimensionality of individual songs represented as audio data. Furthermore, the songs contain a wide variety of acoustic and musical variations within the underlying metrical and functional structure layers. In this paper, we attempt to predict beat, downbeat, segmentation, and functional structure labels all at once with a single model and show their synergy in the multi-task learning.
The core challenge in the attempt is designing an efficient model that can learn the information with a large time-granularity over long-range audio frame sequences. In the beat/downbeat tracking task, models have been designed to have a large receptive field to cover a sufficient number of beats and downbeats. A representative model is Temporal Convolutional Networks (TCN), a family of convolutional neural networks with dilation operations which has an exponentially increasing size of receptive fields as the layer goes up [1, 2, 3, 4]. Recently, researchers have improved the performance further using variants of the transformer architecture. For example, SpecTNT-TCN used the time-frequency transformer (SpecTNT) for efficient long-term representation learning and integrated it with the TCN module for performance gain [11]. Beat Transformer employed the dilation operations in the self-attention layers along with demixed input, achieving state-of-the-art results across five datasets [5].
Unlike beats and downbeats, segmentation boundaries and temporal change of functional structure labels are much sparser. Thus, the tasks have primarily been tackled as segmentation problems based on the self-similarity of local audio features within a song [8]. One group of previous works explored better audio features or embeddings using temporal affinity [6], semantic labels [7], or structure labels [9]. The other group focused on segmentation algorithms that leverage homogeneity, repetition, and novelty principles in the segment level [12, 13, 14]. However, recent models based on convolutional neural networks or transformer predicted the "boundaryness" or "chorusness" of an excerpt directly from the audio and achieved a new state-of-the-art [15, 10].
Following recent advances in the aforementioned tasks, our proposed model builds upon the transformer architecture. Specifically, we incorporate dilated self-attention layers and demixed input from Beat Transformer. However, we introduce three major modifications. First, we employ "neighborhood attention" which effectively creates attention windows enclosing nearest possible neighbors without requiring zero-padding [16]. This facilitates widening the receptive field of the model without unnecessary computation. Second, we set the model to predict not only beat and downbeat but also segmentation boundary and functional structure labels directly from audio input. Through a comprehensive ablation study, we investigate performance interaction in the all-in-one learning. Lastly, we significantly streamline the model size following the configuration of the TCN model. We evaluated the proposed model on the Harmonix Set which includes all metrical and functional structure labels [17]. We show that our proposed model outperforms recent state-of-the-art models in all four tasks while maintaining a relatively small number of parameters (about 300K). The code and pre-trained models are accessible via the provided link 1.
## 2 Method
### Model Architecture
An overview of the proposed model is illustrated on the left side of Figure 1. The model utilizes demixed sources as inputs, with convolutional layers and max-pooling as front-end processing, following Beat Transformer [5]. However, the transformer modules comprise two distinct blocks based on neighborhood attentions: 1) a 1D Dilated Neighborhood Attention (DiNA) block, which models long-term temporal dependencies using dilations, and 2) a 2D Neighborhood Attention (NA) block, which models inter-instrument dependencies while preserving locality by focusing on local neighbors. The concept of stacking alternating dilated and non-dilated blocks originates from the original DiNA design [18].
The 1D DiNA block includes two DiNA modules inspired by the TCN model [3]; the second DiNA module uses a doubled dilation, aiming for the model to learn musical properties at various levels that are integer multiples of each other. The outputs of the two DiNA modules are first added to the skip connection, then concatenated, and fed into the next layer. The multilayer perceptron (MLP) consists of two fully connected layers, which initially increase the embedding dimension to \(8C\) and subsequently reduce it back to its original size of \(C\) to keep the embedding size consistent. The dilations grow up to \(2^{10}\) and \(2^{11}\), yielding receptive field sizes of approximately 41 and 82 seconds for the first and second DiNA modules, respectively. The size of the embedding dimension \(C\) remains fixed at 24 throughout all transformer blocks. The 2D NA block is identical to the original NA [16].
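The wiring just described can be turned into code roughly as follows. This is a minimal PyTorch sketch and not the authors' implementation: the number of heads, the layer-norm placement, and feeding the concatenated \(2C\)-dimensional output directly into the \(8C\) MLP are our assumptions, the interleaved 2D NA blocks are omitted, and `NeighborhoodAttention1D` (with its `dilation` argument) comes from the NATTEN package the paper builds on.

```python
import torch
import torch.nn as nn
from natten import NeighborhoodAttention1D  # NATTEN provides the (Di)NA modules

class TransformerModule(nn.Module):
    """One 1D DiNA block: two dilated neighborhood attentions, the second
    with doubled dilation; each branch keeps a skip connection, the two
    are concatenated, and an MLP expands to 8C and projects back to C."""

    def __init__(self, dim=24, heads=4, kernel_size=5, dilation=1, p=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn1 = NeighborhoodAttention1D(dim=dim, num_heads=heads,
                                             kernel_size=kernel_size,
                                             dilation=dilation)
        self.attn2 = NeighborhoodAttention1D(dim=dim, num_heads=heads,
                                             kernel_size=kernel_size,
                                             dilation=2 * dilation)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 8 * dim), nn.GELU(),
                                 nn.Dropout(p), nn.Linear(8 * dim, dim))

    def forward(self, x):              # x: (batch, time, dim)
        h = self.norm(x)
        a = x + self.attn1(h)          # each branch is added to the skip connection
        b = x + self.attn2(h)
        return x + self.mlp(torch.cat([a, b], dim=-1))

# Eleven blocks with exponentially growing dilations, as in the TCN setup.
blocks = nn.Sequential(*[TransformerModule(dilation=2 ** l) for l in range(11)])
```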
### Details of Neighborhood Attentions
Figure 2 illustrates the neighborhood attention mechanism at the end of a song. The bottom part of the figure demonstrates how the DiNA effectively and efficiently computes attention at the end of a song without requiring any zero padding. In the worst cases, with a large receptive field such as 82 seconds, conventional mechanisms would require 41 seconds of zero padding, which adds unnecessary computational complexity. The top part of the figure shows that the NA effectively creates the window only enclosing available instruments and time frames surrounding it2. These details make the proposed model different from the Beat Transformer, which has fixed sliding window and single frame instrumental attention.
Footnote 2: In practice, we use a kernel size of \(5\times 5\) with a zero padding on the instrumental dimension since NATTEN does not support non-square kernels.
### Model Configuration and Post-processing
The transformer architecture generally requires a large number of parameters. Inspired by the effectiveness of lightweight TCN models [19], we streamline the proposed transformer model. Specifically, we followed the overall configuration and pipeline from the TCN models for beat, downbeat, and tempo estimation [2, 3]. We adopt the same input spectrogram configurations and initial feature extractor setups, which consist of three convolutional and max pooling layers. Our proposed model also has a stack of 11 sequence modeling blocks (transformer modules in this work) and utilizes a dynamic Bayesian network (DBN) [20] for post-processing beats and downbeats. However, since the TCN model is designed solely for beat and downbeat tracking, we apply the peak-picking method from two other previous works [21, 10] for post-processing segment boundaries and functional labels. This method involves normalizing the probabilities of segment boundaries using sliding window averages and selecting the peaks with the highest normalized probability. Contrary to the previous works, we do not apply thresholding after normalization and opt for a window size of 24 seconds, as opposed to their 18-second window. In the sequence modeling block design, further adaptations from the TCN model are made: a kernel size of 5, a second kernel featuring doubled dilation, and an exponentially increasing dilation rate at \(2^{l}\), where \(l\) represents the block number.

Figure 1: (Left) An illustration of the proposed model architecture. (Right) A detailed representation of the transformer module, showcasing both the 1-dimensional (1D) Dilated Neighborhood Attention (DiNA) and the 2-dimensional (2D) Neighborhood Attention (NA) blocks. C denotes the embedding dimension.

Figure 2: An illustration of the attention windows (depicted by red and blue lines) in Neighborhood Attentions [18, 16] at the end of a song. Unlike conventional sliding window attention or convolution mechanisms, the windows are not centered around the attending (yellow; or query) values. Instead, they enclose the nearest possible neighbors (red and blue boxes), effectively eliminating the need for zero padding. The light grey boxes represent the dilations.
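The peak-picking step described above can be sketched as follows; this is our reading of the general recipe, not the exact procedure of [21, 10], and the 100 fps output rate and the minimum peak distance are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def pick_boundaries(prob, fps=100, window_sec=24.0):
    """Subtract a sliding-window average from the boundary probabilities
    (no thresholding afterwards), then keep local maxima as boundaries."""
    w = int(window_sec * fps)
    local_avg = np.convolve(prob, np.ones(w) / w, mode="same")
    peaks, _ = find_peaks(prob - local_avg, distance=w // 4)
    return peaks / fps  # boundary times in seconds

# Toy usage on random activations of a 3-minute track.
boundaries = pick_boundaries(np.random.rand(180 * 100))
```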
## 3 Experiments
### Experimental Setup
We primarily used the Harmonix Set [17] for the experiment. We conducted data cleaning and functional label merging following the previous work [10], as different versions of audio and annotations exist. The labels represent segment functions such as 'verse' and 'chorus'. Performance evaluation is carried out under 8-fold cross-validation, following the convention of beat and downbeat tracking [22, 3, 11, 5]. Among the 8 folds, 6 are designated for training, 1 for validation, and 1 for testing. Data augmentation and additional datasets are not utilized in this study.
### Evaluation Metrics
We assess performance using conventional metrics for each task. For beat and downbeat tracking, F-measure (F1) with a tolerance window of 70 ms, CMLt, and AMLt are utilized [23]. For segmentation, the F-measure of hit rate at 0.5 seconds (HR.5F) is employed, while the F-measure of pairwise frame-level clustering (PWF) and F-measure of normalized entropy score (Sf) are used for the evaluation of segment labeling [8].
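All of these scores are available in the `mir_eval` package; the calls below illustrate them on toy annotations of ours (the data is made up purely for the example).

```python
import numpy as np
import mir_eval

ref_beats = np.arange(0.5, 30.0, 0.5)   # toy reference beats, in seconds
est_beats = ref_beats + 0.02            # toy estimates, 20 ms late

f1 = mir_eval.beat.f_measure(ref_beats, est_beats, f_measure_threshold=0.07)
cmlc, cmlt, amlc, amlt = mir_eval.beat.continuity(ref_beats, est_beats)

ref_intervals = np.array([[0.0, 15.0], [15.0, 30.0]])
ref_labels = ["verse", "chorus"]
est_intervals = np.array([[0.0, 14.8], [14.8, 30.0]])
est_labels = ["A", "B"]

_, _, hr5f = mir_eval.segment.detection(ref_intervals, est_intervals, window=0.5)
_, _, pwf = mir_eval.segment.pairwise(ref_intervals, ref_labels,
                                      est_intervals, est_labels)
s_over, s_under, sf = mir_eval.segment.nce(ref_intervals, ref_labels,
                                           est_intervals, est_labels)
```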
### Implementation Details
We reproduced the TCN model ourselves, following the original training strategies and hyperparameters [3]. However, for the larger variant (TCN-Large), we perform a grid search to determine the optimal regularization hyperparameters. TCN-Large is a variant designed to offer a fair comparison with the proposed model, as it has a similar number of parameters (301 K), obtained by increasing the channel dimensionality. We use PyTorch 2.0 for implementation. Hybrid Transformer Demucs [24] handles source separation, while NATTEN3 implements NA and DiNA. madmom [25] is utilized for spectrogram extraction and the DBN implementation. The batch size is set to 1, and spectrograms longer than 5 minutes are randomly chunked into 5-minute segments due to the GPU memory limit. Optimization is performed using RAdam [26] with a learning rate of 0.005 and Stochastic Weight Averaging (SWA) [27] with a learning rate of 0.15. When the validation loss plateaus, the learning rate is decayed by a factor of 0.3. A weight decay of 0.00025 is applied. Early stopping is triggered when the validation loss fails to decrease for 30 epochs. Dropouts with rates of 0.2, 0.2, 0.2, and 0.1 are applied to convolutions, the MLP, attention probabilities, and skip connections, respectively. The Exponential Linear Unit is used for convolutions and the Gaussian Error Linear Unit for transformers. On average, early stopping is activated after 5 hours of training on a single fold with an RTX 2080 Ti 11 GB.
Footnote 3: [https://github.com/SHI-Labs/NATTEN](https://github.com/SHI-Labs/NATTEN)
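For reference, the stated optimization setup maps onto standard PyTorch utilities roughly as below; this is a sketch with a stand-in model, and how the SWA averaging interacts with the plateau-based decay in the actual training loop is our simplification.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR

model = torch.nn.Linear(8, 2)  # stand-in for the full network
optimizer = torch.optim.RAdam(model.parameters(), lr=0.005,
                              weight_decay=0.00025)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.3)
swa_model = AveragedModel(model)             # keeps the averaged weights
swa_scheduler = SWALR(optimizer, swa_lr=0.15)
```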
## 4 Results and Discussion
### Ablation Study
To investigate the contributions of various components to the overall performance gain, we conduct an ablation study by discarding a component or changing training setups, as shown in Figure 3. The ablation study is grouped into modifications of multi-task, block, and input settings. They are colored in green, blue, and red, respectively. All performance results are averages of 8-fold cross-validation results, and a single metric for each task is used as a representative metric: F1, F1, HR.5F, and PWF, for beat, downbeat, segment, and label, respectively.

Figure 3: Ablation study performance results. Performance differences relative to the proposed model (dotted line) are indicated in parentheses.
_Multi-task settings_ are cases where specific losses are discarded. For example, "w/o beat & downbeat" does not include the beat and downbeat tracking tasks and focuses only on segmentation and structure labeling, resulting in the absence of performance metrics for beat and downbeat tracking (shown as "N/A"). The downbeat tracking performance significantly decreases without the segmentation and structure labeling tasks, as shown in Figure 3 (b), but by inspecting the training and validation loss curves we found that this is due to overfitting. Nonetheless, it seems that the three tasks (beat/downbeat tracking and segmentation) benefit from the joint learning. Beat and downbeat tracking performances decrease without the segmentation task, and vice versa. However, they are not influenced by the joint learning with structure labeling. Furthermore, when considering the downbeat tracking and structure labeling performances, their performances become higher without each other. This may be because the nature of structure labeling is more akin to long-term timbre classification than to instant event detection such as the three other tasks. Nevertheless, segmentation performance drops without learning the structure labeling, because label changes can provide strong cues for finding segment boundaries.
_Block settings_ refer to training models with the omission of one component in the transformer module at a time. The "w/o second DiNA" setting lacks the second 1D Dilated Neighborhood Attention (NA) with doubled dilation (as depicted on the right side of Figure 1). In the "w/o inst. attention" setting, the model uses 1D NA instead of 2D NA, which means the models do not have any instrument-wise attention. Models labeled as "w/o dilation" do not have any dilations. The results from these experiments demonstrate that performance decreases when any of the components in the block is missing, highlighting the importance of each component in the transformer module for achieving optimal performance.
_Input settings_ refer to different settings of input channels and length. "w/o demix" indicates models without demixed inputs, which drastically decreases performance in all four tasks. Lastly, we provide performance metrics for models trained with shorter segment lengths and higher batch sizes. For example, "1-minute segment" indicates models trained with randomly chunked 1-minute spectrograms. To take advantage of shorter chunks, we set the batch sizes to 5, 3, 1, and 1 for 1, 2, 3, and 4-minute segments, respectively, which are the maximum numbers that can be loaded onto the GPU. While it is known that a large batch size leads to better generalization, it is impossible to achieve one if the sequence length is too long, due to the limited memory of the GPU. However, the results show that training with longer segments and a batch size of 1 yields better generalization, especially in segmentation and structure labeling. This is probably because the long-term nature of segments and structure labels requires a large context to predict.
### Comparison with Previous Work
Table 1 provides a summary of the performance of current state-of-the-art models in beat/downbeat tracking, segmentation, and structure labeling. Meanwhile, the bottom two sections display the performance of the TCNs and the proposed models. The TCNs are evaluated with and without demixed inputs. The proposed model (All-In-One) outperforms the TCNs models. Additionally, we can see that demixing is effective for the TCNs as well except for beat tracking. Notably, the proposed model achieves state-of-the-art performances in all four tasks while maintaining a relatively small number of parameters compared to other models. We also report the performances of a smaller version of the proposed model, which only has 46 K parameters. This smaller model consists of nine stacks of the transformer module, a kernel size of three, an embedding size of 16, and exponentially growing dilations with a factor of three. Remarkably, even with the small number of parameters, the proposed model already achieves state-of-the-art performances in the segmentation and structure labeling tasks.
## 5 Conclusions
We introduced a novel approach that learns multiple levels of hierarchical music structure, including beats, downbeats, segment boundaries, and functional labels. To construct the model, we employed demixed sources as inputs and adopted neighborhood attentions for effective modeling of temporal and instrumental dependencies. As a result, the proposed model achieved state-of-the-art performance in all four tasks. Furthermore, our ablation study reveals the potential benefits of joint training for beats, downbeats, and segments, while structure labels may not derive the same advantage. We hypothesize that the reason for this observation is that structure labeling focuses on long-term timbre texture, whereas the other three tasks involve short-term event detection.
| Model | # of Params | Beat F1 | Beat CMLt | Beat AMLt | Downbeat F1 | Downbeat CMLt | Downbeat AMLt | Segment HR.5F | Label PWF | Label Sf |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SpecTNT-TCN [11]* | 4.7 M | .953 | **.939** | .959 | .908 | .872 | .928 | – | – | – |
| Beat Transformer [5]* | 9.3 M | .954 | .905 | .957 | .898 | .863 | .919 | – | – | – |
| DSF+Scluster [9] | N/A | – | – | – | – | – | – | .497 | .684 | .743 |
| SpecTNT [10]* | N/A | – | – | – | – | – | – | .558 | .712 | .724 |
| TCN w/o demix [3]† | 74 K | .954 | .900 | .961 | .886 | .842 | .920 | .594 | .687 | .694 |
| TCN† | 93 K | .946 | .898 | .950 | .894 | .850 | .919 | .619 | .715 | .738 |
| TCN-Large† | 301 K | .953 | .906 | .960 | .901 | .853 | .924 | .626 | .717 | .746 |
| All-In-One-Small (Ours) | 46 K | .943 | .891 | .952 | .901 | .854 | .929 | .616 | .713 | .745 |
| All-In-One (Ours) | 300 K | **.958** | .913 | **.964** | **.915** | **.873** | **.932** | **.660** | **.738** | **.769** |

Table 1: Comparison of performance metrics between previous works and the proposed models on the Harmonix Set. † denotes previous works reproduced by us, whose numbers of parameters were also calculated by us. * indicates the use of data augmentation and extra datasets.
2310.20558 | Breaking the Token Barrier: Chunking and Convolution for Efficient Long
Text Classification with BERT | Transformer-based models, specifically BERT, have propelled research in
various NLP tasks. However, these models are limited to a maximum token limit
of 512 tokens. Consequently, this makes it non-trivial to apply it in a
practical setting with long input. Various complex methods have claimed to
overcome this limit, but recent research questions the efficacy of these models
across different classification tasks. These complex architectures evaluated on
carefully curated long datasets perform at par or worse than simple baselines.
In this work, we propose a relatively simple extension to vanilla BERT
architecture called ChunkBERT that allows finetuning of any pretrained models
to perform inference on arbitrarily long text. The proposed method is based on
chunking token representations and CNN layers, making it compatible with any
pre-trained BERT. We evaluate chunkBERT exclusively on a benchmark for
comparing long-text classification models across a variety of tasks (including
binary classification, multi-class classification, and multi-label
classification). A BERT model finetuned using the ChunkBERT method performs
consistently across long samples in the benchmark while utilizing only a
fraction (6.25\%) of the original memory footprint. These findings suggest that
efficient finetuning and inference can be achieved through simple modifications
to pre-trained BERT models. | Aman Jaiswal, Evangelos Milios | 2023-10-31T15:41:08Z | http://arxiv.org/abs/2310.20558v1 | Breaking the Token Barrier: Chunking and Convolution for Efficient Long Text Classification with BERT
###### Abstract
Transformer-based models, specifically BERT, have propelled research in various NLP tasks. However, these models are limited to a maximum token limit of 512 tokens. Consequently, this makes it non-trivial to apply it in a practical setting with long input. Various complex methods have claimed to overcome this limit, but recent research questions the efficacy of these models across different classification tasks. These complex architectures evaluated on carefully curated long datasets perform at par or worse than simple baselines. In this work, we propose a relatively simple extension to vanilla BERT architecture called ChunkBERT that allows finetuning of any pretrained models to perform inference on arbitrarily long text. The proposed method is based on chunking token representations and CNN layers, making it compatible with any pre-trained BERT. We evaluate chunkBERT exclusively on a benchmark for comparing long-text classification models across a variety of tasks (including binary classification, multi-class classification, and multi-label classification). A BERT model finetuned using the ChunkBERT method performs consistently across long samples in the benchmark while utilizing only a fraction (6.25%) of the original memory footprint. These findings suggest that efficient finetuning and inference can be achieved through simple modifications to pre-trained BERT models.
## 1 Introduction
Transformers [17] and their derivatives, e.g., BERT [6] and RoBERTa [12], have achieved significant improvements in various NLP tasks, including text classification. These improvements have been largely credited to the self-attention module of the architecture. Self-attention, also known as intra-attention, is a mechanism used in the BERT model (and other Transformer models) to allow the model to attend to different parts of the input text at the same time, rather than processing the text sequentially. This contrasts with previous models such as recurrent neural networks (RNNs [8]), which process input sequentially, one word at a time. This allows the model to capture long-range dependencies in the input text more effectively.
The self-attention mechanism in BERT has a fixed-size window, which means it can only attend to a limited number of words at any given time. The exact size of this window varies depending on the specific architecture of the BERT model, but in many cases it is limited to 512 tokens (i.e., words or sub-word units) in the input. This means that BERT is not able to process inputs that are longer than 512 tokens. There are several reasons why the self-attention mechanism in BERT is limited to 512 tokens in the input. One reason is computational efficiency. The cost of the self-attention mechanism grows quadratically with the input length, which can be computationally intensive, and limiting the number of tokens in the input allows BERT to process the input more efficiently. By limiting the input to 512 tokens, BERT is able to focus on the most relevant words in the input and generate high-quality contextualized representations of those words.
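To make the quadratic growth concrete, the attention-score matrix alone for a single head in float32 scales as follows; this is back-of-the-envelope arithmetic, not a measurement of any particular implementation.

```python
# n_tokens x n_tokens attention scores, 4 bytes each (float32), one head.
for n in (512, 1024, 2048, 4096):
    print(f"{n:5d} tokens -> {n * n * 4 / 2**20:6.1f} MiB")
# 512 tokens need 1.0 MiB while 4096 tokens need 64.0 MiB:
# 8x the tokens costs 64x the memory for this matrix alone.
```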
high-quality contextualized representations of those words. The limitation of BERT to 512 tokens in the input is a trade-off that allows the model to achieve good performance on a wide range of natural language processing tasks while also being computationally efficient. In practice, it is non-trivial to apply BERT to real datasets which are much longer than the allowed limit of various off-the-shelf pre-trained models [18]. This presents a challenge in terms of real-life application and feasibility. Different approaches have claimed to overcome this input limit but recent research from [16] shows that when compared against the same benchmark these models have performed marginally better or even fail to outperform simple baselines (that truncates longer text) on different datasets. We propose ChunkBERT, a finetuning extension to vanilla BERT that can be used to finetune any pre-trained BERT on longer text. The chunkBERT finetuning strategy has the following advantages:
* It can adapt any off-the-shelf pre-trained BERT beyond the limit imposed by architecture.
* It allows fine-tuning on a standard GPU and the ability to scale down memory requirements if needed. In our setting, we reduce memory by a factor of 16, i.e., 6.25% of the original memory.
* It scales linearly and can infer on inputs of arbitrary length, without the need for custom CUDA kernels.
* It is more systematic than random text truncation.
## 2 Related Work
There are a few different ways to extend BERT to handle input sequences longer than 512 tokens. One way is to use a variant of BERT that has a larger self-attention window (e.g., XLNet [21]). This entails pretraining a model from scratch on in-domain data and finetuning it on the downstream task, requiring far more compute than a simple finetuning task. Another way to extend BERT to longer input sequences is to use a "hierarchical" approach as in [20], where the input sequence is divided into multiple shorter sequences and each of these sequences is processed separately by BERT. A further approach is to use BERT as part of a larger model designed to handle sequential inputs [5]. Generally, four main approaches can be broadly identified for handling longer sequences in the transformer architecture. The most straightforward is to simply truncate longer inputs (to 512 tokens) to fit within the allowed input limit of the architecture; this is the simplest and most popular approach. Its disadvantage is that information important for the task may be truncated away. A better approach can be to find the important parts of the longer input sequence and truncate the least important parts; this is called key-sentence selection. [7] explores this method by training two models in tandem, where the first model recognizes the important parts of the longer input and the second model performs inference over the selected shorter input. Other approaches (e.g., Bigbird [22], Longformer [2]) focus on efficient versions of the self-attention module, which employ sparse self-attention and save memory by not attending to every token in the input. The hierarchical processing of longer documents [15] explores dividing longer inputs into shorter chunks and combining the chunk representations using an additional transformer attention layer.
More recent work [16] highlights the lack of a long-text classification benchmark and compares the above approaches on a unified benchmark across different classification tasks and datasets. The long-text benchmark [16] proposes strong baselines that often outperform the state of the art in terms of classification performance. These baselines use the key-sentence selection strategy, where a portion of the long input is selected randomly or by using a text ranking algorithm.
The success of simple baselines against complex models motivated our shift in perspective: we investigate finetuning strategies rather than compute-efficient attention mechanisms. This work is parallel to efficient attention mechanisms and complements any global attention approximation (e.g., Nystromformer [19]). Our work focuses on taking advantage of the transfer learning paradigm (pretraining-finetuning) and reusing pretrained models for finetuning on longer text.
## 3 ChunkBERT
We propose a simple chunking methodology to consume long inputs, where each chunk is processed independently and the chunk representations are concatenated for processing by a TextCNN classification module. The complete architecture is optimized end-to-end and learns to use the chunk representations to perform well on the downstream task. The finetuning can be divided into four steps:
* Dynamic padding of batches to support splitting of batched inputs into equal-sized chunks.
* Splitting the long input into shorter input chunks and independent processing of each chunk.
* Concatenation of chunk representation to obtain the complete input embedding.
* Classification using the TextCNN module.
We achieve this using the token representations induced by the BERT model, chunking the input until it has been observed in full. Specifically, the long input of size \((T)\) is split into digestible chunks of size \((C)\), giving \((\frac{T}{C})\) or \((n)\) chunks of input tokens. BERT processes each input chunk independently to induce token representations of size \((C\times 768)\). The token representations from each split \((n\times(C\times 768))\) are concatenated to obtain the final token-level representations of the complete input \((T\times 768)\). The BERT input chunking methodology is illustrated in Figure 1.
Finally, we process the concatenated embeddings obtained by BERT input chunking using a TextCNN module, as described in [10]. The whole pipeline of BERT chunking and the CNN layer is trained end-to-end using cross-entropy loss. This allows us to process arbitrarily long sequences with the constant number of parameters of the CNN layer.
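As an illustration, the following is a minimal PyTorch sketch of the sequential chunking step. It assumes a Hugging Face-style `BertModel` whose output exposes `hidden_states` and inputs that have already been dynamically padded to a multiple of the chunk size; the function name and structure are ours, not the released implementation.

```python
import torch

def chunked_token_embeddings(bert, input_ids, attention_mask, chunk_size=128):
    """Encode a long, already-padded input chunk by chunk and concatenate
    the per-chunk token representations into one (B, T, 768) matrix."""
    chunk_reprs = []
    for start in range(0, input_ids.size(1), chunk_size):
        ids = input_ids[:, start:start + chunk_size]
        mask = attention_mask[:, start:start + chunk_size]
        out = bert(input_ids=ids, attention_mask=mask,
                   output_hidden_states=True)
        # Average the hidden states of the last five layers (cf. Figure 1).
        reps = torch.stack(out.hidden_states[-5:], dim=0).mean(dim=0)
        chunk_reprs.append(reps)             # each chunk: (B, C, 768)
    return torch.cat(chunk_reprs, dim=1)     # complete input: (B, T, 768)
```

Each chunk sees only its own local context, so the loop trades compute time for a peak attention memory that depends on \(C\) rather than \(T\).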
### Chunk Attention
The chunking mechanism is more efficient than global attention, as it only computes local attention weights and routes inter-token information within each chunk independently. This makes the space complexity quadratic in the chunk size \((C)\) rather than the input length \((T)\), and linear in the number of chunks \((n)\). This means that we can increase the number of chunks, for a linear growth in space requirements, while consuming longer inputs. The chunk attention mask is illustrated for a complete input in Figure 2. It highlights the need for routing inter-chunk information: in classification tasks, it can be important for information in the first chunk to be routed to the last chunk in the input. We route this inter-chunk information using the TextCNN aggregation module described below.
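As a concrete check of these memory claims (our arithmetic, using the settings \(C=128\) and \(T=4096\) from Section 4.3, hence \(n=32\)):

\[\frac{C^{2}}{512^{2}}=\frac{128^{2}}{512^{2}}=\frac{1}{16}=6.25\%,\qquad nC^{2}=32\cdot 128^{2}\approx 5.2\times 10^{5}\quad\text{vs.}\quad T^{2}=4096^{2}\approx 1.7\times 10^{7}.\]

That is, each chunk's attention map costs 6.25% of a full 512-token window, and the total attention cost grows linearly in \(n\) instead of quadratically in \(T\).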
Figure 1: Overview of BERT input chunking. First, the complete input \((T)\) is tokenized using the BERT tokenizer and split into chunks of size \((C)\), e.g., 128 tokens each. The input chunks are independently processed by BERT, producing token embeddings for each chunk of size \((C\times 768)\). Here, we average the token embeddings of the last five layers of the BERT model to obtain token representations. Once every chunk is processed, the token representations are concatenated over the token dimension to produce the complete input matrix of size \((T\times 768)\). The input matrix is used by the TextCNN classification module.
Figure 2: Chunk attention mask: illustration of chunk attention for the complete input \(T\). The space complexity becomes \(O(nC^{2})\), where \(n\) is the number of chunks and \(C\) is the chunk size.
### Vectorized Chunking
The process of inducing input chunk representations may be carried out sequentially, with one forward pass per chunk. The sequential method is suitable when memory is limited, allowing finetuning and inference in low-resource settings. Since each chunk only looks at the local context of its chunk window, the process can instead be vectorized by shifting the chunks into the batch dimension and obtaining the complete token representation in one forward pass. The chunk outputs are shifted back into the token dimension before being processed by the TextCNN module. This process is facilitated by dynamic padding of the inputs to the next multiple of the chunk size \((C)\). Vectorized chunking allows for even faster finetuning and inference when resources are available. The pseudo-code for vectorized chunking is shown in Appendix B.
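Since the paper's pseudo-code lives in Appendix B (not reproduced here), the following is our reconstruction of the vectorized variant under the same assumptions as the earlier sketch:

```python
import torch

def vectorized_chunk_embeddings(bert, input_ids, attention_mask, chunk_size=128):
    """One-forward-pass variant: shift the chunks into the batch dimension,
    encode them, then shift the outputs back into the token dimension."""
    B, T = input_ids.shape                        # T already padded to n * C
    n = T // chunk_size
    ids = input_ids.reshape(B * n, chunk_size)    # (B, T) -> (B*n, C)
    mask = attention_mask.reshape(B * n, chunk_size)
    out = bert(input_ids=ids, attention_mask=mask, output_hidden_states=True)
    reps = torch.stack(out.hidden_states[-5:], dim=0).mean(dim=0)  # (B*n, C, 768)
    return reps.reshape(B, T, reps.size(-1))      # back to (B, T, 768)
```

Because chunks attend only within their own window, the reshape changes nothing about the computed representations; it merely exploits batch parallelism.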
### TextCNN Aggregation
The concatenated token representations \((T\times 768)\) can be used as input for another module that combines information from each chunk to give an aggregated comprehension of the input. We use a TextCNN module for this aggregation task, as it can process concatenated token representations of any length \((T)\) with the fixed number of parameters it learned during the finetuning stage. TextCNN is a type of convolutional neural network (CNN) designed for natural language processing tasks such as text classification and sentiment analysis. [10] proposed convolutional neural networks for text processing, using pretrained word vectors as input. The TextCNN model uses multiple filters of different sizes to process the input text and extract features, which are then fed into a fully connected layer to make predictions. The filters are applied to the input text in a sliding-window fashion, allowing the model to capture local dependencies within the text. The TextCNN model used in ChunkBERT is illustrated in Figure 3.
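A compact PyTorch sketch of this aggregation head, with the settings quoted in Figure 3 (100 filters for each of the window sizes 3, 4, and 5, max-pooling over time, and a linear output layer); the class and argument names are ours:

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """TextCNN head: 100 filters per window size in {3, 4, 5} slide over the
    (T x 768) token matrix; max-pooling yields a fixed 300-d feature vector
    regardless of the input length T."""
    def __init__(self, emb_dim=768, n_filters=100, windows=(3, 4, 5), n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w) for w in windows)
        self.fc = nn.Linear(n_filters * len(windows), n_classes)

    def forward(self, x):                         # x: (B, T, emb_dim)
        x = x.transpose(1, 2)                     # Conv1d expects (B, emb_dim, T)
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1))   # logits: (B, n_classes)
```

The max-pooling step is what decouples the classifier's parameter count from \(T\), which is why the same finetuned head can be reused on arbitrarily long inputs.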
## 4 Evaluation
### Benchmark
We used the long text benchmark proposed by [16], which provides a comprehensive comparison of existing models for long document classification across a range of unified datasets and baselines. The
Figure 3: TextCNN architecture from [10], We use 100 filters of window sizes 3, 4, and 5 each which slide over the input \((T\times 768)\) with stride 1. This is followed by a max-pooling operation to obtain the feature vector that is projected to the classification layer. _Note:_ The illustration only shows one filter for each window. The final feature vector is of size 300, obtained by concatenating 100 filters for each window.
benchmark covers different models representing various strategies for modeling long-text classification with BERT. It includes datasets spanning binary classification, multi-class classification, and multi-label classification. Table 2 presents the statistics of the datasets along with their task types.
### Datasets
We briefly describe the datasets used in the benchmark, including the artificially modified datasets that increase sample length, in Table 1. Since not all samples in a dataset exceed the context length of 512 tokens, it is essential to highlight the relative percentage and the absolute number of samples longer than the context length, as shown in Table 2.
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline \hline Dataset & Description \\ \hline Hyperpartisan & A binary classification dataset introduced in SemEval-2019 [9]. Each news article can either be ’hyperpartisan’ or ’not hyperpartisan.’ The term hyperpartisan refers to an extreme and uncompromising bias toward a particular political party or ideology. \\ \hline
20NewsGroups & A popular multi-class dataset [11], where each news item can belong to one of the 20 news categories. \\ \hline Eurlex-57K & A multi-label classification dataset with 4.3K EUROVOC labels based on the European Union’s legislation documents, originally proposed for zero-shot or few-shot learning [4]. It consists of three sections, with the first two sections’ headers and recitals being the most information-dense. This dataset has the maximum number of long samples in the benchmark, with 3078 samples. \\ \hline Inverted-Eurlex & A simulated version of the EURLEX-57K dataset [4], where the salient sections (headers, recitals) are shifted toward the end of the document after the main body. This approach forces machine learning models to observe and retain information until the end of the document to make correct predictions. \\ \hline CMU Book Summary & A multi-label dataset [1] extracted from Wikipedia, which contains book summaries and their metadata, such as author and genres from Freebase. The genres are used as labels. \\ \hline Paired Book summary & A simulated dataset made from the CMU book summary [1], where two summaries are joined to create a longer text with two independent information blocks. This requires machine learning models to predict the correct genres for both summaries. This dataset has the longest text, with a mean of 1147 tokens. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset descriptions in the long text benchmark [16]
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Dataset & Task & \# Train & \# Dev & \# Test & \# Labels & \# BERT Tokens & \# Long (\%) \\ \hline Hyperpartisan & B & 516 & 64 & 64 & 2 & 744 \(\pm\) 677 & 34 (53\%) \\
20NewsGroup & MC & 10,182 & 1,132 & 7,532 & 20 & 368 \(\pm\) 783 & 1107 (15\%) \\ \hline EURLEX-57K / -Inverted & ML & 45,000 & 6,000 & 6,000 & 4271 & 707 \(\pm\) 538 & 3078 (51\%) \\ \hline Book Summary / -Paired & ML & 10,230 & 1,279 & 1,279 & 227 & 574 \(\pm\) 659 & 495 (39\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset statistics and task types. B, MC, and ML refer to binary, multi-class, and multi-label respectively. #BERT tokens are the average token counts after tokenization using a BERT-base tokenizer. # Long refers to the number of long samples in the test-set. Note: The dataset statistics are reproduced from [16]
#### 4.2.1 Baseline models
The benchmark study [16] proposed simple yet effective baseline models that outperformed complex architectures. We briefly describe these baseline models, using our own illustrations in Figure 4. The first baseline model is BERT+TextRank, which uses an off-the-shelf implementation [14] of an unsupervised sentence ranking algorithm [13] to extract key sentences from the input text. The [CLS] representations of the first 512 tokens and of the key sentences (up to 510 tokens) are concatenated and projected to the output layer. This requires two forward passes of the BERT model. The second baseline model is BERT+Random, which is similar to the first but selects 512 random tokens from the input text instead of key sentences.
The benchmark study also includes three additional models that cover different strategies for modeling long texts. The first model is the Longformer [2] model, which uses efficient sparse attention to model long-range dependencies. The second model is ToBERT [15], which stands for transformer over BERT. It splits the input into overlapping segments and applies a 2-layer transformer over their representation. The third model is CogLTX [7], which uses key sentence selection to train two models in tandem, one for selecting relevant input sentences and the other for classification using the selected input.
### Experimental Setup
During training, we limit the maximum number of input tokens to allow for efficient training. Even though ChunkBERT allows arbitrary input lengths for both training and testing, we set this limit to 4096 tokens, which is more than sufficient for the inputs in the considered datasets. There are two reasons for this: first, to limit the memory requirement during training, and second, to match the input limit of the Longformer model. The chunk size \((C)\) is set to 128 tokens, reducing the memory consumption to 6.25% of the original BERT with a 512-token context length; we found that a chunk size of 128 tokens provides a good tradeoff between training efficiency and performance. The TextCNN module and the underlying BERT model share the same ADAM optimizer and learning rate of 3e-05, with a batch size of 8. The model is trained for 20 epochs using early stopping on the development set and evaluated on the test set, following [16]. We report the average of the evaluation metrics over 5 runs. We utilize and modify the original codebase1 for our experiments.
Footnote 1: [https://github.com/amazon-science/efficient-longdoc-classification](https://github.com/amazon-science/efficient-longdoc-classification)
### Results
The evaluation metrics for all models, except ChunkBERT, are reported directly from [16]. Table 3 displays the experimental results on the test set containing only long samples (i.e., samples with more
Figure 4: Baselines: 1) BERT+TextRank concatenates the CLS token embedding of the first 512 tokens with another 512 tokens obtained by a text ranking algorithm [13]. This concatenated \(2\times 768\) embedding is used for the output layer. 2) BERT+Random augments the first 512 tokens with random 512 tokens. _Note_: The baselines are originally described in [16].
than 512 tokens). The table shows that ChunkBERT and Longformer exhibit similar performance on the Hyperpartisan dataset. The baseline models show the best performance on three of the six datasets. Longformer's performance varies across datasets, while ChunkBERT consistently performs well across all datasets, albeit slightly behind the best-performing models. The ChunkBERT model performs competitively with a chunk size of only 128 tokens, i.e., with a much smaller memory footprint. On average across all datasets, ChunkBERT emerges as the best-performing model. It performs 5.7% better than Longformer on average, and about 18% better on complicated datasets like EURLEX and Inverted-EURLEX, which have 4271 classes, many of them with zero samples. Similar trends are observed in the experimental results on the complete test set, as described in Appendix A. The original baseline models continue to display strongly competitive performance, highlighting the need to further study the benchmark datasets.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline Model & Hyper- & 20News- & EURLEX & Inverted- & Book- & Paired- & Average \\ & partisan & Group & & EURLEX & Summary & Summary & \\ \hline BERT & 88.00 & 86.09 & 66.76 & 62.88 & 60.56 & 52.23 & 69.42 \\ BERT+TextRank & 85.63 & 85.55 & 66.56 & 64.22 & 61.76 & 56.24 & 69.99 \\ BERT+Random & 83.5 & 86.18 & **67.03** & **64.31** & **62.34** & 56.77 & 70.02 \\ \hline LongFormer & **93.17** & 85.5 & 44.66 & 47.00 & 59.66 & **58.85** & 64.81 \\ ToBERT & 86.5 & NA & 61.85 & 59.50 & 61.38 & 58.17 & 65.48\({}^{*}\) \\ CogLTX & 91.91 & 86.07 & 61.95 & 63.00 & 60.71 & 55.74 & 69.9 \\ \hline
**ChunkBERT** & 93.00 & **86.62** & 64.94 & 62.94 & 57.80 & 57.73 & **70.51** \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation metrics on long documents (any document above 512 tokens) in the test set for all datasets. The average accuracy (%) over five runs is reported for Hyperpartisan and 20NewsGroup, while the average micro-F1 is reported for the other datasets. The highest value per column is in bold. ToBERT on 20NewsGroups, which requires further preprocessing, was omitted in the original table. The _Average_ column describes the average across all datasets. \({}^{*}\) ToBERT average performance does not include 20NewsGroups.
Figure 5: A heatmap displaying the performance metric difference (\(\Delta\)) between the BERT model and the other models over the long documents (\(>512\) tokens) in the test set. The x-axis is sorted by the percentage (%) of long documents in the dataset. It provides a visual comparison of performance gain or loss relative to BERT.
## 5 Discussion
In this section, we discuss the results in terms of the location of discriminatory information in the dataset and the impact of different long-text modeling strategies on classification performance compared to simple BERT with truncation. This can be achieved by inspecting the performance difference (\(\Delta\)) between BERT and other models, as shown in Figure 5. A positive delta suggests that there is significant discriminative information located only after the first 512 tokens, and the model is capable of utilizing this information effectively. On the other hand, a negative delta indicates that the model was unable to leverage the information even within the first 512 tokens, possibly due to the chosen modeling strategy.
In simple datasets, the salient information for correct classification is located within the first 512 tokens. For example, in the 20NewsGroup dataset, all models perform similarly, as the news category can be resolved from the first few sentences. Conversely, in the paired book summary dataset, all models perform better than simple BERT, as they must consider the complete input to correctly classify the genres for both summaries in the document. Introducing unrelated sentences without full context may add noise and degrade performance, as observed in the Hyperpartisan dataset, where BERT outperforms both the BERT+TextRank and BERT+Random alternatives.
Complex datasets may require the complete interaction of tokens to accurately classify the documents. This is evident with the EURLEX dataset and its inverted variant, where the sparse attention of Longformer has a notable negative impact on performance compared to BERT. However, other modeling strategies are less affected, with ChunkBERT exhibiting the least degradation in performance. The Eurlex documents have key information concentrated at the beginning, which BERT leverages more effectively than complex models using full self-attention. Therefore, incorporating additional information to BERT using methods like TextRank or Random does not necessarily lead to significant performance improvements. However, in the inverted-Eurlex dataset, where the key information is shifted towards the end of the document, these variants do show improvements over BERT.
In general, the ChunkBERT finetuning method performs well in cases where salient information lies beyond the context limit, while maintaining reasonable performance on complex classification tasks. Its performance indicates the potential benefits of chunk-based approaches for capturing important information beyond the context length.
## 6 Limitation
ChunkBERT assumes that a longer context benefits the downstream task, although this assumption may not hold universally: some tasks benefit from understanding longer contexts, while others are less sensitive to longer inputs. In our experiments, we use a fixed chunk size of 128 tokens. However, the optimal chunk size may vary with the task and the dataset, so finding the right balance in chunking granularity remains an important consideration and a potential area for future exploration within the ChunkBERT approach. During training, ChunkBERT has a maximum input limit of 4096 tokens. While this is sufficient for the datasets in this study, it imposes a fixed input length during training. Recent research [3] suggests that learning to extrapolate to longer inputs at test time can be achieved with up to seven chunks. The ChunkBERT methodology can be used with a different chunk size at test time, for example, training with a chunk size of 128 tokens and performing inference with a full chunk size of 512 tokens. However, our preliminary experiments suggest that increasing the chunk size at test time may degrade performance; further work is required to stabilize chunk-size extrapolation at test time.
## 7 Conclusion
In this work, we propose a chunking finetuning approach called ChunkBERT for pretrained BERT models, which allows them to process inputs beyond 512 tokens, up to arbitrary lengths. The method is flexible and can be applied to any pretrained BERT model on a standard GPU. It divides long inputs into shorter chunks and induces chunk representations using BERT; these are further processed by a TextCNN module, so that memory grows linearly in the number of chunks. The proposed method is evaluated on a long-text benchmark comprising various long document classification tasks. ChunkBERT performs 5.7% better than Longformer on average across the datasets and 18% better on complicated datasets. The previous baselines display strong performance, but ChunkBERT still outperforms them marginally when evaluated only on the long samples from the test set, while using only a fraction (6.25%) of the original memory. Our findings suggest that ChunkBERT strikes a good balance, given that the best long-sequence modeling strategy depends on the dataset's complexity, its information density, and the location of the salient information required for the task.
## 8 Future Work
There are multiple future avenues for this work, including studying the effect of the chunk size on downstream task performance. In this work, we only explore certain window sizes for the TextCNN module; future work may involve comprehensive hyperparameter tuning of the TextCNN module and of chunking parameters such as the maximum number of allowed chunks. Here, we only explore TextCNN as an aggregation layer for routing inter-chunk information; attention-based aggregation layers or grouped permutations of inputs for mixing chunk information could also be explored. The choice of modeling strategy for long texts should consider the dataset's characteristics, the location and density of discriminative information, and the trade-offs between dense attention and context length. The experimental findings provide valuable insights into the performance variations and highlight the need for further research on benchmark datasets to enhance our understanding of long-text modeling.
## Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR). This research was enabled in part by support provided by ACENET (ace-net.ca) and the Digital Research Alliance of Canada (alliance-can.ca). We thank the reviewers for their helpful comments on the draft. We thank Chandramouli Shama Sastry for implementing vectorized chunking and exciting research discussions about future directions.
|
2309.05693 | Near-Term Distributed Quantum Computation using Mean-Field Corrections
and Auxiliary Qubits | Distributed quantum computation is often proposed to increase the scalability
of quantum hardware, as it reduces cooperative noise and requisite connectivity
by sharing quantum information between distant quantum devices. However, such
exchange of quantum information itself poses unique engineering challenges,
requiring high gate fidelity and costly non-local operations. To mitigate this,
we propose near-term distributed quantum computing, focusing on approximate
approaches that involve limited information transfer and conservative
entanglement production. We first devise an approximate distributed computing
scheme for the time evolution of quantum systems split across any combination
of classical and quantum devices. Our procedure harnesses mean-field
corrections and auxiliary qubits to link two or more devices classically,
optimally encoding the auxiliary qubits to both minimize short-time evolution
error and extend the approximate scheme's performance to longer evolution
times. We then expand the scheme to include limited quantum information
transfer through selective qubit shuffling or teleportation, broadening our
method's applicability and boosting its performance. Finally, we build upon
these concepts to produce an approximate circuit-cutting technique for the
fragmented pre-training of variational quantum algorithms. To characterize our
technique, we introduce a non-linear perturbation theory that discerns the
critical role of our mean-field corrections in optimization and may be suitable
for analyzing other non-linear quantum techniques. This fragmented pre-training
is remarkably successful, reducing algorithmic error by orders of magnitude
while requiring fewer iterations. | Abigail McClain Gomez, Taylor L. Patti, Anima Anandkumar, Susanne F. Yelin | 2023-09-11T18:00:00Z | http://arxiv.org/abs/2309.05693v1 | # Near-Term Distributed Quantum Computation using Mean-Field Corrections and Auxiliary Qubits
###### Abstract
Distributed quantum computation is often proposed to increase the scalability of quantum hardware, as it reduces cooperative noise and requisite connectivity by sharing quantum information between distant quantum devices. However, such exchange of quantum information itself poses unique engineering challenges, requiring high gate fidelity and costly non-local operations. To mitigate this, we propose near-term distributed quantum computing, focusing on approximate approaches that involve limited information transfer and conservative entanglement production. We first devise an approximate distributed computing scheme for the time evolution of quantum systems split across any combination of classical and quantum devices. Our procedure harnesses mean-field corrections and auxiliary qubits to link two or more devices classically, optimally encoding the auxiliary qubits to both minimize short-time evolution error and extend the approximate scheme's performance to longer evolution times. We then expand the scheme to include limited quantum information transfer through selective qubit shuffling or teleportation, broadening our method's applicability and boosting its performance. Finally, we build upon these concepts to produce an approximate circuit-cutting technique for the fragmented pre-training of variational quantum algorithms. To characterize our technique, we introduce a non-linear perturbation theory that discerns the critical role of our mean-field corrections in optimization and may be suitable for analyzing other non-linear quantum techniques. This fragmented pre-training is remarkably successful, reducing algorithmic error by orders of magnitude while requiring fewer iterations.
_Keywords_: Distributed Quantum Computing, Near-term Quantum Computing, Quantum Simulation, Variational Quantum Algorithms
## 1 Introduction
One prospective trajectory for quantum information hardware is distributed quantum computing [1, 2, 3], the quantum analog of the celebrated classical field [4, 5, 6, 7]. Distributed quantum computing seeks to eliminate the need for large, monolithic quantum computers, which suffer from cooperative noise [8, 9]. Instead, large-scale problems will be split among many smaller quantum computers that are in communication with each other via a _quantum
interconnect_, a standardized form of quantum communication between remote quantum computing platforms [10, 11].
While the benefits of distributed quantum computing are abundant, many obstacles complicate its realization. For instance, due to the no-cloning theorem [12], extensive quantum entanglement would be a required component of quantum interconnects in order to enable non-local operations such as quantum teleportation [1, 2, 9, 13]. Moreover, fault-tolerant quantum computing would be needed to compute and transmit quantum information between distributed simulators reliably [11, 14]. Finally, long coherence times or relatively local topology would be necessary to manage the time delays associated with communication between remote locations [9, 15].
Nevertheless, the promise of scalability continues to inspire research in various facets of distributed quantum computing. Researchers have characterized the compilation of quantum circuits into cohesive network instructions [16] and devised a language to communicate such instructions more efficiently than conventional circuit diagrams [17]. Likewise, much work has been done to develop the non-local operations integral to distributed quantum computing, which have been supported with experimental realizations [18, 19, 20, 13]. Other studies have developed algorithms tailored to quantum distributed architectures, including Shor's algorithm, quantum sensing, and combinatorial optimization [21, 22, 23, 24], while additional research has focused on the quantum advantage provided by quantum distributed computing [25, 26, 27, 28]. Still other research has addressed how to approach distributed algorithm design [29, 24], the effect of noise in distributed quantum computing [30], architecture selection and scalability [31, 32, 14, 33], and resource allocation [34, 35, 36], particularly to optimize teleportation cost [37, 38, 39].
Although the interest in its theoretical application continues to grow, a wide gap remains between much distributed quantum computing research and its physical implementation. Research along a different vein has instead concentrated on applications that are realizable using near-term hardware, stretching the limit of noisy quantum simulators' utility. Although not distributed in the sense of the works discussed above (which assume that the distributed hardware forms a quantum network), these approaches involve small groups of qubits simulated in parallel or in sequence to address a larger problem. Entanglement forging is one such approach [40], which relies on shifting computation to classical post-processing in order to assemble information from two smaller circuits, thereby halving the maximum circuit size required for the calculation. The quantum tensor network approach uses the framework of tensor networks to identify weakly entangled subgroups and parallelize quantum simulation [41]. Similarly, Quantum Multi-Programming (QMP) takes advantage of the increasing size of available quantum simulators to execute multiple shallow quantum circuits concurrently [42, 43].
In order to bridge the distributed quantum computing paradigm with the capabilities of near and moderate-term hardware, in this manuscript, we design two procedures that approximately link distributed simulators while remaining amenable to small-scale, noisy devices. Our schemes of fragmented quantum simulation explore what problems can be
addressed without full information transfer between hardware. First, focusing on the task of time evolution, we partition a system of qubits into subgroups (referred to as _fragments_) that are treated separately. We harness mean-field measurements to inform mean-field corrections [44] that link the distinct fragments. These simulations could be executed in parallel on a single simulator (as in QMP [42, 43]), outsourced to different simulators (as in distributed computing [1, 2, 9, 13]), or even simulated using a mixture of classical and quantum resources (as in _heterogeneous_ computing [45, 46]). We further make use of a limited number of auxiliary qubits to mimic the presence of the qubits located on distant simulators.
In our first approach to distributed time evolution, we rely on classical communication to transmit partial state information between distant simulators through measurements, omitting a quantum link between devices. Transmitting incomplete information reduces the generally exponential number of measurements required to relay complete information of a quantum state via a classical channel. For locally interacting systems, the classical fragmentation scheme closely approximates quantities local to each fragment - including the fidelity of the fragment - for timescales up to several \(1/J\), where \(J\) weights the system's interactions. We present a second scheme that is supplemented by limited quantum information transfer,
Figure 1: 1a: Diagram illustrating how a 6-qubit system can be split into two fragments. Interactions \(J^{\alpha\beta}_{ij}\) are represented by lines between qubits; one label is included for clarity. Interactions that span the two fragments form the interface \(I\). 1b: The case where the two fragments are linked via a classical channel. Mean-field measurements \(\langle\hat{S}^{(j)}_{\beta}\rangle\) are exchanged classically. One auxiliary qubit \(a_{1,2}\) is included in each fragment’s simulation, interacting with the fragment qubits according to a target qubit in the opposite fragment (identified in the figure by a blue / green circle). 1c: The case where the two fragments are linked via a quantum channel. While mean-field measurements \(\langle\hat{S}^{(j)}_{\beta}\rangle\) are still exchanged classically, the auxiliary qubits are physically shared between fragments using some form of quantum communication.
consequently composing an interface of classical and partial quantum information transfer that approximately connects quantum simulators. We show numerically that the limited use of quantum communication significantly extends the scheme's performance to longer evolution times, even for long-range interacting systems. As non-local operations become more available, this technique could be employed in moderate-term distributed applications before a fully connected quantum network is achievable.
Using the same fragmentation framework, we devise a fragmented pre-training approach for variational quantum algorithms, focusing on the variational quantum eigensolver algorithm (VQE) [47]. The pre-training can be performed classically or using resource-limited hardware, as only portions of the full circuit are considered. For classical MaxCut problem graphs, the pre-training method reduces energy error by various orders of magnitude on average, and requires over an order of magnitude fewer circuit preparations. For transverse field Ising-like models [48, 49] outside of the classical domain, our pre-training scheme maintains a significant advantage in the regime of a small transverse field \(h\).
The remainder of the paper is organized as follows. In Section 2, we first present a fragmented approach to quantum simulation that only involves the classical transfer of partial state information. We further consider an alternate scheme for the case of linking quantum simulators with reduced quantum information transfer through selective qubit shuttling [50] or teleportation [51, 52], in addition to classical information transfer. In Section 3, the performance of each scheme is evaluated for the time evolution of quantum Ising-like spin Hamiltonians [53], which are amenable to quantum simulation using trapped ions and Rydberg platforms [54, 55]. Finally, in Section 4 we expand the scheme to apply to the optimization of quantum circuits. The use of our fragmentation scheme to assist VQE is evaluated in Section 5[47]. The role of mean-field corrections in the optimization through the lens of perturbation theory [56] is explored in Sections 5.2.2 and 5.2.3. In Section 5.2.3, we introduce a non-linear perturbation theory to study mean-field corrected Hamiltonians, analytically formalizing the success of our pre-training approach.
## 2 Fragmented Quantum Simulation
In our method of fragmented quantum simulation, we divide a system of \(N\) qubits into two or more sub-systems, here referred to as _fragments_ (see Fig. 1a). Each fragment contains some number of qubits \(N_{f}<N\), such that \(\sum_{f}N_{f}=N\). The fragments are treated separately, but it is possible to approximate the presence of a fragment's _environment_, that is, the qubits outside of a given fragment, through corrective fields and interactions [57]. We devise mean-field corrections (described in detail in Section 2.1) [44], which are informed by measurements of a fragment's environment, to actively adjust the state of a fragment. Corrective interactions are mediated by the inclusion of auxiliary qubits within each fragment's simulation, such that \(\sum_{f}N_{f+a}>N\), where \(N_{f+a}=N_{f}+N_{a}\) and \(N_{a}\) represents the number of auxiliary qubits included in fragment \(f\). Each auxiliary qubit mimics the behavior of one environment qubit, which we refer to as the _target_ qubit for that auxiliary. Each auxiliary qubit interacts with
the fragment's qubits according to the same interaction terms as the corresponding target qubit, as prescribed by the original Hamiltonian, enabling entanglement to grow beyond the \(N_{f}\) fragment qubits.
Fig. 1b provides an overview of our classically-linked fragmentation scheme, and a detailed diagram is provided in Fig. A1. We define a fragment's _interface_\(I\) to be the collection of interactions existing in the original Hamiltonian that act between fragment qubits and environment qubits. The combination of auxiliary qubits and mean-field corrections collectively mimics the action of the interface on the fragment. The growth and faithfulness of the entanglement within a fragment will be limited by the number of auxiliary qubits included - an unavoidable limitation of the scheme - but the effects of this limitation can be mitigated through judicious fragmentation of the system. Firstly, to mitigate fragmentation error (that is, the error produced by the omission of some system interactions and the resultant reduction of Hilbert space), one can choose to divide the system qubits such that the qubits interacting most influentially with each other are confined to a single fragment. Secondly, it is possible to make an informed choice of target qubit for each auxiliary. This is explored further in Section 2.2.
### Mean-Field Corrections
Consider the class of spin models:
\[H=-\sum_{\langle i,j\rangle}\sum_{\alpha,\beta}J^{\alpha,\beta}_{ij}\hat{S}^{( i)}_{\alpha}\hat{S}^{(j)}_{\beta}-\sum_{i=1}^{N}h_{i}\hat{S}^{(i)}_{x}. \tag{1}\]
Here, \(\hat{S}^{(i)}_{\alpha}\) and \(\hat{S}^{(j)}_{\beta}\) are spin-1/2 spin operators acting on sites \(i\) and \(j\), where \(\alpha,\beta\in\{x,y,z\}\). The coefficient \(J^{\alpha,\beta}_{ij}\) gives the strength and sign of the interaction. For concreteness and without loss of generality, we have selected transverse fields \(h_{i}\) to point along the x-axis. The Hamiltonian acting strictly within some sub-system \(f\) will neglect any operators acting outside of \(f\), yielding
\[H^{(f)}=-\sum_{\langle i,j\rangle\in f}\sum_{\alpha,\beta}J^{\alpha,\beta}_{ ij}\hat{S}^{(i)}_{\alpha}\hat{S}^{(j)}_{\beta}-\sum_{i\in f}h_{i}\hat{S}^{(i)}_{x}, \tag{2}\]
the bare Hamiltonian that acts within a fragment \(f\) when no corrections are included.
Clearly, the simple exclusion of interactions that span the interface between \(f\) and its environment (i.e., the fragmented evolution of \(f\) under \(H^{(f)}\)) will, in general, poorly approximate the evolution of the sub-system under the full Hamiltonian. The fragment qubits will behave as a closed system without external interactions. Although generally these interactions cannot be exactly simulated without modeling all of the system's spins on a single fragment, we introduce a mean-field to partially capture the action of each missing interaction. Mean-field methods have frequently been used to simplify the simulation and study of quantum systems, and statistical physics [44, 58, 59]. Here, the strength and sign of the introduced mean-field correction is informed by the measurement of the corresponding environment spin, while the correction's axis is determined by that of the corresponding interaction's spin operator
that would act within fragment \(f\). The resulting mean-field corrected Hamiltonian is given by:
\[H_{MF}^{(f)}=-\sum_{\begin{subarray}{c}\langle i,j\rangle\in I,\\ i\in f\end{subarray}}\sum_{\alpha,\beta}J_{ij}^{\alpha,\beta}\hat{S}_{\alpha}^{ (i)}\langle\hat{S}_{\beta}^{(j)}\rangle-\sum_{\langle i,j\rangle\in f}\sum_{ \alpha,\beta}J_{ij}^{\alpha,\beta}\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j)} -\sum_{i\in f}h_{i}\hat{S}_{x}^{(i)}. \tag{3}\]
The strength and direction of the mean-fields appearing in \(H_{MF}^{(f)}\) should be updated regularly to reflect the current state of the environment spins. Physically, this requires regular mean-field measurements of the fragments. Evolution must therefore be reset to the initial state in order to proceed by one time step \(dt\), with each new mean-field measurement being stored to progress the evolution. The process of incrementing the time evolution by one time step per simulation is commonly implemented in order to track the time dynamics of an observable [60], resulting in a complexity that scales as \(\mathcal{O}(N_{t}^{2})\) in the number of time steps \(N_{t}\).
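To make this restart structure concrete, the following is a small NumPy sketch (ours, not the authors' PennyLane implementation) for two nearest-neighbour TFIM fragments coupled only through boundary mean fields, without auxiliary qubits; `frag_H` and `mf_evolution` are illustrative names:

```python
import numpy as np
from scipy.linalg import expm

Sx = np.array([[0., 1.], [1., 0.]]) / 2
Sz = np.array([[1., 0.], [0., -1.]]) / 2

def op(single, site, n):
    """Embed a single-site spin operator at `site` in an n-qubit space."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

def frag_H(J, h, n, mf_site, mf):
    """Eq. (3) for a 1D TFIM fragment with a single mean-field correction
    -J <S_z^(j)> S_z^(i) acting on the boundary spin."""
    H = -h * sum(op(Sx, k, n) for k in range(n))
    H = H - J * sum(op(Sz, k, n) @ op(Sz, k + 1, n) for k in range(n - 1))
    return H - J * mf * op(Sz, mf_site, n)

def mf_evolution(J=1.0, h=1.0, n=3, dt=0.1, steps=20):
    """Two n-qubit fragments of a 2n-spin chain. Evolution restarts from
    |0...0> at every step and replays the stored boundary measurements,
    which is the O(N_t^2) cost discussed in the text."""
    psi0 = np.eye(2 ** n, dtype=complex)[:, 0]
    hist_A, hist_B = [], []          # boundary <S_z> histories
    for step in range(steps):
        psiA, psiB = psi0.copy(), psi0.copy()
        for k in range(step):
            psiA = expm(-1j * frag_H(J, h, n, n - 1, hist_B[k]) * dt) @ psiA
            psiB = expm(-1j * frag_H(J, h, n, 0, hist_A[k]) * dt) @ psiB
        hist_A.append(np.real(np.vdot(psiA, op(Sz, n - 1, n) @ psiA)))
        hist_B.append(np.real(np.vdot(psiB, op(Sz, 0, n) @ psiB)))
    return hist_A, hist_B
```

On hardware, the stored expectation values would come from repeated measurement of the neighbouring fragment rather than from the statevector.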
### Auxiliary Target Spin Selection
For nearest-neighbor spin models (e.g., the transverse field Ising model [49]), the selection of a target spin for each auxiliary is somewhat trivial, as at most two qubits interact with a fragmented section of the chain. The choice of auxiliary qubit encoding may be unclear for more general systems. Here, we present a method for auxiliary target qubit selection that yields, on average, the optimal auxiliary qubit encoding. Specifically, we consider how auxiliary target selection affects the simulation error to the first non-vanishing order in \(dt\). This simulation error arises from the omission of interactions forming the interface of some particular fragment \(f\) and the remaining environment spins \(E\). The full derivation of the leading error is provided in A.2; here, we sketch the derivation and build on the result.
The fidelity between a system evolved using our fragmented procedure with that of the full system can be expressed as:
\[F(t)=|\langle\Psi|U^{\dagger}(t)U_{I}^{(f)}(t)|\Psi\rangle|^{2}. \tag{4}\]
The unitary operator \(U(t)=\exp(-iHt)\) evolves the system exactly under the full Hamiltonian \(H\), while \(U_{I}^{(f)}(t)=\exp(-iH_{I}^{(f)}t)\) evolves the system under a fragmented Hamiltonian that neglects interactions crossing the interface of fragment \(f\). In A.2, this expression is expanded for short times \(t\) to understand how the evolution error \(\epsilon(t)=1-F(t)\) depends on the strength of the neglected interactions. Through the use of Taylor expansion and the Baker-Campbell-Hausdorff (BCH) formula [61], we arrive at the first non-vanishing correction to the fidelity:
\[F(t)\approx 1-\mathrm{var}(H-H_{I}^{(f)})t^{2}, \tag{5}\]
where \(\mathrm{var}(\mathcal{O})\) is the quantum variance of operator \(\mathcal{O}\). The error \(\epsilon(t)=1-F(t)\) is thus given by \(\mathrm{var}(H-H_{I}^{(f)})t^{2}\) for short times \(t\).
The form of the short-time error provides a simple rule for choosing the target auxiliary qubits for fragment \(f\) to minimize error; namely, select the environment qubit(s) whose
interactions contribute most significantly to the variance \(\mathrm{var}(H-H_{I}^{(f)})\). This choice will minimize the short-time error of evolving the state by the fragmented Hamiltonian, which will lead to higher fidelity performance, on average (see Section 3.3). Moreover, if the auxiliary selection is updated sufficiently often, the selection becomes exact as the short-time error dominates from the time of one auxiliary encoding to the next.
### Practical Implementation of the Optimal Auxiliary Encoding
Although the final form of the short-time evolution error provides insight into optimal auxiliary selection, the procedure for estimating a particular qubit's contribution to the error within the distributed framework is less straightforward. For a general spin Hamiltonian, this variance is given by:
\[\mathrm{var}(H-H_{I}^{(f)})=\mathrm{var}\bigg{(}-\sum_{\langle i,j\rangle\in I}\sum_{\alpha,\beta}J_{ij}^{\alpha,\beta}\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j)}\bigg{)}\] \[=\sum_{\langle i,j\rangle\in I}\sum_{\langle i^{\prime},j^{\prime}\rangle\in I}\sum_{\alpha,\beta}\sum_{\alpha^{\prime},\beta^{\prime}}J_{ij}^{\alpha,\beta}J_{i^{\prime}j^{\prime}}^{\alpha^{\prime},\beta^{\prime}}\big{(}\langle\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j)}\hat{S}_{\alpha^{\prime}}^{(i^{\prime})}\hat{S}_{\beta^{\prime}}^{(j^{\prime})}\rangle-\langle\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j)}\rangle\langle\hat{S}_{\alpha^{\prime}}^{(i^{\prime})}\hat{S}_{\beta^{\prime}}^{(j^{\prime})}\rangle\big{)}. \tag{6}\]
Estimating the full variance of Eq. (6) requires 4-point correlation measurements. If the distributed simulators are linked solely via classical channels, correlation measurements are only accessible when all relevant qubits are local to a single fragment. This implies that two auxiliary qubits - one targeting \(j\) and one targeting \(j^{\prime}\) - must already be placed within the fragment in order to access the required 4-point correlator measurements. For \(N_{E}\) environment qubits, there are \(\mathcal{O}(N_{E}^{2})\) combinations, requiring \(\mathcal{O}(N_{E}^{2})\) copies of the system in order to estimate all required 4-point correlators, undermining (although not necessarily precluding) the motivations for fragmented quantum simulation with such a technique.
The correlator calculation simplifies significantly when the variance is calculated with respect to a known product state, but a new issue arises: for many spin model Hamiltonians, the variance will vanish for certain initial product states. In fact, for the case of the transverse field Ising model [49], this quantity vanishes for all computational basis states, providing no insight into the proper auxiliary choice.
We propose a two-part solution that addresses these issues. First, we propose a proxy \(v(a)\) that estimates the contribution of one potential auxiliary \(a\) to the variance:
\[v(a)=\sum_{\langle i,j\rangle\in I}\sum_{\langle i^{\prime},j^{\prime}\rangle \in I}\sum_{\alpha,\beta}\sum_{\alpha^{\prime},\beta^{\prime}}J_{ij}^{\alpha, \beta}J_{i^{\prime}j^{\prime}}^{\alpha^{\prime},\beta^{\prime}}\delta_{j,a} \delta_{j^{\prime},a}\big{(}\langle\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j) }\hat{S}_{\alpha^{\prime}}^{(i^{\prime})}\hat{S}_{\beta^{\prime}}^{(j^{\prime} )}\rangle-\langle\hat{S}_{\alpha}^{(i)}\hat{S}_{\beta}^{(j)}\rangle\langle \hat{S}_{\alpha^{\prime}}^{(i^{\prime})}\hat{S}_{\beta^{\prime}}^{(j^{\prime} )}\rangle\big{)}. \tag{7}\]
Inserting the two Kronecker deltas \(\delta_{j,a}\delta_{j^{\prime},a}\) eliminates the cross-terms in Eq. (6) that depend on multiple environment qubits. Thus, a single auxiliary is required to estimate \(v(a)\), and in total \(\mathcal{O}(N_{E})\) partitions are required to acquire \(v(a)\) for all potential auxiliary targets. In addition to requiring fewer measurements, this proxy focuses on \(a\)'s contribution to the variance while neglecting the cross-terms that involve contributions from other potential
auxiliary qubits. Secondly, to avoid scenarios where the variance vanishes for initial product states, we suggest first evolving the system for one time step \(dt\) for a particular choice of \(a\) before estimating \(v(a)\). Although this procedure is more involved than calculating \(v(a)\) for the initial product state directly, the overhead remains linear in the number of potential auxiliary qubits.
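For small systems where the statevector is accessible, the selection rule can be checked directly. The sketch below (ours) evaluates \(v(a)\) of Eq. (7) for Ising-type interactions (\(\alpha=\beta=z\)), where all operators in the four-point correlator commute; on hardware these expectations would instead be estimated from measurement statistics:

```python
import numpy as np

Sz = np.array([[1., 0.], [0., -1.]]) / 2

def op(single, site, n):
    """Embed a single-site spin operator at `site` in an n-qubit space."""
    out = np.array([[1.]])
    for k in range(n):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

def v_proxy(psi, J, frag, a):
    """Contribution of candidate auxiliary `a` to var(H - H_I^(f)),
    keeping only the diagonal (j = j' = a) terms of Eq. (6)."""
    n = int(np.log2(psi.size))
    Sa = op(Sz, a, n)
    val = 0.0
    for i in frag:
        for ip in frag:
            if J[i, a] == 0.0 or J[ip, a] == 0.0:
                continue
            four = np.vdot(psi, op(Sz, i, n) @ op(Sz, ip, n) @ Sa @ Sa @ psi)
            two1 = np.vdot(psi, op(Sz, i, n) @ Sa @ psi)
            two2 = np.vdot(psi, op(Sz, ip, n) @ Sa @ psi)
            val += J[i, a] * J[ip, a] * np.real(four - two1 * two2)
    return val

# Rank the candidates after one time step dt, then target the largest one:
# a_best = max(env_qubits, key=lambda a: v_proxy(psi_dt, J, frag, a))
```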
## 3 Application 1: Fragmented Time Evolution
### Simulators Linked via Classical Information
We first focus on the scheme free of quantum information transfer, where the auxiliary qubits are selected at the beginning of the simulation and fixed to target a single environment qubit throughout the evolution. We refer the reader to Fig. 1 and Appendix A.1 for an in-depth look at how a system is fragmented for time evolution.
As a representative example, consider the transverse field Ising model (TFIM) [48, 49] with a uniform transverse field:
\[H_{TFIM}=-J\sum_{\langle i,j\rangle}\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}-h\sum_{i=1}^{N}\hat{S}_{x}^{(i)}. \tag{8}\]
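For reference, \(H_{TFIM}\) can be assembled directly in PennyLane (the library used for the exact evolution below); the spin-1/2 convention \(\hat{S}=\sigma/2\) produces the \(1/4\) and \(1/2\) prefactors. This construction is our own sketch, not the authors' code:

```python
import pennylane as qml

def tfim_hamiltonian(n, J=1.0, h=1.0):
    """Nearest-neighbour TFIM of Eq. (8) with S_z = Z/2 and S_x = X/2."""
    coeffs = [-J / 4] * (n - 1) + [-h / 2] * n
    ops = [qml.PauliZ(i) @ qml.PauliZ(i + 1) for i in range(n - 1)]
    ops += [qml.PauliX(i) for i in range(n)]
    return qml.Hamiltonian(coeffs, ops)
```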
Figure 2: Nearest neighbor TFIM with constant \(J=1.0\). To produce 2a, \(N=12\) qubits are split into two fragments, and the fidelity between the fragment qubits’ state and the exactly evolved system is plotted for various numbers of auxiliary qubits, with (dashed lines) and without (solid lines) mean-field corrections. The fidelity is averaged over non-zero \(h\) values ranging between \(\pm 1\). Performance progressively increases with increasing \(N_{a}\) and the addition of mean-field corrections. 2b displays the local expectation of \(\hat{S}_{z}\) and \(\hat{S}_{x}\) for a system of \(N=12\) qubits for the specific case of \(h=1.0\), with the corner label indicating site index. Here, we fragment the system into four fragments, each containing three qubits, and contrast the case of no communication (in red) to that of including \(N_{a}=2\) auxiliary qubits and mean-field corrections, which match the exact expectation values for longer simulation times.
This system has been studied in depth to better understand the physics of quantum phase transitions [62, 63, 64]. We consider the evolution of a quantum system initialized in the computational basis state \(|\mathbf{0}\rangle\) under \(H_{TFIM}\), implementing exact unitary evolution numerically using PennyLane [65] with mean-field measurements updated every \(dt=0.1/J\). Fig. 2 displays the scheme's performance for a 12-qubit model with nearest neighbor interactions of \(J=1.0\). The results presented in Fig. 2a are averaged over non-zero transverse fields \(h\) ranging from \(\pm 1\), while Fig. 2b features the specific case of \(h=1.0\). In Fig. 2a, the system is split into two fragments, each simulating six of the system qubits and some number of auxiliary qubits. The average of the quantity \(F_{f}\) is plotted, which we define as the fidelity between the reduced density operator of the system qubits within the fragment (tracing out any auxiliary qubits \(a\)) and the reduced density of the same system qubits for the exact evolution of the full system (tracing out all environment qubits forming \(E\)). We use the generalization of fidelity for density matrices [66] to enable the focused evaluation of the fragment sub-system:
\[F_{f}=\Bigg{(}\mathrm{Tr}\ \sqrt{\sqrt{\rho_{f}}\rho_{f}^{ex}\sqrt{\rho_{f}}} \Bigg{)}^{2}, \tag{9}\]
\[\rho_{f}=\mathrm{Tr}_{a}\ \rho_{f+a}, \tag{10}\]
\[\rho_{f}^{ex}=\mathrm{Tr}_{E}\ \rho^{ex}. \tag{11}\]
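Numerically, Eqs. (9)-(11) amount to partial traces followed by the Uhlmann fidelity; a minimal NumPy/SciPy sketch (the reshaping convention in `partial_trace` is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def partial_trace(rho, keep, n):
    """Trace all qubits not in `keep` out of an n-qubit density matrix."""
    rho = rho.reshape([2] * (2 * n))
    removed = 0
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + n - removed)
        removed += 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def fragment_fidelity(rho_f, rho_f_exact):
    """F_f = (Tr sqrt(sqrt(rho_f) rho_f^ex sqrt(rho_f)))^2, Eq. (9)."""
    s = sqrtm(rho_f)
    return np.real(np.trace(sqrtm(s @ rho_f_exact @ s))) ** 2
```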
For a short evolution time, the scheme captures the correct state of the system qubits within the fragment. This time can be extended by the inclusion of additional auxiliary qubits.
In Fig. 2b, we consider a specific instance of the TFIM with \(J=1.0\) and \(h=1.0\). To test the scheme, we split the system into smaller partitions with \(N_{f}=3\). When we make use of two auxiliary qubits and mean-field corrections, the local expectation values exhibit little error for several units of \(Jt\), as expected from the fidelity results.
In Fig. 3, we examine the scaling performance for the nearest neighbor model, increasing the number of fragments simulated with increasing \(N\) (keeping \(N_{f}\) constant) with \(N_{a}=2\). In the left panel, the first fragment is considered. This fragment contains the boundary of the chain, and consequently, the fragment's interface consists of only one missing interaction. The right panel considers the second fragment, which is on the interior of the chain for \(N>6\) and consequently neglects two interactions, leading to reduced performance. This is manifested in the reduced fragment fidelity going from \(N=6\) to \(N=9\) for fragment 2. However, there is no such visible drop going from \(N=9\) to \(N=12\) due to the monogamy of entanglement [67] - that is, although the number of qubits in the system grows, the qubits that are most strongly entangled with each other remain local to one fragment, and thus the amount of lost information shrinks as \(N\) is further increased. We therefore expect our classical scheme to scale well with \(N\) for systems that are locally interacting, and to serve as a strong approximation for moderate evolution times.
### The Addition of Quantum Information Transfer
Next, we examine the case of selective quantum information transfer between quantum simulators, applicable when non-local operations are available, even if only in a limited capacity. In this case, the fragmentation scheme can be modified to include limited quantum information transfer (a _quantum channel_ [9]). The role of the auxiliary qubits shifts from being bystanders confined to a single fragment to qubits that are physically shared between simulators through selective non-local interactions, accomplished through qubit shuttling [50] or teleportation [51, 52] (see Fig. 1c). If the simulations are being executed in parallel on a single quantum simulator [42, 43], this would only require a few additional SWAP gates to include a limited number of cross-simulation interactions. In addition to providing more complete information transfer, a quantum channel further enables the active correction of auxiliary encoding as the system evolves. The selected number of auxiliary qubits places a limit on the number of qubits that are physically teleported / shuttled to a fragment; however, which environment qubits play this role can be changed from one time step to the next, depending on which potential auxiliary qubit(s) have the largest contribution to the most recent estimate of the short-time error. When quantum channels and synchronized measurements are available, all correlation measurements are accessible. The quantity \(v(a)\) can thus be estimated for any \(a\) at any time. As the potential auxiliary qubits' contributions to the variance shift, new auxiliary qubits can be selected - that is, we can make a new selection for which qubit(s) physically interact with a fragment native to a different simulator. If the time steps are sufficiently small such that the first non-vanishing order in the error dominates, then this becomes optimal even for long simulation times.

Figure 3: Scaling performance of the classical scheme for the nearest neighbor TFIM with \(J=1.0\), averaged over \(h\) values ranging from \(-1\) to \(1\). The number of system qubits within a fragment \(N_{f}\) is kept constant as \(N\) is increased, with \(N_{a}\) fixed to be zero (no communication, in black/gray) or two (in blue). The left panel plots \(\langle F_{f}\rangle\) for the first fragment (which includes the boundary qubit and thus only involves one interaction crossing the interface), while the right panel plots \(\langle F_{f}\rangle\) for the second fragment (which, for \(N=9\) and \(N=12\), is an _interior_ fragment with two interactions crossing the interface).
To evaluate this scheme numerically, we abstract away the details of information transport; this topic has been investigated by other research in the context of distributed time evolution [68]. In our actively updated simulation, the quantity \(v(a)\) is calculated for each potential auxiliary qubit at each time step to determine its contribution to the short-time error. This requires the estimation of correlators between each potential auxiliary and each fragment system qubit (requiring \(\mathcal{O}(N_{f}N_{E}N_{\alpha\beta}^{2})\) per fragment, where \(N_{\alpha\beta}\) is the number of \(\alpha\beta\) interaction types), but if there is only one kind of interaction (as is the case for the TFIM and other Ising-like models, with \(\alpha=\beta=z\)), all relevant correlators can be estimated from a set of full system snapshot measurements. The largest contributors are selected to be auxiliaries - numerically, this amounts to keeping the interactions between these qubits and the fragment qubits, while zeroing the \(J_{ij}^{\alpha\beta}\) coefficients of all other environment-fragment interactions (see A.3). Any zeroed interactions can be approximately included via mean-field corrections. At the next time step, the selection of zeroed interactions might change due to a change in the selected auxiliaries for each fragment, as dictated by the short-time error.
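A minimal sketch of this selection step is given below, assuming the contribution of a potential auxiliary \(a\) can be scored from estimated means \(\langle\hat{S}_{z}\rangle\) and two-point correlators weighted by the interface couplings; the exact combination defining \(v(a)\) is derived earlier in the paper, so the scoring rule here is illustrative only.

```python
import numpy as np

def select_auxiliaries(J, frag_qubits, env_qubits, zz, z, n_aux=2):
    """Rank environment qubits by a proxy for their short-time error
    contribution v(a), built from connected ZZ correlators across the
    interface, and return the top `n_aux` choices.
    zz[a, i] ~ <Sz_a Sz_i> and z[q] ~ <Sz_q> are estimates from
    measurements; the weighting below is an illustrative assumption."""
    scores = {}
    for a in env_qubits:
        scores[a] = sum(
            J[a, i] ** 2 * (zz[a, i] - z[a] * z[i]) ** 2
            for i in frag_qubits
        )
    ranked = sorted(env_qubits, key=scores.get, reverse=True)
    return ranked[:n_aux]

def zero_unselected_couplings(J, frag_qubits, env_qubits, aux):
    """Keep interactions between the selected auxiliaries and the
    fragment; zero all other environment-fragment couplings (the
    dropped ones can be approximately reinstated via mean fields)."""
    J = J.copy()
    for a in env_qubits:
        if a not in aux:
            for i in frag_qubits:
                J[a, i] = J[i, a] = 0.0
    return J
```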
Fig. 4 compares this scheme (labeled "Q. Channel" to indicate the addition of quantum information transfer) to the previous scheme in Section 3.1, which involves only classical information transfer ("C. Channel"). The graph plots the fragment fidelity \(F_{f}\) averaged over 100 transverse field Ising-like models with \(h=1.0\) and randomly generated graphs \(J_{ij}\). Each edge \(ij\) exists with probability 0.5, and edge weights \(J_{ij}\) are sampled from a Gaussian distribution with mean \(\mu=0.0\) and width \(\sigma=1.0\). Furthermore, we randomly select a computational basis state to initialize the fragmented system. Although both schemes outperform the case of no information transfer (in red), the complicated long-range nature of the Hamiltonians considered challenges the previous scheme, which only employs classical information transfer. In contrast, the quantum scheme preserves a large fragment fidelity, even at late simulation times.

Figure 4: Comparison between the scheme involving only classical information transfer (dark teal) and that involving limited quantum and classical information transfer (light green). For reference, the independent case (no information transfer) is included in red. The results are averaged over 100 Ising-like Hamiltonians with constant \(h=1.0\) and randomly generated graphs \(J_{ij}\) (see Section 3.2). The \(N\) qubits are split into groups such that \(N_{f}=3\), with an additional \(N_{a}=2\) auxiliary qubits employed in the simulation.
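For concreteness, the random ensemble described above can be generated as follows (a short numpy sketch under the stated sampling assumptions: edge probability 0.5, Gaussian weights, random computational-basis initial state).

```python
import numpy as np

def random_ising_instance(n_qubits, edge_prob=0.5, seed=None):
    """Sample one Ising-like instance as in Section 3.2: each edge ij
    exists with probability `edge_prob` and carries a Gaussian weight
    J_ij ~ N(0, 1); the initial state is a random computational basis
    state, returned as a bit string."""
    rng = np.random.default_rng(seed)
    J = np.zeros((n_qubits, n_qubits))
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            if rng.random() < edge_prob:
                J[i, j] = J[j, i] = rng.normal(0.0, 1.0)
    bits = rng.integers(0, 2, size=n_qubits)
    return J, bits
```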
### Short-Time Error Auxiliary Selection
The benefit of using short-time error to inform auxiliary selection can be isolated by evaluating the performance of each auxiliary choice independently. Consider a system of \(N=12\) qubits, fragmented into two groups of \(N_{f}=6\). This leaves six environment qubits from the perspective of each fragment that could be targeted by an auxiliary qubit. Selecting two auxiliary qubits (\(N_{a}=2\)), we rank the six potential choices for target auxiliary encoding according to the size of \(v(a)\). In Fig. 5, the six target encoding choices are divided into three groups of two based on \(v(a)\), and each option is explored for randomly generated transverse field Ising-like Hamiltonians with \(h=1.0\), as considered in the previous section. A total of 100 such Hamiltonians are generated and simulated; the averaged results are presented in Fig. 5, where \(v_{0}\) corresponds to encoding the two environment qubits with the largest value for \(v(a)\). On the left, the results are plotted for the case of classical information transfer. Any separation between the fidelity curves corresponding to different auxiliary choices indicates that the \(v(a)\) metric meaningfully separates the potential auxiliary choices according to fidelity performance. The fact that the ordering corresponds to the ranked choice is evidence that using short-time error to select auxiliary encoding propagates to better performance at later times. In red, we consider random auxiliary encoding. The random performance roughly converges to the middle-ranked choice \(v_{1}\) and can be thought of as the performance averaged over auxiliary encoding. In the center, the results are plotted for the case of additional quantum information transfer, without actively updating the auxiliary encoding. The results qualitatively match those of the classical case, with slightly better performance overall, consistent with Fig. 4. In the right panel, we consider the quantum channel with actively updated auxiliary encoding. In this case, \(v_{0}\) (\(v_{2}\)) corresponds to selecting the two auxiliary targets with the largest (smallest) values for \(v(a)\) at each decision. The performance of \(v_{0}\) marginally increases with the introduction of active updates, while the performance of \(v_{2}\) marginally decreases. However, the random performance increases most markedly. Here, the rapid shuffling of auxiliary qubit encoding allows the fragments to quickly share information, leading to performance comparable to the optimal variance choice, \(v_{0}\). The random, actively updated case has the added advantage of being measurement-efficient, as it forgoes any variance estimation, but realizing the highly frequent change of auxiliary encoding may incur an overhead in qubit routing / swapping.
Finally, we note that in the averaged results presented in Fig. 5, the mean-field corrected simulation (plotted with a dashed line) outperforms the corresponding simulation that fully neglects these interface interactions for every case considered. Appendix B investigates the use of mean-field corrections to reduce simulation error through a numerical study.
## 4 Fragmented Quantum Circuits
We now investigate the use of fragmentation in quantum circuit evolution. Consider the fragmentation of a parameterized quantum circuit (PQC) of size \(N\) into multiple smaller PQCs. To fragment a circuit, multi-qubit unitaries that act on qubits outside the \(N_{f+a}\) qubits devoted to a single sub-system's PQC are neglected. Although this resembles the first step of circuit cutting techniques [69], no data processing is required to reconstruct the cut gates; they are simply ignored. Crucially, some auxiliary qubits are included in each sub-system PQC, such that the full set of sub-system PQCs overlap with one another and \(\sum_{f}N_{f+a}>N\) (see Fig. 6). The collection of fragmented circuits can be optimized alone prior to optimizing the full circuit as a new approach to _pre-training_, commonly employed to boost variational quantum algorithms [70, 71, 72, 73, 74, 75, 76]. Pre-training generally uses classical resources and can greatly increase the accuracy of a variational algorithm's solution, which is crucial for many applications such as reaching chemical accuracy for quantum chemistry problems [77, 78, 79]. Our pre-training approach is motivated by the fact that the parameter solutions of the smaller circuits are expected to be smoothly connected to the parameter solutions of the full quantum circuit, as explored by [80]. We constrain the pre-training to use small circuits that are cheap to simulate classically. Furthermore, employing smaller circuits limits entanglement growth, which has been shown to improve training and avoid barren plateaus [80, 81, 82, 83, 84].

Figure 5: The effect of auxiliary selection on simulation performance. The curves are the averaged results for 100 different transverse field Ising-like Hamiltonians with \(h=1.0\) and the randomly generated graphs \(J_{ij}\) described in Section 3.2. Here, \(N=12\) with \(N_{f}=6\) and \(N_{a}=2\). The auxiliary target choices are ranked according to the size of \(v(a)\), such that the two environment qubits with the largest \(v(a)\) are used in simulation \(v_{0}\), the two with the smallest \(v(a)\) are used in simulation \(v_{2}\), and the remaining two auxiliary choices are used in simulation \(v_{1}\). Additionally, in red, we consider the case of randomly selecting two auxiliary target qubits with no variance calculation. In the left panel (classical communication) and center panel (quantum communication), the selection is made after one time step, and the choice remains fixed throughout evolution. In the right panel (quantum communication), the selection is re-evaluated at each time step.
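In code, the fragmentation step just described reduces to a filter over a gate list; the sketch below uses our own minimal gate representation (not an API from the paper), and the register layout is a hypothetical choice loosely mirroring Fig. 6.

```python
def fragment_gate_list(gates, registers):
    """Split one circuit into overlapping sub-circuits.  `gates` is a
    list of dicts like {"name": "CRZ", "wires": (2, 3), "param": "t7"};
    `registers` lists each fragment's system + auxiliary qubits.  A gate
    is kept by a sub-circuit only if all of its wires lie inside that
    sub-circuit's register; everything else is simply dropped, with no
    post-processing reconstruction."""
    sub_circuits = []
    for reg in registers:
        reg = set(reg)
        sub_circuits.append([g for g in gates if set(g["wires"]) <= reg])
    return sub_circuits

# Example overlapping registers for N = 6 with three two-qubit system
# groups and two auxiliary qubits each (an assumed layout):
registers = [(0, 1, 2, 3), (2, 3, 4, 5), (4, 5, 0, 1)]
```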
## 5 Application 2: Fragment-Initialized VQE
Our method of fragmenting a quantum circuit can be applied to classically pre-train quantum circuit parameters for the variational quantum eigensolver (VQE) [47]. For this application, a PQC of size \(N\) is divided into smaller PQCs, each having size \(N_{f+a}<N\). To optimize each sub-system PQC, the mean-field-corrected Hamiltonian given in Eq. (3) is minimized. In addition to facilitating the study of quantum systems and statistical physics, mean-field methods have been introduced for data analysis and loss function modification [85, 86, 87, 88]. In our pre-training technique, employing mean-field terms serves to link the optimization of the separate circuits by their current mean-field measurements. Overlapping parameters (that is, parameters shared by two fragmented PQCs) are initialized for one PQC using the most recent values from the other, further uniting the separate circuit optimizations. The mean-field measurements are updated regularly, and optimization halts when the steady state (up to some set precision) is reached for all parameters - those shared and those unique to one PQC - or the maximum number of iterations is reached. The algorithm is outlined in Algorithm 1.
Figure 6: Diagram depicting how a circuit can be fragmented into a number of smaller circuits with overlapping registers, analogous to the inclusion of auxiliary qubits. The \(N=6\) qubits are partitioned into three groups of two (with \(q_{1},q_{2}\) addressed by the top PQC, \(q_{3},q_{4}\) addressed by the middle PQC, and \(q_{5},q_{6}\) addressed by the bottom PQC). Two additional auxiliary registers are included in each small PQC, such that some of the parameterized two-qubit gates appear in multiple PQCs. Gates that address qubits beyond the scope of one PQC are neglected by that particular circuit.
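The structure of this optimization can be summarized in the following skeleton, which reflects our reading of Algorithm 1; the fragment interface, the optimizer step, and the mean-field measurement routine are placeholders to be supplied by an implementation.

```python
import numpy as np

def fragmented_pretraining(fragments, step, measure_z, max_iters=5000,
                           tol=1e-6, check_every=100):
    """Skeleton of the fragmented pre-training loop (our reading of
    Algorithm 1; the callables are placeholders).
      fragments : objects exposing .init_params() and .overlap_with(other),
                  the latter yielding pairs of shared parameter indices
      step      : one optimizer update of a fragment's parameters against
                  its mean-field-corrected loss, Eq. (3)
      measure_z : returns the <Sz> values a fragment exports as
                  mean-field inputs for the others"""
    params = [f.init_params() for f in fragments]
    mf = [measure_z(f, p) for f, p in zip(fragments, params)]
    previous = None
    for it in range(1, max_iters + 1):
        for k, f in enumerate(fragments):
            # seed shared (overlapping) parameters with the most recent
            # values from the other fragments before updating this one
            for k2, f2 in enumerate(fragments):
                if k2 != k:
                    for mine, theirs in f.overlap_with(f2):
                        params[k][mine] = params[k2][theirs]
            params[k] = step(f, params[k], mf)
        if it % check_every == 0:
            # refresh mean-field corrections and test for a steady state
            mf = [measure_z(f, p) for f, p in zip(fragments, params)]
            flat = np.concatenate(params)
            if previous is not None and np.max(np.abs(flat - previous)) < tol:
                break  # all parameters (shared and unique) are steady
            previous = flat
    return params
```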
### Details of Ansatz
We focus on pre-training brickwork circuits with a limited number of layers to constrain entanglement growth between fragments. Although a circuit ansatz with high complexity is often necessary for interesting VQE applications in order to provide enough expressivity to reach the ground state [89, 90], fragmentation-based pre-training is still beneficial through the use of a layer-wise approach [91]. If a shallow brickwork circuit is placed ahead of a more expressive PQC ansatz, the brickwork layers can first be optimized using the fragmented approach. These layers serve to bring the state of the system to have some ground state overlap. The full circuit VQE can then be performed, initializing the leading brickwork layers of the circuit with the pre-trained parameter values and initializing the remaining gates of the ansatz to approximately act as identity - specifically, we choose to randomly initialize these parameters to be small values bounded by \(\pm\varepsilon\) (with \(\varepsilon=10^{-5}\) for our results), to balance maintaining the optimized action of the initial layers after pre-training while avoiding training issues associated with a true identity initialization [70, 92]. The overall circuit layout is outlined in Fig. 7.
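A sketch of this layered ansatz in PennyLane (the package used for the paper's numerics) is given below; the parameter bookkeeping and the exact brick pattern are our assumptions. The layer builders are meant to be called inside a QNode.

```python
import pennylane as qml

def brickwork_layer(theta, wires):
    """One shallow layer l(theta): RX then RY on every qubit, followed by
    nearest-neighbor controlled-Z rotations in a brick pattern.  Expects
    2 * len(wires) + (len(wires) - 1) parameters."""
    k = 0
    for w in wires:
        qml.RX(theta[k], wires=w); k += 1
        qml.RY(theta[k], wires=w); k += 1
    for parity in (0, 1):
        for i in range(parity, len(wires) - 1, 2):
            qml.CRZ(theta[k], wires=[wires[i], wires[i + 1]]); k += 1

def all_to_all_layer(phi, wires):
    """One expressive layer l(phi) with an all-to-all entangling block.
    Expects 2 * n + n * (n - 1) / 2 parameters for n = len(wires)."""
    k = 0
    for w in wires:
        qml.RX(phi[k], wires=w); k += 1
        qml.RY(phi[k], wires=w); k += 1
    for i in range(len(wires)):
        for j in range(i + 1, len(wires)):
            qml.CRZ(phi[k], wires=[wires[i], wires[j]]); k += 1

# After pre-training, the brickwork thetas are loaded from the fragments,
# while the all-to-all phis start uniformly in [-1e-5, 1e-5] so that the
# trailing layers approximately act as identity.
```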
### Performance and Analysis of Fragmented Pre-Training
To evaluate our pre-training method, we focus on random Ising-like models. In Section 5.2.1, we present the numerical performance of the scheme for the classical case of zero transverse field (\(h=0\)). Having established the advantage of the approach, in Section 5.2.2 we show that its success stems from the mean-field corrective terms included in the loss function, which shift the global minimum of the collective fragmented circuit to coincide with that of the full optimization problem. Finally, in Section 5.2.3, we use perturbation theory and numerical simulation to demonstrate that our approach remains beneficial for \(|h|>0\), in the regime of a weak transverse field.
#### 5.2.1 VQE Results for MaxCut
We first benchmark the scheme using randomly generated classical Ising Hamiltonians, where the all-to-all \(J_{ij}\) interactions are sampled from a Gaussian distribution (mean \(\mu=0.0\), width \(\sigma=1.0\)) and the transverse field \(h\) is fixed to be zero. These models can be mapped to MaxCut problems with randomly generated graphs [84]. For the circuit ansatz, a fixed number of brickwork layers is used (\(L_{s}=4\)) to keep this portion of the circuit shallow, while the all-to-all entangling portion of the circuit is made up of \(L_{C}=N\) layers. The parameterized single qubit rotations within each layer are selected to be one rotation about \(x\) followed by one rotation about \(y\), and the entangling gates are selected to be controlled \(z\) (CZ) rotations. All simulations are performed numerically using PennyLane [65]. Lastly, note that parameter convergence (evaluated every 100 iterations) is used as the stopping criterion for both the fragmented circuit and full circuit optimization, with a maximum of 5000 iterations permitted.
To assess the performance of pre-training using circuit fragmentation, the same circuit is optimized using random initial values (referred to as "vanilla VQE"). Fig. 8 provides a case-by-case comparison between fragment-initialized VQE and vanilla VQE for 500 such models, for circuits of up to 15 qubits. In the top panels, the final percent error \(\epsilon=(E-E_{0})/|E_{0}|\) (where \(E_{0}\) is the true ground state energy) is plotted for both approaches, along with the geometric mean of the results. The geometric mean of the fragment-initialized final error lies roughly three orders of magnitude below that of the vanilla VQE, with this gap growing even larger with increasing system size. For the larger system sizes, the vanilla VQE struggles to find a solution having \(\epsilon<10^{-2}\), while the fragment-initialized approach reaches \(\epsilon\sim 10^{-7}\) for the same problem Hamiltonian. Moreover, using the same stopping criterion, the fragment-initialized VQE reaches this solution in fewer iterations (\(N_{iter}\)), decreasing the average number by nearly an order of magnitude, as illustrated by the bottom panels of Fig. 8. After successful pre-training, the parameters of the stitched-together circuit produce a loss that is already in the neighborhood of the minimum, so fewer iterations are required to reach convergence. For this simulation, we employ a batched optimization of \(T\) different fragmented circuits performed in parallel. See D.1 for a description of this approach.

Figure 7: The circuit ansatz is built from \(L_{s}\) layers \(l(\theta)\) with linear entangling gates, which are amenable to fragmentation. These are followed by a set of \(L_{C}\) layers \(l(\phi)\) with an all-to-all entangling architecture. Only the brickwork layers parameterized by \(\theta_{i}\) are pre-trained using the fragmented scheme, while the layers parameterized by \(\phi_{j}\) are employed only in the final training process.
#### 5.2.2 Solving MaxCut with Mean-Field Terms
Our modification of fragmented loss functions to replace missing (that is, inaccessible) interactions with mean-field terms is critical to the success of pre-training. We here demonstrate that when there is no transverse field (as is the case for Ising-like Hamiltonians that map to classical graph problems), mean-field replacement of interactions results in a ground state and ground state energy that coincide with those of the exact Hamiltonian. This can be shown using a simple logical argument. First, it is well-established that the ground state of a classical Ising Hamiltonian will be a computational basis state - indeed, this is why the ground state can be mapped to the solution of a classical problem. We denote the ground state by \(|x^{*}\rangle\). The ground state energy is simply a sum of the expected values of weighted \(ZZ\) interactions, taken with respect to the computational basis state \(|x^{*}\rangle\): \(E_{g}=-\sum_{\langle i,j\rangle}J_{ij}\langle x^{*}|\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}|x^{*}\rangle\). Notice that for any computational basis state \(|x\rangle\), the value of the expectation of a \(ZZ\) interaction exactly equals the product of the expectations of the individual \(Z\) operators; that is, \(\langle x|\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}|x\rangle=\langle x|\hat{S}_{z}^{(i)}|x\rangle\langle x|\hat{S}_{z}^{(j)}|x\rangle\). Thus, if any weighted interaction \(J_{ij}\langle\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}\rangle\) is replaced by its mean-field counterpart \(J_{ij}\langle\hat{S}_{z}^{(i)}\rangle\langle\hat{S}_{z}^{(j)}\rangle\), the resultant energy is unchanged: \(E_{g}=\langle x^{*}|H|x^{*}\rangle=\langle x^{*}|H_{MF}(|x^{*}\rangle)|x^{*}\rangle\), where \(H_{MF}\) is the union of the fragmented, mean-field corrected Hamiltonians \(\{H_{MF}^{(f)}\}\) and we have explicitly included the state dependence due to the presence of mean-field terms. Having established this fact, we must now show that \(|x^{*}\rangle\) is the ground state of \(H_{MF}\), such that \(\langle x^{*}|H_{MF}(|x^{*}\rangle)|x^{*}\rangle\leq\langle\psi|H_{MF}(|\psi\rangle)|\psi\rangle\ \forall\ |\psi\rangle\). Observe that the quantity \(\langle\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}\rangle\) is bounded by \(\pm 1/4\) and equals one of these extremum values for any computational basis state. The mean-field counterpart \(\langle\hat{S}_{z}^{(i)}\rangle\langle\hat{S}_{z}^{(j)}\rangle\) shares the same bounds; therefore, we cannot expect any state \(|\psi\rangle\) to produce a smaller energy \(\langle\psi|H_{MF}(|\psi\rangle)|\psi\rangle\) than \(|x^{*}\rangle\), the ground state of the full Hamiltonian.

Figure 8: Comparison between fragment pre-trained VQE and vanilla VQE for 500 different \(J_{ij}\) matrices (graphs). The full PQC is split into fragments with \(N_{a}=2\) and at most \(N_{f}=3\) during pre-training. A total of \(T=10\) different partitionings are considered, and the best pre-trained solution is used to initialize the final optimization. The final percent error \(\epsilon\) is provided in the top panel, while the required number of iterations \(N_{iter}\) to reach convergence is provided in the bottom panel. For the fragment initialized case, these metrics refer to the full circuit training that occurs after pre-training. Fragment pre-training reduces the geometric mean of \(\epsilon\) by orders of magnitude, even as the system size increases. Likewise, the mean number of required iterations is reduced by nearly an order of magnitude.
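The factorization step in the argument above is straightforward to verify numerically; the following self-contained check (our own construction) confirms that \(\langle\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}\rangle=\langle\hat{S}_{z}^{(i)}\rangle\langle\hat{S}_{z}^{(j)}\rangle\) on any computational basis state, which is precisely what makes the mean-field replacement exact at \(h=0\).

```python
import numpy as np

Z = np.diag([0.5, -0.5])   # S_z with eigenvalues +-1/2
I2 = np.eye(2)

def op_on(n, ops):
    """Kronecker product placing the single-qubit matrices in `ops`
    (a dict qubit -> 2x2 matrix) on an n-qubit register, identity
    elsewhere; qubit 0 is the most significant factor."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

n = 4
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=n)
psi = np.zeros(2 ** n)
psi[int("".join(map(str, bits)), 2)] = 1.0   # |x*> as a state vector

for i in range(n):
    for j in range(i + 1, n):
        zz = psi @ op_on(n, {i: Z, j: Z}) @ psi
        zizj = (psi @ op_on(n, {i: Z}) @ psi) * (psi @ op_on(n, {j: Z}) @ psi)
        assert np.isclose(zz, zizj)   # <Sz_i Sz_j> = <Sz_i><Sz_j> on |x>
print("mean-field replacement is exact on computational basis states")
```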
The above is a central reason for the success of our fragmented training: for Ising-like models with zero transverse field, optimizing a Hamiltonian with mean-field corrections will solve the original problem mapped to the full Hamiltonian. Two potential error sources can arise: 1) the state produced by stitching the optimized circuits together can differ from the output of the individual circuits, and 2) the fragmented optimization may have limited success, e.g., by landing in a local minimum or stalling in a barren plateau. A balance should be struck between these complications: the first error source can be mitigated by considering larger fragments with a larger number of auxiliary qubits or possibly by limiting the number of inter-fragment unitaries, as done in [93], while the second can be mitigated by considering smaller fragments with fewer circuit parameters.
#### 5.2.3 Mean-Field Terms as First-Order Perturbation Corrections
We now use perturbation theory to elucidate our technique of replacing multi-qubit interactions with mean fields when \(h\neq 0\). In the previous section, it is established that the ground state and ground state energy of an Ising-like Hamiltonian with zero transverse field remain unchanged when one or more of the interactions are replaced by the corresponding mean-field approximation term. Following a similar argument, one can further establish that the computational basis states are stationary states of the mean-field corrected Hamiltonian \(H_{MF}(|\psi\rangle)\), and therefore \(H_{MF}(|\psi\rangle)\) and the unaltered Hamiltonian \(H\) share the same spectrum and set of eigenstates (although this term is used loosely for \(H_{MF}(|\psi\rangle)\), as the dependence on \(|\psi\rangle\) causes the stationary Schrodinger equation to deviate from a linear eigenvalue problem).
In this section, we consider adding a small transverse field to the classical Ising-like model, propelling the problem into the quantum domain. The first-order corrections to the ground state \(|x^{*}\rangle\) and ground state energy \(E_{g}\) are computed using perturbation theory. The case of the mean-field corrected Hamiltonian \(H_{MF}(|\psi\rangle)\) is treated with a version of perturbation theory modified to accommodate mean-field terms, and notably, the same first-order corrections to
\(|x^{*}\rangle\) and \(E_{g}\) are recovered. For a full derivation, please refer to C.
Adding a transverse field, the unaltered Hamiltonian containing all interactions is given by:
\[H=H_{0}+H_{I}+\lambda V, \tag{12}\]
where \(H_{0}\) contains the intra-fragment interactions:
\[H_{0}=-\sum_{\langle i,j\rangle\notin I}J_{ij}\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}, \tag{13}\]
\(H_{I}\) contains the inter-fragment interactions:
\[H_{I}=-\sum_{\langle i,j\rangle\in I}J_{ij}\hat{S}_{z}^{(i)}\hat{S}_{z}^{(j)}, \tag{14}\]
\(V\) contains the perturbing transverse field:
\[V=-h\sum_{i}\hat{S}_{x}^{(i)}, \tag{15}\]
and \(\lambda\) is a perturbation parameter. We remind the reader that the inter-fragment interactions \(H_{I}\) are those that will be replaced by mean-field corrections.
In contrast, the mean-field corrected Hamiltonian denoted \(H_{MF}\) is given by:
\[H_{MF}(|\psi\rangle)=H_{0}+H_{I,MF}(|\psi\rangle)+\lambda V, \tag{16}\]
where the form of the Hamiltonian now depends on the state of the system due to the mean-field corrections:
\[H_{I,MF}(|\psi\rangle)=-\sum_{\langle i,j\rangle\in I}J_{ij}\hat{S}_{z}^{(i)} \langle\psi|\hat{S}_{z}^{(j)}|\psi\rangle. \tag{17}\]
Before any corrections can be computed, it is imperative to establish the correct zeroth order energies and eigenstates for each Hamiltonian. Following perturbation theory, the zeroth order eigenstates of \(H\) and \(H_{MF}\) generally equal those of the unperturbed counterparts (that is, taking \(h=0\)); these coincide with the set of computational basis states \(\{|x\rangle\}\) - including the unperturbed ground state, \(|x^{*}\rangle\). However, there are degeneracies in the unperturbed Hamiltonians, and thus, degenerate perturbation theory is required.
When the unperturbed spectrum contains degeneracies, the proper linear combinations of the unperturbed eigenstates forming the degenerate subspace must be determined; these are the states that the perturbed eigenstates approach as \(h\to 0\). The unperturbed Ising-like model possesses \(\mathbb{Z}_{2}\) symmetry. Practically, this means that for each eigenstate \(|x\rangle\), the "flipped" eigenstate \(|\bar{x}\rangle:=\bigotimes_{i}X_{i}|x_{i}\rangle\) is degenerate. For the unaltered Ising-like model \(H\), the proper zeroth order eigenstates for the degenerate subspace containing the ground state are given by \(|\pm_{x^{*}}\rangle=\frac{1}{\sqrt{2}}(|x^{*}\rangle\pm|\bar{x}^{*}\rangle)\). The transverse field will break the ground state degeneracy of \(H\), and the positive superposition \(|+_{x^{*}}\rangle\) is preferred by the ground state.
Shifting attention to the mean-field corrected Hamiltonian \(H_{MF}\), the stationary Schrodinger equation is no longer linear in \(|\psi\rangle\), and the linearity that characterizes quantum
mechanics no longer applies. The notion of finding proper linear combinations is not an appropriate procedure due to the problem's nonlinearity. In particular, superpositions of degenerate eigenstates can yield different energies for \(H_{MF}\) and thus effectively exist outside the degenerate subspace.
To illustrate this, consider a single mean-field factor, \(\langle\hat{S}_{z}^{(i)}\rangle\), such as those within \(H_{MF}\). While the expectation value of this quantity with respect to a computational basis state \(|x^{*}\rangle\) yields
\[\langle x^{*}|\hat{S}_{z}^{(i)}|x^{*}\rangle=\frac{1}{2}(-1)^{x^{*}_{i}}, \tag{18}\]
evaluating the same term with respect to \(|+_{x^{*}}\rangle\) leads to the term vanishing as
\[\langle+_{x^{*}}|\hat{S}_{z}^{(i)}|+_{x^{*}}\rangle =\frac{1}{2}\big{(}\langle x^{*}|\hat{S}_{z}^{(i)}|x^{*}\rangle+ \langle x^{*}|\hat{S}_{z}^{(i)}|\bar{x}^{*}\rangle+\langle\bar{x}^{*}|\hat{S}_ {z}^{(i)}|x^{*}\rangle+\langle\bar{x}^{*}|\hat{S}_{z}^{(i)}|\bar{x}^{*}\rangle \big{)} \tag{19}\] \[=\frac{1}{4}\big{(}(-1)^{x^{*}_{i}}+(-1)^{\bar{x}^{*}_{i}}\big{)}\] \[=0.\]
Notably for the ground state of the unperturbed Hamiltonian \(|x^{*}\rangle\), this means that the pure computational basis states \(|x^{*}\rangle,|\bar{x}^{*}\rangle\) are energetically preferred over any linear combination of them. Thus, for \(H_{MF}\), the computational basis states remain the proper zeroth order eigenstates with a perturbative transverse field.
After establishing the zeroth order eigenstates and eigenenergies (\(|k^{(0)}\rangle\) and \(E_{k}^{(0)}\), respectively) of conventional Hamiltonians such as Eq. 12, perturbation theory proceeds by expanding \(|k\rangle\) and \(E\) in \(\lambda\) in the stationary Schrodinger equation and equating orders of \(\lambda\):
\[\begin{split}\Big{(}H_{0}&+H_{I}+\lambda V\Big{)} \big{(}|k^{(0)}\rangle+\lambda|k^{(1)}\rangle+\lambda^{2}|k^{(2)}\rangle+ \cdots\big{)}\\ &=\big{(}E_{k}^{(0)}+\lambda E_{k}^{(1)}+\lambda^{2}E_{k}^{(2)}+ \cdots\big{)}\big{(}|k^{(0)}\rangle+\lambda|k^{(1)}\rangle+\lambda^{2}|k^{(2) }\rangle+\cdots\big{)}.\end{split} \tag{20}\]
Following this procedure for \(H\) and carefully treating the degeneracy, the first-order energy correction \(E_{k}^{(1)}\) vanishes and the first-order eigenstate correction takes the form:
\[|k^{(1)}\rangle=\sum_{m\notin D_{k}}\frac{\langle m^{(0)}|V|k^{(0)}\rangle}{E _{k}^{(0)}-E_{m}^{(0)}}|m^{(0)}\rangle, \tag{21}\]
where \(D_{k}\) represents the degenerate subspace that \(|k^{(0)}\rangle\) occupies.
To derive the analogous correction to \(H_{MF}\), we employ a modified approach to perturbation theory that can accommodate the nonlinearity of the stationary Schrodinger equation. In particular, the expanded form of \(|k_{MF}\rangle\) is explicitly inserted into the state-dependent terms of \(H_{MF}\) prior to equating orders of \(\lambda\) to compute the corrections. Following this procedure, the first order energy correction \(E_{k,MF}^{(1)}\) again vanishes, and the first order eigenstate correction takes on an identical form to that of \(H\):
\[|k_{MF}^{(1)}\rangle=\sum_{m\notin D_{k}}\frac{\langle m^{(0)}_{MF}|V|k_{MF}^ {(0)}\rangle}{E_{k,MF}^{(0)}-E_{m,MF}^{(0)}}|m^{(0)}_{MF}\rangle. \tag{22}\]
There is one crucial difference between Eq. (21) and Eq. (22): the zeroth order eigenstates \(|m^{(0)}\rangle\) and \(|m^{(0)}_{MF}\rangle\). For the full Hamiltonian, each \(|m^{(0)}\rangle\) has the form \(|\pm_{y}\rangle\propto(|y\rangle\pm|\bar{y}\rangle)\), while each \(|m^{(0)}_{MF}\rangle\) is a single computational basis state, \(|y\rangle\). This causes the fidelity between the first order ground state \(|\psi_{g,MF}\rangle\propto|x^{*}\rangle+|x^{*(1)}_{MF}\rangle\) and that of the full Hamiltonian \(|\psi_{g}\rangle\propto|+_{x^{*}}\rangle+|+_{x^{*}}^{(1)}\rangle\) to be \(F=0.5\) rather than perfect unity (see C.3). Nonetheless, half overlap provides significant information about the true ground state for pre-training.
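Heuristically, the half overlap can be read off at zeroth order (a sketch that ignores the first-order corrections and normalization effects; the full computation is in C.3):

\[F=\big|\langle\psi_{g}|\psi_{g,MF}\rangle\big|^{2}\approx\big|\langle+_{x^{*}}|x^{*}\rangle\big{|}^{2}=\Big|\tfrac{1}{\sqrt{2}}\big(\langle x^{*}|+\langle\bar{x}^{*}|\big)|x^{*}\rangle\Big|^{2}=\frac{1}{2}.\]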
In Fig. 9, we examine the mean performance of the pre-training scheme as a function of the transverse field strength \(h\). The case of neglecting mean-field terms during pre-training is also considered to highlight the vital role these corrections play at small values of \(h\). When \(h=0\), the model is classical, and the global fragmented Hamiltonian with mean-field corrections shares the same ground state and ground state energy as the full model, as discussed in Section 5.2.2. This leads to remarkable pre-training performance, even as \(N\) is increased. The fragment initialization that neglects mean-field terms is not guaranteed to share the ground state of the full model - in fact, the two are likely to be orthogonal. The pre-training will still feature low entanglement, which likely explains why the mean-field-free initialization scheme outperforms random initialization, but overall, the average error exceeds that of the mean-field corrected case by orders of magnitude, particularly as \(N\) is increased. When a small transverse field is added to the model, the half overlap between the fragmented and full Hamiltonian ground states provided by including mean-field corrections leads to error orders of magnitude smaller than the other approaches. Only as \(h\) is further increased - entering the regime where first-order perturbation theory is inadequate to describe the ground state - do the approaches begin to perform comparably, with increasing error and required iterations.

Figure 9: Comparison between fragment pre-trained VQE (with and without mean-field corrections) and vanilla VQE performance as a function of the transverse field strength \(h\). Each point is the mean (geometric or arithmetic) performance of 500 different \(J_{ij}\) matrices (graphs). The full PQC is split into fragments with \(N_{a}=2\) and at most \(N_{f}=3\) during pre-training. A total of \(T=10\) different partitionings are considered, and the best pre-trained solution is used to initialize the final optimization. The mean final percent error \(\epsilon\) is provided in the top panel, while the mean required number of iterations \(N_{iter}\) to reach convergence is provided in the bottom panel.
## 6 Conclusion
We have presented two near-term approaches to the distributed Hamiltonian evolution of a quantum system and a pre-training technique for variational quantum circuits. Our time evolution schemes are built upon the idea that the relative importance of interactions spanning a sub-system and its environment can be ascertained using the principle of minimizing the short-time evolution error, which is derived to be proportional to the quantum variance of the difference between the full and fragmented Hamiltonians. The first scheme employs only classical information transfer in the form of mean-field measurements to update mean-field corrections, as well as a limited number of auxiliary qubits anchored to each fragment, enabling limited entanglement growth. Although our method is lossy, metrics local to the system qubits addressed by a single fragment can closely mimic the true values from exact evolution, including the gold standard comparison of state fidelity. Moreover, this scheme is flexible, as it is amenable to any mixture of classical and quantum hardware and can process the fragments in series or parallel. In our second scheme, the information stored by qubits designated to be auxiliaries is physically shared between fragments, either through qubit shuttling or quantum teleportation. This approach is appropriate when quantum hardware is available and limited quantum communication is feasible. If desired, the choice of which qubits act as auxiliaries can be updated from one time step to the next, as dictated by the minimum error rule, to extend the performance of the approximate scheme.
Finally, we examine how our fragmented simulation scheme can be modified to apply to quantum circuits. Here, a single circuit is fractured into several smaller overlapping circuits, which are more manageable (requiring lower connectivity and being less prone to noise and barren plateaus) and, if sufficiently small, even classically treatable. We devise a scheme that employs fragmented circuits to pre-train the parameters of the full PQC. Crucially, the use of overlapping registers coupled with the mean-field corrective terms in the loss function links the optimization of the individual circuits. The inclusion of mean-field corrections shifts the solution of the collective circuit optimization to have a large overlap with the solution of the full problem. We demonstrate that the pre-training scheme reduces both the final percent error (by orders of magnitude) and the number of required iterations when compared to randomly initialized full circuit optimizations of VQE. Although the scheme's performance is particularly strong for classical Ising Hamiltonians, we develop a non-linear perturbation theory to analytically show that the mean-field terms included in optimization act as first-order perturbation corrections when a small transverse field is added, extending the success of the scheme into the quantum realm.
This manuscript motivates and facilitates numerous future research directions. Although we emphasized limited quantum information transfer, subsequent studies might explore how the number of auxiliary qubits and the frequency of re-encoding affect distributed simulation,
or devise the details of physically implementing a limited quantum channel. Likewise, higher-order moments beyond mean-field terms may be explored as higher-order corrections. Moreover, rather than using auxiliary qubits to target specific environment qubits, a method of mapping salient environment _states_ to auxiliary qubits (as employed by some classical fragmentation methods such as DMET [57]) may further improve the method, although the measurement-efficiency of such a technique may prove challenging in a quantum setting. Regarding fragment pre-training for variational algorithms, future works might develop efficient circuit fragmentations that are tailored to specific problem Hamiltonians and/or symmetries, rather than our more general, batched approach. Finally, alternative partitioning schemes might be considered to enable pre-training of non-brickwork circuits.
This work represents a pivotal stepping stone on the path to large-scale distributed quantum computing. In the near term, our distributed computing method with classical channels can be implemented by a single small simulator in sequence, or by a collection of small simulators that are either quantum or classical in nature. This permits the simulation of large-system quantum dynamics without the noise and connectivity concerns of a large-scale quantum device [8], allowing experimentalists to address challenging problems in quantum chemistry and condensed matter physics [94, 95]. As non-local operations on quantum hardware improve, our proposal for limited quantum information transfer can be implemented, enabling cross-simulator measurements and higher accuracy. Lastly, our fragmented pre-training method can reduce the error of large-scale variational quantum algorithms by orders of magnitude while reducing the number of training epochs. Such improvements are vital to this field, which seeks to address problems ranging from drug discovery to NP-hard optimization on quantum hardware despite persistent training difficulties [47, 96, 97, 98].
## 7 Acknowledgements
This work was done during A.M.G.'s internship at NVIDIA. A.M.G. acknowledges support from the National Science Foundation (NSF) through the Graduate Research Fellowships Program, as well as support through the Theodore H. Ashford Fellowships in the Sciences. At Caltech, A.A. is supported in part by the Bren-endowed chair. S.F.Y. thanks the AFOSR and the NSF (through the CUA PFC and QSense QLCI) for funding.
2309.14946 | Global well-posedness of the energy-critical stochastic nonlinear wave
equations | We consider the Cauchy problem for the defocusing energy-critical stochastic
nonlinear wave equations (SNLW) with an additive stochastic forcing on
$\mathbb{R}^{d}$ and $\mathbb{T}^{d}$ with $d \geq 3$. By adapting the
probabilistic perturbation argument employed in the context of the random data
Cauchy theory by B\'enyi-Oh-Pocovnicu (2015) and Pocovnicu (2017) and in the
context of stochastic PDEs by Oh-Okamoto (2020), we prove global well-posedness
of the defocusing energy-critical SNLW. In particular, on $\mathbb{T}^d$, we
prove global well-posedness with the stochastic forcing below the energy space. | Enguerrand Brun, Guopeng Li, Ruoyuan Liu | 2023-09-26T14:01:05Z | http://arxiv.org/abs/2309.14946v2 | # Global well-posedness of the energy-critical stochastic nonlinear wave equations
###### Abstract.
We consider the Cauchy problem for the defocusing energy-critical stochastic nonlinear wave equations (SNLW) with an additive stochastic forcing on \(\mathbb{R}^{d}\) and \(\mathbb{T}^{d}\) with \(d\geq 3\). By adapting the probabilistic perturbation argument employed in the context of the random data Cauchy theory by Benyi-Oh-Pocovnicu (2015) and Pocovnicu (2017) and in the context of stochastic PDEs by Oh-Okamoto (2020), we prove global well-posedness of the defocusing energy-critical SNLW. In particular, on \(\mathbb{T}^d\), we prove global well-posedness with the stochastic forcing below the energy space.
Key words and phrases: stochastic nonlinear wave equation; global well-posedness; energy-critical; perturbation theory. 2020 Mathematics Subject Classification: 35L71, 35R60, 60H15.
###### Contents
* 1 Introduction
* 2 Preliminary results and lemmas
* 2.1 Preliminary lemmas on Sobolev spaces
* 2.2 Previous results on wave equations
* 2.3 Regularity of stochastic convolutions
* 3 Local well-posedness of the energy-critical stochastic NLW
* 4 Proof of global well-posedness
* 4.1 Global well-posedness on \(\mathbb{R}^{d}\)
* 4.2 Global well-posedness on \(\mathbb{T}^{d}\)
## 1. Introduction
We consider the following Cauchy problem for the defocusing energy-critical stochastic nonlinear wave equation (SNLW) on \(\mathcal{M}=\mathbb{R}^{d}\) or \(\mathbb{T}^{d}\) (with \(\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}\)) for \(d\geq 3\):
\[\left\{\begin{aligned} &\partial_{t}^{2}u-\Delta u+|u|^{\frac{4}{d-2}}u= \phi\xi\\ &(u,\partial_{t}u)|_{t=0}=(u_{0},u_{1}),\end{aligned}\right. \tag{1.1}\]
where \(u\) is real-valued, \(\xi\) is the space-time white noise on \(\mathbb{R}_{+}\times\mathcal{M}\), and \(\phi\) is a bounded operator on \(L^{2}(\mathcal{M})\). The aim of this paper is to show global well-posedness of (1.1).
Let us first mention some backgrounds on the energy-critical NLW. Consider the following deterministic defocusing NLW on \(\mathbb{R}^{d}\) with \(d\geq 3\):
\[\partial_{t}^{2}u-\Delta u+|u|^{\frac{4}{d-2}}u=0. \tag{1.2}\]
It is well known that the following dilation symmetry for \(\lambda>0\)
\[u(t,x)\mapsto u_{\lambda}(t,x):=\lambda^{\frac{d-2}{2}}u(\lambda t,\lambda x)\]
maps solutions of NLW (1.2) to solutions of NLW (1.2). A direct computation yields
\[\|u\|_{\dot{H}^{1}(\mathbb{R}^{d})}=\|u_{\lambda}\|_{\dot{H}^{1}(\mathbb{R}^{d})},\]
so that the scaling critical Sobolev regularity for NLW (1.2) is \(s_{c}=1\). Also, the energy defined by
\[E(\vec{u})=E(u,\partial_{t}u):=\frac{1}{2}\int(\partial_{t}u)^{2}dx+\frac{1}{2 }\int|\nabla u|^{2}dx+\frac{d-2}{2d}\int|u|^{\frac{2d}{d-2}}dx \tag{1.3}\]
is conserved under the flow of (1.2). In view of the Sobolev embedding \(\dot{H}^{1}(\mathbb{R}^{d})\hookrightarrow L^{\frac{2d}{d-2}}(\mathbb{R}^{d})\), we see that \(E(\vec{u})<\infty\) if and only if \(\vec{u}\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d}):=\dot{H}^{1}(\mathbb{R}^{d}) \times L^{2}(\mathbb{R}^{d})\). For this reason, we refer to \(\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\) as the energy space for NLW (1.2). Moreover, we say that NLW (1.2) is _energy-critical_. On \(\mathbb{T}^{d}\), although there is no dilation symmetry, the heuristics provided by the scaling analysis still hold and we say that (1.2) on \(\mathbb{T}^{d}\) is energy-critical.
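For the reader's convenience, we record the computation behind the scaling invariance noted above: since \(\nabla u_{\lambda}(t,x)=\lambda^{d/2}(\nabla u)(\lambda t,\lambda x)\), the change of variables \(y=\lambda x\) gives

\[\|u_{\lambda}(t,\cdot)\|_{\dot{H}^{1}(\mathbb{R}^{d})}^{2}=\lambda^{d}\int_{\mathbb{R}^{d}}|(\nabla u)(\lambda t,\lambda x)|^{2}dx=\int_{\mathbb{R}^{d}}|(\nabla u)(\lambda t,y)|^{2}dy=\|u(\lambda t,\cdot)\|_{\dot{H}^{1}(\mathbb{R}^{d})}^{2}.\]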
Note that for energy-subcritical NLW, after proving local well-posedness, one can easily obtain global well-posedness by using the conservation of the energy, which provides an a priori control of the \(\dot{\mathcal{H}}^{1}\)-norm of the solution. For the energy-critical NLW, however, there is a delicate balance between the linear and nonlinear parts of the equation. In the energy-critical setting, the energy conservation alone is not enough to obtain global well-posedness, which makes the problem quite intricate. Still, after substantial efforts of many mathematicians, we now know that the energy-critical defocusing NLW (1.2) is globally well-posed in \(\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\) and all solutions in the energy space scatter. See [53, 20, 21, 51, 52, 25, 2, 1, 30, 31, 54]. On the other hand, in the periodic setting, the global well-posedness results for (1.2) in \(\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\) immediately imply corresponding global well-posedness for (1.2) in \(\mathcal{H}^{1}(\mathbb{T}^{d})\), thanks to the finite speed of propagation. We also point out that these well-posedness results for (1.2) in the energy space are sharp in the sense that ill-posedness for (1.2) on \(\mathbb{R}^{d}\) occurs below the energy space; see [11, 18, 37].
Let us now go back to the defocusing energy-critical NLW with a stochastic forcing (1.1). Well-posedness theory of SNLW has been studied extensively; see [49, 48, 44, 10, 45, 14, 15, 46, 47, 5, 6] and the references therein. More recently, there has been a significant development in the well-posedness theory of singular SNLW with an additive stochastic forcing; see [22, 23, 24, 43, 41, 35, 36, 42].
Our main goal in this paper is to prove global well-posedness of the defocusing energy-critical SNLW (1.1). We say that \(u\) is a solution to (1.1) if it satisfies the following Duhamel formulation (or mild formulation):
\[u(t)=V(t)(u_{0},u_{1})-\int_{0}^{t}S(t-t^{\prime})\big{(}|u|^{ \frac{4}{d-2}}u\big{)}(t^{\prime})dt^{\prime}+\int_{0}^{t}S(t-t^{\prime}) \phi\xi(dt^{\prime}), \tag{1.4}\]
where
\[S(t)=\frac{\sin(t|\nabla|)}{|\nabla|}\quad\text{and}\quad V(t)( \phi_{0},\phi_{1})=\partial_{t}S(t)\phi_{0}+S(t)\phi_{1}. \tag{1.5}\]
The last term on the right-hand side of (1.4) is called the stochastic convolution, which we denote by
\[\Psi(t)=\int_{0}^{t}S(t-t^{\prime})\phi\xi(dt^{\prime}). \tag{1.6}\]
See (2.2) in Subsection 2.1 for a precise definition.
On \(\mathbb{R}^{d}\), we obtain the following global well-posedness result.
**Theorem 1.1**.: _Let \(d\geq 3\) and \(\phi\in\text{HS}(L^{2}(\mathbb{R}^{d}),L^{2}(\mathbb{R}^{d}))\). Then, the defocusing energy-critical SNLW (1.1) is globally well-posed in \(\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\) in the sense that the following statement holds true almost surely; given \((u_{0},u_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\), there exists a global-in-time solution \(u\) to (1.1) with \((u,\partial_{t}u)|_{t=0}=(u_{0},u_{1})\)._
Here, the assumption on \(\phi\) is chosen in such a way that the stochastic convolution \(\Psi\) lies in the energy space \(\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\). See Lemma 2.6 below.
For the proof of Theorem 1.1, we first establish local well-posedness of (1.1) in Section 3 and then prove global well-posedness of (1.1) in Subsection 4.1 below. To prove well-posedness of (1.1), we use the first order expansion \(u=\Psi+v\) and consider the following equation for \(v\):
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+|v+\Psi|^{ \frac{4}{d-2}}(v+\Psi)=0\\ &(v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}).\end{aligned}\right. \tag{1.7}\]
By viewing (1.7) as an energy-critical NLW for \(v\) with a perturbation term, we adapt the _probabilistic perturbation theory_ developed in [4, 50] in the context of random data Cauchy theory. See Lemma 2.5 below. The perturbation theory has also been previously used to prove global well-posedness for other equations in deterministic settings, stochastic settings, or random initial data settings. See [55, 13, 27, 28, 4, 33, 32].
In order to apply the perturbation argument for the equation (1.7), we need to make sure that the stochastic convolution \(\Psi\) is small on short time intervals. This can be done since \(\Psi\) can be bounded in Strichartz spaces (i.e. \(L^{q}\) in time and \(L^{r}\) in space), as in [32]. Nevertheless, due to the complicated nature of the wave equations, establishing the space-time regularity of \(\Psi\) is non-trivial. See Lemma 2.6 for more details.
Another important ingredient in carrying out the perturbation argument in our setting is an _a priori energy bound_. This is achieved by a rigorously justified application of Ito's lemma. See also [17, 32, 8, 9].
We now switch our attention to the periodic setting. Here, we assume that \(\phi\) is a Fourier multiplier operator on \(L^{2}(\mathbb{T}^{d})\). Namely, for any \(n\in\mathbb{Z}^{d}\),
\[\phi e^{in\cdot x}=\widehat{\phi}_{n}e^{in\cdot x}\]
for some \(\widehat{\phi}_{n}\in\mathbb{C}\).
**Theorem 1.2**.: _Let \(d\geq 3\), \(\phi\) be a Fourier multiplier operator on \(L^{2}(\mathbb{T}^{d})\), and \(\phi\in\text{HS}(L^{2}(\mathbb{T}^{d}),H^{s}(\mathbb{T}^{d}))\) with \(s\in\mathbb{R}\) satisfying_
\[\text{(i)}\ d=3:\ s>-\frac{1}{2}\quad\text{or}\quad\text{(ii)}\ d\geq 4:\ s>-1.\]
_Then, the defocusing energy-critical SNLW (1.1) is globally well-posed in \(\mathcal{H}^{1}(\mathbb{T}^{d})\) in the sense that the following statement holds true almost surely; given \((u_{0},u_{1})\in\mathcal{H}^{1}(\mathbb{T}^{d})\), there exists a global-in-time solution \(u\) to (1.1) with \((u,\partial_{t}u)|_{t=0}=(u_{0},u_{1})\)._
Note that compared to the \(\mathbb{R}^{d}\) case, there is an improvement in the regularity assumption of the noise term. This is mainly because of the better space-time regularity of the stochastic convolution \(\Psi\) on a bounded domain. See Lemma 2.8 for more details.
Thanks to the finite speed of propagation, the proof of Theorem 1.2 follows from the same strategies for the \(\mathbb{R}^{d}\) case. However, due to the lower regularity assumption of the noise, the stochastic convolution does not belong to the energy space \(\mathcal{H}^{1}(\mathbb{T}^{d})\). Thus, for the a priori energy bound, instead of using Ito's lemma, we use a Gronwall-type argument developed by [7, 38] in the context of random data Cauchy theory.
We conclude this introduction by stating several remarks.
**Remark 1.3**.: (i) In [38, 50], the authors studied the defocusing energy-critical NLW (1.2) on \(\mathbb{R}^{d}\) with initial data below the energy space. In particular, using the Wiener randomization of the initial data, they proved global well-posedness of (1.2) by establishing an energy bound via a Gronwall-type argument.
However, at this point, we do not know whether one can prove any global well-posedness results for the defocusing energy-critical SNLW (1.1) with initial data below the energy space. The main obstacle is that, even with randomized initial data, the Gronwall-type argument as in [7, 38, 50] is not directly applicable due to the lack of space-time regularity of the stochastic convolution \(\Psi\).
(ii) Compared to the \(\mathbb{R}^{d}\) case, the situation is better on \(\mathbb{T}^{d}\) and we can prove almost sure global well-posedness of the stochastic energy-critical defocusing NLW (1.1) below the energy space. Specifically, we consider the following equation for \(d\geq 3\):
\[\left\{\begin{aligned} &\partial_{t}^{2}u-\Delta u+|u|^{\frac{4}{d-2}}u =\phi\xi\\ &(u,\partial_{t}u)|_{t=0}=(u_{0}^{\omega},u_{1}^{\omega}),\end{aligned}\right. \tag{1.8}\]
where \(\phi\) satisfies the same condition as in Theorem 1.2 and \((u_{0}^{\omega},u_{1}^{\omega})\) is a randomization of \((u_{0},u_{1})\) defined by
\[(u_{0}^{\omega},u_{1}^{\omega}):=\bigg{(}\sum_{n\in\mathbb{Z}^{d}}g_{n,0}( \omega)\widehat{u_{0}}(n)e^{in\cdot x},\sum_{n\in\mathbb{Z}^{d}}g_{n,1}( \omega)\widehat{u_{1}}(n)e^{in\cdot x}\bigg{)},\]
where \(\{g_{n,j}\}_{n\in\mathbb{Z}^{d},j\in\{0,1\}}\) is a sequence of independent mean zero complex-valued random variables conditioned such that \(g_{-n,j}=\overline{g_{n,j}}\) for all \(n\in\mathbb{Z}^{d}\). Moreover, we assume that there exists a constant \(c>0\) such that the probability distributions \(\mu_{n,j}\) of \(g_{n,j}\) satisfy
\[\int e^{\gamma\cdot x}d\mu_{n,j}(x)\leq e^{c|\gamma|^{2}},\quad j=0,1\]
for all \(\gamma\in\mathbb{R}^{2}\) when \(n\in\mathbb{Z}^{d}\setminus\{0\}\) and all \(\gamma\in\mathbb{R}\) when \(n=0\).
Then, due to better integrability of \(V(t)(u_{0}^{\omega},u_{1}^{\omega})\) compared to \(V(t)(u_{0},u_{1})\) for any \(t\geq 0\) (see, for example, [39, Proposition 4.1 and Proposition 4.4]), one can show the following result. When \(d=3\), given \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\) for \(s>\frac{1}{2}\), there exists a global-in-time solution \(u\) to the equation (1.8) almost surely; when \(d\geq 4\), given \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\) for \(s>0\), there exists a global-in-time solution \(u\) to the equation (1.8) almost surely.
**Remark 1.4**.: As stated in Theorem 1.2, we obtain global well-posedness of (1.1) with \(\phi\in\mathit{HS}(L^{2}(\mathbb{T}^{d}),H^{s}(\mathbb{T}^{d}))\), \(s>-1\). It is also possible to handle a rougher noise. Note that when \(s<-1\), the stochastic convolution \(\Psi\) is merely a distribution, and hence a proper
renormalization is needed to make sense of the power-type nonlinearity. For this purpose, the nonlinearity must be an integer power, which restricts our attention to the \(d=4\) case.
It is natural to ask whether it is possible to extend the well-posedness theory of (1.1) on \(\mathbb{T}^{4}\) with the stochastic convolution \(\Psi\) being merely a distribution. The answer is no. Indeed, in a recent preprint [34], the authors showed an ill-posedness result for the following (renormalized) stochastic NLW on \(\mathbb{T}^{d}\):
\[\partial_{t}^{2}u+(1-\Delta)u+u^{k}=\phi\xi,\]
where \(k\geq 2\) is an integer and \(\phi\) is a Fourier multiplier operator with \(\phi\in\text{{HS}}(L^{2}(\mathbb{T}^{d}),H^{s}(\mathbb{T}^{d}))\), \(s<-1\). Hence, our global well-posedness result for the defocusing energy-critical SNLW (1.1) on \(\mathbb{T}^{4}\) is almost sharp.
## 2. Preliminary results and lemmas
In this section, we recall some notations, definitions, useful lemmas, and previous results.
For two positive numbers \(A\) and \(B\), we use \(A\lesssim B\) to denote \(A\leq CB\) for some constant \(C>0\). Also, we use shorthand notations for space-time function spaces, such as \(C_{T}H^{s}_{x}\) for \(C([0,T],H^{s}(\mathbb{R}^{d}))\) or \(C([0,T],H^{s}(\mathbb{T}^{d}))\).
We recall that if \(H_{1}\), \(H_{2}\) are Hilbert spaces, then for a linear operator \(\phi\) from \(H_{1}\) to \(H_{2}\), we denote
\[\|\phi\|_{\text{{HS}}(H_{1},H_{2})}=\Big{(}\sum_{n\in\mathbb{N}}\|\phi e_{n}\| _{H_{2}}^{2}\Big{)}^{1/2}\]
as the Hilbert-Schmidt operator norm of \(\phi\), where \(\{e_{n}\}_{n\in\mathbb{N}}\) is an orthonormal basis of \(H_{1}\).
### Preliminary lemmas on Sobolev spaces
In this subsection, we recall Sobolev spaces on \(\mathbb{R}^{d}\) and \(\mathbb{T}^{d}\) and also some useful estimates.
For \(s\in\mathbb{R}\), we denote by \(\dot{H}^{s}(\mathbb{R}^{d})\) the homogeneous \(L^{2}\)-based Sobolev space with the norm
\[\|f\|_{\dot{H}^{s}(\mathbb{R}^{d})}:=\big{\|}|\xi|^{s}\widehat{f}(\xi)\big{\|} _{L^{2}_{\xi}(\mathbb{R}^{d})},\]
where \(\widehat{f}\) is the Fourier transform of \(f\). We denote by \(H^{s}(\mathbb{R}^{d})\) the inhomogeneous \(L^{2}\)-based Sobolev space with the norm
\[\|f\|_{H^{s}(\mathbb{R}^{d})}:=\big{\|}\langle\xi\rangle^{s}\widehat{f}(\xi) \big{\|}_{L^{2}_{\xi}(\mathbb{R}^{d})},\]
where \(\langle\cdot\rangle=(1+|\cdot|^{2})^{\frac{1}{2}}\). We also define
\[\dot{\mathcal{H}}^{s}(\mathbb{R}^{d}):=\dot{H}^{s}(\mathbb{R}^{d})\times\dot {H}^{s-1}(\mathbb{R}^{d})\quad\text{and}\quad\mathcal{H}^{s}(\mathbb{R}^{d}):= H^{s}(\mathbb{R}^{d})\times H^{s-1}(\mathbb{R}^{d}).\]
For \(1<p\leq\infty\), we denote by \(W^{s,p}(\mathbb{R}^{d})\) the \(L^{p}\)-based Sobolev space with the norm
\[\|f\|_{W^{s,p}(\mathbb{R}^{d})}:=\big{\|}\mathcal{F}^{-1}\big{(}\langle\xi \rangle^{s}\widehat{f}(\xi)\big{)}\big{\|}_{L^{p}(\mathbb{R}^{d})},\]
where \(\mathcal{F}^{-1}\) denotes the inverse Fourier transform.
On \(\mathbb{T}^{d}\), for \(s\in\mathbb{R}\), we denote by \(H^{s}(\mathbb{T}^{d})\) the inhomogeneous \(L^{2}\)-based Sobolev space with the norm
\[\|f\|_{H^{s}(\mathbb{T}^{d})}:=\big{\|}\langle n\rangle^{s}\widehat{f}(n)\big{\|} _{\ell^{2}_{n}(\mathbb{Z}^{d})}.\]
We also define
\[\mathcal{H}^{s}(\mathbb{T}^{d}):=H^{s}(\mathbb{T}^{d})\times H^{s-1}(\mathbb{ T}^{d}).\]
We now recall some useful estimates for Sobolev spaces, starting with the following fractional chain rule. For a proof, see [12].
**Lemma 2.1**.: _Let \(d\geq 1\), \(s\in(0,1)\), and \(r>2\). Let \(1<p,p_{1}<\infty\) and \(1<p_{2}\leq\infty\) satisfying \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\). Let \(F\) denote the function \(F(u)=|u|^{r-1}u\) or \(F(u)=|u|^{r}\). Then, we have_
\[\|F(u)\|_{W^{s,p}(\mathbb{R}^{d})}\lesssim\|u\|_{W^{s,p_{1}}(\mathbb{R}^{d})} \big{\|}|u|^{r-1}\big{\|}_{L^{p_{2}}(\mathbb{R}^{d})}.\]
We also need the following Gagliardo-Nirenberg interpolation inequality. The proof of this inequality follows directly from Sobolev's inequality and interpolation.
**Lemma 2.2**.: _Let \(d\geq 1\), \(1<p_{1},p_{2}<\infty\), and \(s_{1},s_{2}>0\). Let \(p>1\) and \(\theta\in(0,1)\) satisfying_
\[-\frac{s_{1}}{d}+\frac{1}{p}=(1-\theta)\bigg{(}-\frac{s_{2}}{d}+\frac{1}{p_{1 }}\bigg{)}+\frac{\theta}{p_{2}}\quad\text{and}\quad s_{1}\leq(1-\theta)s_{2}.\]
_Then, we have_
\[\|u\|_{W^{s_{1},p}(\mathbb{R}^{d})}\lesssim\|u\|_{W^{s_{2},p_{1}}(\mathbb{R}^ {d})}^{1-\theta}\|u\|_{L^{p_{2}}(\mathbb{R}^{d})}^{\theta}.\]
### Previous results on wave equations
In this subsection, we record some results on wave equations.
Let us first recall the Strichartz estimate for linear wave equations. Let \(\gamma\in\mathbb{R}\) and \(d\geq 2\). We say that a pair \((q,r)\) is \(\dot{H}^{\gamma}(\mathbb{R}^{d})\)-wave admissible if \(q\geq 2\) and \(2\leq r<\infty\) satisfy
\[\frac{1}{q}+\frac{d-1}{2r}\leq\frac{d-1}{4}\quad\text{and}\quad\frac{1}{q}+ \frac{d}{r}=\frac{d}{2}-\gamma.\]
The following Strichartz estimates for wave equations are well-studied. See, for example, [19, 29, 26].
**Lemma 2.3**.: _Let \(\gamma>0\), \(d\geq 2\), \((q,r)\) be \(\dot{H}^{\gamma}(\mathbb{R}^{d})\)-wave admissible, and \((\widetilde{q},\widetilde{r})\) be \(\dot{H}^{1-\gamma}(\mathbb{R}^{d})\)-wave admissible. Let \(u\) be the solution of_
\[\left\{\begin{aligned} &\partial_{t}^{2}u-\Delta u+F=0\\ &(u,\partial_{t}u)|_{t=0}=(u_{0},u_{1})\end{aligned}\right.\]
_on \(I\times\mathbb{R}^{d}\), where \(I\subset\mathbb{R}\) is an interval containing 0. Then, we have_
\[\|u\|_{L^{\infty}_{I}\dot{H}^{\gamma}_{x}}+\|\partial_{t}u\|_{L^{\infty}_{I}\dot{H}^{\gamma-1}_{x}}+\|u\|_{L^{q}_{I}L^{r}_{x}}\lesssim\|u_{0}\|_{\dot{H}^{\gamma}}+\|u_{1}\|_{\dot{H}^{\gamma-1}}+\|F\|_{L^{\widetilde{q}^{\prime}}_{I}L^{\widetilde{r}^{\prime}}_{x}},\]
_where \(\widetilde{q}^{\prime}\) and \(\widetilde{r}^{\prime}\) are Holder conjugates of \(\widetilde{q}\) and \(\widetilde{r}\), respectively._
In the energy-critical setting, we will frequently use the following Strichartz space:
\[\|u\|_{X(I)}=\|u\|_{X(I\times\mathbb{R}^{d})}:=\|u\|_{L^{\frac{d+2}{d-2}}_{I}L^{\frac{2(d+2)}{d-2}}_{x}(I\times\mathbb{R}^{d})}, \tag{2.1}\]
where one can easily check that the pair \((\frac{d+2}{d-2},\frac{2(d+2)}{d-2})\) is \(\dot{H}^{1}\)-wave admissible. Another frequently used pair is \((\infty,2)\), which is \(\dot{H}^{0}\)-wave admissible.
We now recall the following global space-time bound on the solution to the deterministic defocusing energy-critical NLW.
**Lemma 2.4**.: _Let \(d\geq 3\) and \((w_{0},w_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\). Let \(w\) be the solution of the energy-critical defocusing NLW:_
\[\left\{\begin{aligned} &\partial_{t}^{2}w-\Delta w+|w|^{\frac{4}{d-2}}w=0 \\ &(w,\partial_{t}w)|_{t=0}=(w_{0},w_{1}).\end{aligned}\right.\]
_Then,_
\[\|w\|_{X(\mathbb{R}_{+}\times\mathbb{R}^{d})}<C\big{(}\|(w_{0},w_{1})\|_{\dot {\mathcal{H}}^{1}(\mathbb{R}^{d})}\big{)},\]
_where \(C(\cdot)>0\) is a non-decreasing function._
For a proof of Lemma 2.4, see [50, Lemma 4.6] and also the references therein, where the steps can be easily extended to \(d\geq 3\).
Lastly, we recall the following long-time perturbation lemma from [50, Lemma 4.5]. The lemma in [50] was stated for \(d=4,5\), but it can be easily extended to \(d\geq 3\).
**Lemma 2.5**.: _Let \(d\geq 3\), \((v_{0},v_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\), and \(M>0\). Let \(I\subset\mathbb{R}_{+}\) be a compact time interval and \(t_{0}\in I\). Let \(v\) be a solution on \(I\times\mathbb{R}^{d}\) of the following perturbed equation_
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+|v|^{\frac{4}{d-2}}v=f \\ &(v,\partial_{t}v)|_{t=t_{0}}=(v_{0},v_{1})\end{aligned}\right.\]
_with_
\[\|v\|_{X(I\times\mathbb{R}^{d})}\leq M.\]
_Let \((w_{0},w_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\) and let \(w\) be the solution of the defocusing energy-critical NLW:_
\[\left\{\begin{aligned} &\partial_{t}^{2}w-\Delta w+|w|^{\frac{4}{d-2}}w=0 \\ &(w,\partial_{t}w)|_{t=0}=(w_{0},w_{1}).\end{aligned}\right.\]
_Then, there exists \(\widetilde{\varepsilon}(M)>0\) sufficiently small such that if_
\[\|(v_{0}-w_{0},v_{1}-w_{1})\|_{\dot{\mathcal{H}}^{1}} \leq\varepsilon,\] \[\|f\|_{L^{1}_{I}L^{2}_{x}} \leq\varepsilon\]
_for some \(0<\varepsilon<\widetilde{\varepsilon}(M)\), then the following holds:_
\[\|(v-w,\partial_{t}v-\partial_{t}w)\|_{L^{\infty}_{I}\dot{\mathcal{H}}^{1}_{x }}+\|v-w\|_{L^{q}_{I}L^{r}_{x}}\leq C(M)\varepsilon\]
_for all \(\dot{H}^{1}\)-wave admissible pairs \((q,r)\), where \(C(\cdot)\) is a non-decreasing function._
### Regularity of stochastic convolutions
In this subsection, we study regularity properties of several stochastic objects.
We first consider the stochastic convolution \(\Psi\) as defined in (1.6), for which we now provide a more precise definition. By fixing an orthonormal basis \(\{e_{k}\}_{k\in\mathbb{N}}\) of \(L^{2}(\mathbb{R}^{d})\), we define \(W\) as the following cylindrical Wiener process on \(L^{2}(\mathbb{R}^{d})\):
\[W(t):=\sum_{k\in\mathbb{N}}\beta_{k}(t)e_{k}.\]
Here, \(\beta_{k}\) is defined by \(\beta_{k}(t)=\langle\xi,\mathbf{1}_{[0,t]}\cdot e_{k}\rangle_{t,x}\), where \(\langle\cdot,\cdot\rangle_{t,x}\) denotes the duality pairing on \(\mathbb{R}_{+}\times\mathbb{R}^{d}\). As a result, \(\{\beta_{k}\}_{k\in\mathbb{N}}\) is a family of mutually independent complex-valued Brownian
motions conditioned so that \(\beta_{-k}=\overline{\beta_{k}}\) for any \(k\in\mathbb{N}\). In particular, \(\beta_{0}\) is a standard real-valued Brownian motion. The stochastic convolution \(\Psi\) is then given by
\[\Psi(t)=\int_{0}^{t}S(t-t^{\prime})\phi dW(t^{\prime})=\sum_{k\in\mathbb{N}}\int _{0}^{t}S(t-t^{\prime})\phi e_{k}d\beta_{k}(t^{\prime}), \tag{2.2}\]
where \(S\) is the Fourier multiplier as defined in (1.5).
We now show the following lemma, which establishes the regularity property of \(\Psi\).
**Lemma 2.6**.: _Let \(d\geq 1\) and \(T>0\). Suppose that \(\phi\in\text{HS}(L^{2}(\mathbb{R}^{d}),H^{s}(\mathbb{R}^{d}))\) for some \(s\in\mathbb{R}\)._
(i) _We have \(\Psi\in C([0,T];H^{s+1}(\mathbb{R}^{d}))\) almost surely. Moreover, for any finite \(p\geq 1\), we have_
\[\mathbb{E}\Big{[}\sup_{0\leq t\leq T}\|\Psi(t)\|_{H^{s+1}}^{p}\Big{]}\leq C\| \phi\|_{\text{HS}(L^{2},H^{s})}^{p}\]
_for some constant \(C=C(T,p)>0\)._
(ii) _Let \(1\leq q<\infty\), \(d\geq 3\), and \(r_{d}=\frac{2(d+2)}{d-2}\). Let \(\gamma\in(0,1)\) be such that \(1-\gamma\leq\min(\frac{1}{d+2},\frac{6-d}{2(d+2)})\). Then, we have \(\Psi\in L^{q}([0,T];W^{s+1-\gamma,r_{d}}(\mathbb{R}^{d}))\) almost surely. Moreover, for any finite \(p\geq 1\), we have_
\[\mathbb{E}\Big{[}\|\Psi\|_{L^{q}_{T}W^{s+1-\gamma,r_{d}}_{x}}^{p}\Big{]}\leq C \|\phi\|_{\text{HS}(L^{2},H^{s})}^{p} \tag{2.3}\]
_for some constant \(C=C(T,p)>0\)._
Proof.: For part (i), see [16]. For part (ii), the proof is similar to [40, Lemma 2.1 (ii)] with some additional care. We define \(\phi_{\leq K}\) to be such that
\[\phi_{\leq K}e_{k}=\left\{\begin{aligned} &\phi e_{k}&& \text{ if }k\leq K\\ & 0&&\text{ if }k>K,\end{aligned}\right.\]
and we define \(\Psi_{\leq K}\) to be as in (2.2) with \(\phi\) replaced by \(\phi_{\leq K}\). Note that \(\phi_{\leq K}\) converges to \(\phi\) in \(\text{HS}(L^{2},H^{s})\) as \(K\to\infty\). We fix \(1\leq q<\infty\) and choose \(\widetilde{q}=\widetilde{q}(d,\gamma)\geq 2\) such that \((\widetilde{q},r_{d})\) is \(\dot{H}^{\gamma}\)-wave admissible, which can be satisfied given \(1-\gamma\leq\min(\frac{1}{d+2},\frac{6-d}{2(d+2)})\). Then, for \(p\geq\max(q,r_{d})\), by Minkowski's integral inequality, the Gaussian hypercontractivity, and the Itô isometry, we have
\[\begin{split}\big{\|}\|\Psi_{\leq K}\|_{L^{q}_{T}W^{s+1-\gamma,r _{d}}_{x}}\big{\|}_{L^{p}(\Omega)}&\leq\big{\|}\|\langle\nabla \rangle^{s+1-\gamma}\Psi_{\leq K}\|_{L^{p}(\Omega)}\big{\|}_{L^{q}_{T}L^{r_{d}}_ {x}}\\ &\lesssim\big{\|}\|\langle\nabla\rangle^{s+1-\gamma}\Psi_{\leq K} \|_{L^{2}(\Omega)}\big{\|}_{L^{q}_{T}L^{r_{d}}_{x}}\\ &=\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1-\gamma}\phi_{\leq K }e_{k}\|_{\ell^{2}_{k}L^{2}_{r}([0,t])}\big{\|}_{L^{q}_{T}L^{r_{d}}_{x}}\\ &\leq\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1-\gamma}P_{\leq 1 }(\phi_{\leq K}e_{k})\|_{\ell^{2}_{k}L^{2}_{r}([0,t])}\big{\|}_{L^{q}_{T}L^{r_{ d}}_{x}}\\ &\quad+\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1-\gamma}P_{>1}( \phi_{\leq K}e_{k})\|_{\ell^{2}_{k}L^{2}_{r}([0,t])}\big{\|}_{L^{q}_{T}L^{r_{d}} _{x}},\end{split} \tag{2.4}\]
where \(P_{\leq 1}\) denotes the frequency projector onto \(\{|\xi|\leq 1\}\) and \(P_{>1}=\text{Id}-P_{\leq 1}\). For the low frequency part, we apply Minkowski's integral inequality, Bernstein's inequality, and the fact
that \(\big{|}\frac{\sin(\tau|\xi|)}{|\xi|}\big{|}\leq|\tau|\) to obtain
\[\begin{split}&\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1-\gamma}P_{ \leq 1}(\phi_{\leq K}e_{k})\|_{\ell_{k}^{2}L_{\tau}^{2}([0,t])}\big{\|}_{L_{T}^{q}L _{x}^{r_{d}}}\\ &\leq T^{\frac{1}{q}}\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1- \gamma}P_{\leq 1}(\phi_{\leq K}e_{k})\|_{L_{x}^{r_{d}}}\big{\|}_{\ell_{k}^{2}L_{ \tau}^{2}([0,T])}\\ &\lesssim T^{\frac{1}{q}}\big{\|}\|S(\tau)\langle\nabla\rangle^{s+ 1-\gamma}P_{\leq 1}(\phi_{\leq K}e_{k})\|_{L_{x}^{2}}\big{\|}_{\ell_{k}^{2}L_{ \tau}^{2}([0,T])}\\ &\lesssim T^{\theta}\big{\|}\|\langle\nabla\rangle^{s}\phi_{\leq K }e_{k}\|_{L_{x}^{2}}\big{\|}_{\ell_{k}^{2}}\\ &=T^{\theta}\|\phi_{\leq K}\|_{\text{HS}(L^{2},H^{s})}.\end{split} \tag{2.5}\]
for some \(\theta>0\). For the high frequency part, by Minkowski's integral inequality, Hölder's inequality in \(\tau\), and Lemma 2.3, we obtain
\[\begin{split}&\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1-\gamma}P_{> 1}(\phi_{\leq K}e_{k})\|_{\ell_{k}^{2}L_{\tau}^{2}([0,t])}\big{\|}_{L_{T}^{q}L _{x}^{r_{d}}}\\ &\leq T^{\theta}\big{\|}\|S(\tau)\langle\nabla\rangle^{s+1- \gamma}P_{>1}(\phi_{\leq K}e_{k})\|_{L_{T}^{\widetilde{q}}([0,T];L_{x}^{r_{d}} )}\big{\|}_{\ell_{k}^{2}}\\ &\lesssim T^{\theta}\big{\|}\|\|\nabla|^{-1+\gamma}\langle\nabla \rangle^{s+1-\gamma}P_{>1}(\phi_{\leq K}e_{k})\|_{L_{x}^{2}}\big{\|}_{\ell_{k} ^{2}}\\ &\lesssim T^{\theta}\|\phi_{\leq K}\|_{\text{HS}(L^{2},H^{s})}. \end{split} \tag{2.6}\]
Combining (2.4), (2.5), and (2.6), we obtain
\[\big{\|}\|\Psi_{\leq K}\|_{L_{T}^{q}W_{x}^{s+1-\gamma,r_{d}}}\big{\|}_{L^{p}( \Omega)}\lesssim T^{\theta}\|\phi_{\leq K}\|_{\text{HS}(L^{2},H^{s})}.\]
Similarly, we obtain that for \(K_{1},K_{2}\in\mathbb{N}\) with \(K_{1}<K_{2}\),
\[\big{\|}\|\Psi_{\leq K_{2}}-\Psi_{\leq K_{1}}\|_{L_{T}^{q}W_{x}^{s+1-\gamma,r_ {d}}}\big{\|}_{L^{p}(\Omega)}\lesssim T^{\theta}\|\phi_{\leq K_{2}}-\phi_{ \leq K_{1}}\|_{\text{HS}(L^{2},H^{s})}\longrightarrow 0\]
as \(K_{1},K_{2}\to\infty\). This then implies the convergence and (2.3) in part (ii).
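Let us also record, for the reader's convenience, where the constraint \(1-\gamma\leq\min(\frac{1}{d+2},\frac{6-d}{2(d+2)})\) in part (ii) comes from. The scaling relation for an \(\dot{H}^{\gamma}\)-wave admissible pair \((\widetilde{q},r_{d})\) forces
\[\frac{1}{\widetilde{q}}=\frac{d}{2}-\gamma-\frac{d}{r_{d}}=\frac{2d}{d+2}-\gamma,\]
and a direct computation shows that the two requirements \(\widetilde{q}\geq 2\) and \(\frac{1}{\widetilde{q}}+\frac{d-1}{2r_{d}}\leq\frac{d-1}{4}\) are equivalent to
\[1-\gamma\leq\frac{6-d}{2(d+2)}\quad\text{and}\quad 1-\gamma\leq\frac{1}{d+2},\]
respectively.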
**Remark 2.7**.: One can use integration by parts to write
\[\Psi(t)=-\sum_{k\in\mathbb{N}}\int_{0}^{t}\beta_{k}(t^{\prime})\frac{d}{ds} \Big{|}_{s=t^{\prime}}\big{(}S(t-s)\phi e_{k}\big{)}dt^{\prime}\]
almost surely. This allows us to compute that
\[\partial_{t}\Psi(t)=\sum_{k\in\mathbb{N}}\int_{0}^{t}\partial_{t}S(t-t^{ \prime})\phi e_{k}d\beta_{k}(t^{\prime})\]
almost surely. As in Lemma 2.6 (i), given \(d\geq 1\), \(T>0\), and \(\phi\in\text{HS}(L^{2},H^{s})\) for some \(s\in\mathbb{R}\), we can show that \(\partial_{t}\Psi\in C([0,T];H^{s}(\mathbb{R}^{d}))\) almost surely and
\[\mathbb{E}\Big{[}\sup_{0\leq t\leq T}\|\partial_{t}\Psi(t)\|_{H^{s}}^{p}\Big{]} \leq C\|\phi\|_{\text{HS}(L^{2},H^{s})}^{p}\]
for any finite \(p\geq 1\).
We now consider the stochastic convolution \(\Psi\) in (1.6) on \(\mathbb{T}^{d}\), which we denote by \(\Phi:=\Psi\) to avoid confusion. We recall that in the \(\mathbb{T}^{d}\) setting, we assume that \(\phi\) is a Fourier multiplier operator on \(L^{2}(\mathbb{T}^{d})\): for all \(n\in\mathbb{Z}^{d}\), we have
\[\phi e^{in\cdot x}=\widehat{\phi}_{n}e^{in\cdot x}\]
for some \(\widehat{\phi}_{n}\in\mathbb{C}\). Thus, \(\Phi\) is given by
\[\Phi(t)=\int_{0}^{t}S_{\mathbb{T}^{d}}(t-t^{\prime})\phi dW(t^{\prime})=\sum_{n \in\mathbb{Z}^{d}}\int_{0}^{t}S_{\mathbb{T}^{d}}(t-t^{\prime})(\widehat{\phi}_{ n}e^{in\cdot x})d\beta_{n}(t^{\prime}). \tag{2.7}\]
Here, the operator \(S_{\mathbb{T}^{d}}\) has the same form as \(S\) in (1.5) but on the periodic domain \(\mathbb{T}^{d}\) and \(\beta_{n}\) is now defined by \(\beta_{n}(t)=\langle\xi,\mathbf{1}_{[0,t]}\cdot e^{in\cdot x}\rangle_{t,x}\), where \(\langle\cdot,\cdot\rangle_{t,x}\) denotes the duality pairing on \(\mathbb{R}_{+}\times\mathbb{T}^{d}\).
For later purposes of showing global well-posedness in the \(\mathbb{T}^{d}\) setting, we also need a truncated periodized version of \(\Phi\). Let \(\eta:\mathbb{R}^{d}\to[0,1]\) be a smooth cutoff function such that \(\eta\equiv 1\) on \([-2\pi,2\pi]^{d}\) and \(\eta\equiv 0\) outside of \([-4\pi,4\pi]^{d}\). Given \(R\geq 1\), we define
\[\eta_{R}(x)=\eta\Big{(}\frac{x}{R}\Big{)}. \tag{2.8}\]
Given \(R\geq 1\), we define the following stochastic convolution on \(\mathbb{R}^{d}\):
\[\mathbf{\Phi}_{R}(t):=\sum_{n\in\mathbb{Z}^{d}}\int_{0}^{t}S(t-t^{\prime}) \big{(}\eta_{R}(x)\widehat{\phi}_{n}e^{in\cdot x}\big{)}d\beta_{n}(t^{\prime}). \tag{2.9}\]
In view of Remark 2.7, we can also take a time derivative and obtain
\[\partial_{t}\mathbf{\Phi}_{R}(t):=\sum_{n\in\mathbb{Z}^{d}}\int_{0}^{t} \partial_{t}S(t-t^{\prime})\big{(}\eta_{R}(x)\widehat{\phi}_{n}e^{in\cdot x} \big{)}d\beta_{n}(t^{\prime}).\]
We now state the following lemma regarding the regularity of the above stochastic objects.
**Lemma 2.8**.: _Let \(d\geq 1\), \(T>0\), \(2\leq r\leq\infty\), and \(\varepsilon>0\) be arbitrarily small. Suppose that \(\phi\) is a Fourier multiplier operator on \(L^{2}(\mathbb{T}^{d})\) such that \(\phi\in\text{HS}(L^{2}(\mathbb{T}^{d}),H^{s}(\mathbb{T}^{d}))\) for some \(s>-1\)._
(i) _We have \(\Phi\in C([0,T];W^{s+1-\varepsilon,r}(\mathbb{T}^{d}))\) almost surely. More precisely, there exists \(C=C(\omega,\|\phi\|_{\text{HS}(L^{2},H^{s})},T)>0\) such that_
\[\sup_{0\leq t\leq T}\|\Phi(t)\|_{W^{s+1-\varepsilon,r}(\mathbb{T}^{d})}\leq C.\]
(ii) _For any \(R\geq 1\), we have \(\mathbf{\Phi}_{R}\in C([0,T];W^{s+1-\varepsilon,r}(\mathbb{R}^{d}))\) almost surely and \(\partial_{t}\mathbf{\Phi}_{R}\in C([0,T];W^{s-\varepsilon,r}(\mathbb{R}^{d}))\) almost surely. More precisely, there exists \(C=C(\omega,\|\phi\|_{\text{HS}(L^{2},H^{s})},R,T)>0\) such that_
\[\sup_{0\leq t\leq T}\|\mathbf{\Phi}_{R}(t)\|_{W^{s+1-\varepsilon,r}(\mathbb{R }^{d})}\leq C\quad\text{and}\quad\sup_{0\leq t\leq T}\|\partial_{t}\mathbf{\Phi }_{R}(t)\|_{W^{s-\varepsilon,r}(\mathbb{R}^{d})}\leq C.\]
Proof.: The proof of (i) follows from straightforward modifications of [23, Lemma 3.1] or [22, Proposition 2.1].
For (ii), we first consider \(\mathbf{\Phi}_{R}\) in the space \(L^{q}([0,T];W^{s+1-\varepsilon,r}(\mathbb{R}^{d}))\) for \(1\leq q<\infty\) and \(2\leq r<\infty\). We note that for any \(s>-1\), \(t>0\), and \(n\in\mathbb{Z}^{d}\), we have
\[\langle\xi\rangle^{s+1}\bigg{|}\frac{\sin(t|\xi|)}{|\xi|}\bigg{|}\lesssim\max( 1,t)\langle\xi\rangle^{s}\leq\max(1,t)\langle\xi-n\rangle^{|s|}\langle n\rangle ^{s}. \tag{2.10}\]
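Let us briefly justify (2.10), which is used repeatedly below. For \(|\xi|\leq 1\) we have \(|\sin(t|\xi|)|\leq t|\xi|\) and \(\langle\xi\rangle\sim 1\), while for \(|\xi|\geq 1\) we have \(|\sin(t|\xi|)|\leq 1\) and \(|\xi|^{-1}\leq\sqrt{2}\langle\xi\rangle^{-1}\), which together give the first bound. The second bound is the standard weight inequality
\[\langle\xi\rangle^{s}\lesssim\langle\xi-n\rangle^{|s|}\langle n\rangle^{s},\]
which follows from \(\langle\xi\rangle\lesssim\langle\xi-n\rangle\langle n\rangle\) when \(s\geq 0\) and from \(\langle n\rangle\lesssim\langle\xi-n\rangle\langle\xi\rangle\) when \(s<0\).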
Using (2.10), Hausdorff-Young's inequality, and Hölder's inequality (applied in \(\xi\) with the weight \(\langle\xi-n\rangle^{-\beta}\) for some \(d(\frac{1}{2}-\frac{1}{r})<\beta\leq\frac{d}{2}\), which is where \(\beta\) below comes from), we obtain
\[\begin{split}&\Big{\|}\langle\nabla\rangle^{s+1}\frac{\sin(t| \nabla|)}{|\nabla|}(\eta_{R}e^{in\cdot x})\Big{\|}_{L^{r}_{x}(\mathbb{R}^{d})} \\ &\leq\max(1,t)\langle n\rangle^{s}\big{\|}\langle\xi-n\rangle^{|s |}\widehat{\eta}_{R}(\xi-n)\|_{L^{r^{\prime}}_{\xi}(\mathbb{R}^{d})}\\ &\lesssim\max(1,t)\langle n\rangle^{s}\big{\|}\langle\xi-n\rangle ^{|s|+\beta}\widehat{\eta}_{R}(\xi-n)\|_{L^{2}_{\xi}(\mathbb{R}^{d})}\\ &\lesssim\max(1,t)\langle n\rangle^{s}\|\eta_{R}\|_{H^{|s|+ \frac{d}{2}}(\mathbb{R}^{d})}.\end{split} \tag{2.11}\]
Thus, letting \(p\geq\max(q,r)\), by Minkowski's integral inequality, the Gaussian hypercontractivity, the Itô isometry, Minkowski's integral inequality again, and (2.11), we can compute
\[\begin{split}&\big{\|}\|\mathbf{\Phi}_{R}\|_{L^{q}_{t}([0,T];W^{s+1- \varepsilon,r}_{x}(\mathbb{R}^{d}))}\big{\|}_{L^{p}(\Omega)}\\ &\leq\big{\|}\|\langle\nabla\rangle^{s+1}\mathbf{\Phi}_{R}\|_{L^ {p}(\Omega)}\big{\|}_{L^{q}_{t}([0,T];L^{r}_{x}(\mathbb{R}^{d}))}\\ &\lesssim\big{\|}\|\langle\nabla\rangle^{s+1}\mathbf{\Phi}_{R}\|_{ L^{2}(\Omega)}\big{\|}_{L^{q}_{t}([0,T];L^{r}_{x}(\mathbb{R}^{d}))}\\ &=\Big{\|}\Big{\|}\langle\nabla\rangle^{s+1}\frac{\sin(t^{\prime} |\nabla|)}{|\nabla|}\big{(}\eta_{R}\widehat{\phi}_{n}e^{in\cdot x}\big{)}\Big{\|} _{\ell^{2}_{n}L^{2}_{t^{\prime}}([0,t])}\Big{\|}_{L^{q}_{t}([0,T];L^{r}_{x}( \mathbb{R}^{d}))}\\ &\lesssim T^{\frac{1}{q}}\Big{\|}\Big{\|}\langle\nabla\rangle^{s +1}\frac{\sin(t^{\prime}|\nabla|)}{|\nabla|}\big{(}\eta_{R}e^{in\cdot x}\big{)} \Big{\|}_{L^{r}_{x}(\mathbb{R}^{d})}\cdot\widehat{\phi}_{n}\Big{\|}_{\ell^{2} _{n}L^{2}_{t^{\prime}}([0,T])}\\ &\lesssim T^{\frac{1}{2}+\frac{1}{q}}\|\eta_{R}\|_{H^{|s|+\frac{ d}{2}}(\mathbb{R}^{d})}\big{\|}\langle n\rangle^{s}\widehat{\phi}_{n}\|_{\ell^{2} _{n}}\\ &=T^{\frac{1}{2}+\frac{1}{q}}\|\eta_{R}\|_{H^{|s|+\frac{d}{2}}( \mathbb{R}^{d})}\|\phi\|_{HS(L^{2},H^{s})}.\end{split}\]
This shows that \(\mathbf{\Phi}_{R}\) lies in \(L^{q}([0,T];W^{s+1-\varepsilon,r}(\mathbb{R}^{d}))\) almost surely.
For the case when \(r=\infty\), we can apply the Sobolev embedding \(W^{\varepsilon,r_{1}}(\mathbb{R}^{d})\hookrightarrow L^{\infty}(\mathbb{R}^{d})\) with \(0<\varepsilon\ll 1\) and \(2\leq r_{1}<\infty\) satisfying \(\varepsilon r_{1}>d\) and then use the above steps to obtain the desired result. The fact that \(\mathbf{\Phi}_{R}\) is continuous in time follows from adapting the above modifications to the proof of [22, Proposition 2.1] using Kolmogorov's continuity criterion. The steps for dealing with \(\partial_{t}\mathbf{\Phi}_{R}\) are similar.
## 3. Local well-posedness of the energy-critical stochastic NLW
In this section, we briefly go over local well-posedness of the defocusing energy-critical SNLW (1.1) on \(\mathbb{R}^{d}\). We first show the following local well-posedness result in a slightly more general setting. Recall that the operator \(V(t)\) is as defined in (1.5) and the \(X(I)\)-norm is as defined in (2.1).
**Proposition 3.1**.: _Let \(d\geq 3\) and \((u_{0},u_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\). Then, there exists \(0<\eta\ll 1\) such that if_
\[\|V(t-t_{0})(u_{0},u_{1})\|_{X(I)}\leq\eta\quad\text{and}\quad\|f\|_{X(I)}\leq\eta \tag{3.1}\]
_for some time interval \(I=[t_{0},t_{1}]\subset\mathbb{R}\), then the Cauchy problem_
\[\begin{cases}\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+f)=0\\ (v,\partial_{t}v)|_{t=t_{0}}=(u_{0},u_{1})\end{cases} \tag{3.2}\]
_with \(\mathcal{N}(u)=|u|^{\frac{4}{d-2}}u\) admits a unique solution \((v,\partial_{t}v)\in C(I;\dot{\mathcal{H}}^{1}(\mathbb{R}^{d}))\), which satisfies_
\[\|v\|_{X(I)}\leq 3\eta.\]
_Here, the uniqueness of \(v\) holds in the set_
\[\{v\in X(I):\|v\|_{X(I)}\leq 3\eta\}.\]
Proof.: By writing (3.2) in the Duhamel formulation, we have
\[v(t)=\Gamma[v](t):=V(t-t_{0})(u_{0},u_{1})-\int_{t_{0}}^{t}S(t-t^{\prime}) \mathcal{N}(v+f)(t^{\prime})dt^{\prime}, \tag{3.3}\]
where \(S(t)\) is as defined in (1.5). We would like to run the contraction argument on the ball
\[B_{\eta,I}:=\{v\in X(I):\|v\|_{X(I)}\leq 3\eta\},\]
where \(\eta>0\) is to be chosen later.
Suppose that \(\|(u_{0},u_{1})\|_{\dot{\mathcal{H}}^{1}}\leq A\) for some \(A>0\). Then, by the Strichartz estimate (Lemma 2.3) and (3.1), for any \(v\in B_{\eta,I}\), we have
\[\|\Gamma[v]\|_{X(I)} \leq\|V(t-t_{0})(u_{0},u_{1})\|_{X(I)}+C_{1}\|v\|_{X(I)}^{\frac{d +2}{d-2}}+C_{1}\|f\|_{X(I)}^{\frac{d+2}{d-2}}\] \[\leq\eta+C_{1}(3\eta)^{\frac{d+2}{d-2}}+C_{1}\eta^{\frac{d+2}{d-2}}\] \[\leq 3\eta\]
for some constant \(C_{1}>0\), where in the last inequality we choose \(0<\eta\ll 1\) in such a way that
\[C_{1}(3\eta)^{\frac{d+2}{d-2}}\leq\eta\quad\text{and}\quad C_{1}\eta^{\frac{d +2}{d-2}}\leq\eta.\]
Also, by the Strichartz estimate (Lemma 2.3) and the fundamental theorem of calculus, for any \(v_{1},v_{2}\in B_{\eta,I}\), we have
\[\|\Gamma[v_{1}]-\Gamma[v_{2}]\|_{X(I)} \leq C_{2}\|\mathcal{N}(v_{1}+f)-\mathcal{N}(v_{2}+f)\|_{L^{1}_{I}L^{2}_{x}}\] \[=C_{2}\bigg{\|}\int_{0}^{1}\mathcal{N}^{\prime}\big{(}v_{2}+f+\alpha(v_{1}-v_{2})\big{)}(v_{1}-v_{2})d\alpha\bigg{\|}_{L^{1}_{I}L^{2}_{x}}\] \[\leq C_{2}\Big{(}\|v_{1}\|_{X(I)}^{\frac{4}{d-2}}+\|v_{2}\|_{X(I)}^{\frac{4}{d-2}}+\|f\|_{X(I)}^{\frac{4}{d-2}}\Big{)}\|v_{1}-v_{2}\|_{X(I)}\] \[\leq 3C_{2}(3\eta)^{\frac{4}{d-2}}\|v_{1}-v_{2}\|_{X(I)}\] \[\leq\frac{1}{2}\|v_{1}-v_{2}\|_{X(I)}\]
for some constant \(C_{2}>0\), where in the last inequality we further shrink \(0<\eta\ll 1\) if necessary so that
\[3C_{2}(3\eta)^{\frac{4}{d-2}}\leq\frac{1}{2}.\]
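For concreteness, all of the above smallness conditions on \(\eta\) are met with the explicit (far from optimal) choice
\[\eta=\min\Big{(}\big{(}3^{\frac{d+2}{d-2}}C_{1}\big{)}^{-\frac{d-2}{4}},\ \frac{1}{3}\big{(}6C_{2}\big{)}^{-\frac{d-2}{4}}\Big{)}.\]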
Thus, \(\Gamma\) is a contraction from \(B_{\eta,I}\) to itself, so that the desired local well-posedness of the equation (3.2) follows. The fact that \((v,\partial_{t}v)\in C(I;\dot{\mathcal{H}}^{1}(\mathbb{R}^{d}))\) then follows easily from the Duhamel formulation (3.3), the Strichartz estimate (Lemma 2.3), (3.1), and the fact that \(v\in B_{\eta,I}\).
As a consequence of local well-posedness in Proposition 3.1, we have the following blowup alternative.
**Lemma 3.2**.: _Let \(d\geq 3\) and \((u_{0},u_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\). Let \(f\) be a function such that \(\|f\|_{X([0,T])}<\infty\) for any \(T>0\). Let \(T^{*}=T^{*}(u_{0},u_{1},f)>0\) be the forward maximal time of existence of the following Cauchy problem for \(v\):_
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+f)=0\\ &(v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}).\end{aligned}\right.\]
_Then, we have either_
\[T^{*}=\infty\quad\text{or}\quad\lim_{T\nearrow T^{*}}\|v\|_{X([0,T])}=\infty.\]
We now go back to the defocusing energy-critical SNLW (1.1) on \(\mathbb{R}^{d}\). By writing \(u=v+\Psi\) with \(\Psi\) being the stochastic convolution as defined in (2.2), we see that \(v\) satisfies
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+\Psi)=0\\ &(v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}).\end{aligned}\right. \tag{3.4}\]
We also note that by Lemma 2.6, given \(\phi\in\text{HS}(L^{2}(\mathbb{R}^{d}),L^{2}(\mathbb{R}^{d}))\), we have
\[\|\Psi\|_{X([t_{0},t_{0}+\tau])}\leq C(\omega)\tau^{\theta}\|\phi\|_{\text{HS} (L^{2},L^{2})} \tag{3.5}\]
almost surely for any \(t_{0}\geq 0\), \(\tau>0\), and some \(\theta>0\). This norm can be made arbitrarily small if we take \(\tau>0\) sufficiently small. Thus, the local well-posedness result (Proposition 3.1) and the blowup alternative (Lemma 3.2) apply to the equation (3.4) by letting \(f=\Psi\).
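For the reader's convenience, we sketch how (3.5) follows from Lemma 2.6 (ii) (with \(s=0\)). Fixing an admissible \(\gamma\) and some \(q>\frac{d+2}{d-2}\), the embedding \(W^{1-\gamma,r_{d}}(\mathbb{R}^{d})\hookrightarrow L^{r_{d}}(\mathbb{R}^{d})\) and Hölder's inequality in time give, on any fixed time horizon containing \([t_{0},t_{0}+\tau]\),
\[\|\Psi\|_{X([t_{0},t_{0}+\tau])}\leq\tau^{\frac{d-2}{d+2}-\frac{1}{q}}\|\Psi\|_{L^{q}([t_{0},t_{0}+\tau];L^{r_{d}}_{x})}\leq C(\omega)\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\]
with \(\theta=\frac{d-2}{d+2}-\frac{1}{q}>0\), where the almost surely finite random constant \(C(\omega)\) comes from the moment bound (2.3).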
## 4. Proof of global well-posedness
In this section, we show the proofs of the two global well-posedness results: Theorem 1.1 and Theorem 1.2.
### Global well-posedness on \(\mathbb{R}^{d}\)
In this subsection, we prove Theorem 1.1, global well-posedness of the defocusing energy-critical SNLW (1.1) on \(\mathbb{R}^{d}\). As mentioned in Section 1, the proof is based on the perturbation lemma (Lemma 2.5). However, to carry out the perturbation argument, we first need to show an a priori energy bound using Itô's lemma. In order to justify the use of Itô's lemma, we need to perform an approximation procedure for the defocusing energy-critical SNLW (1.1) as below.
Given \(N\in\mathbb{N}\), we let \(P_{\lesssim N}\) denote a smooth frequency projection onto \(\{|\xi|\leq N\}\). We consider the following truncated defocusing energy-critical SNLW:
\[\left\{\begin{aligned} &\partial_{t}^{2}u_{\lesssim N}-\Delta u_{ \lesssim N}+P_{\lesssim N}\mathcal{N}(u_{\lesssim N})=P_{\lesssim N}\phi\xi \\ &(u_{\lesssim N},\partial_{t}u_{\lesssim N})|_{t=0}=(P_{\lesssim N }u_{0},P_{\lesssim N}u_{1}),\end{aligned}\right. \tag{4.1}\]
where \(\mathcal{N}(u)=|u|^{\frac{4}{d-2}}u\). We define the truncated stochastic convolution \(\Psi_{\lesssim N}\) by (2.2) with \(\phi\) replaced by \(P_{\lesssim N}\phi\). Due to the boundedness of \(P_{\lesssim N}\), we can easily see from Lemma 2.6 that
\[\|\Psi_{\lesssim N}\|_{X([t_{0},t_{0}+T])}\leq C(\omega)T^{\theta}\|\phi\|_{ \text{HS}(L^{2},L^{2})}\]
almost surely for any \(t_{0}\geq 0\), \(T>0\), and some \(\theta>0\). Thus, again due to the boundedness of \(P_{\lesssim N}\), we see that the local well-posedness result (Proposition 3.1) and the blowup alternative (Lemma 3.2) apply to the following equation for \(v_{\lesssim N}:=u_{\lesssim N}-\Psi_{\lesssim N}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}v_{\lesssim N}-\Delta v_{ \lesssim N}+P_{\lesssim N}\mathcal{N}(v_{\lesssim N}+\Psi_{\lesssim N})=0\\ &(v_{\lesssim N},\partial_{t}v_{\lesssim N})|_{t=0}=(P_{\lesssim N }u_{0},P_{\lesssim N}u_{1}).\end{aligned}\right.\]
This shows that the truncated defocusing energy-critical SNLW (4.1) is locally well-posed.
Let us now show the following lemma regarding the convergence of \(u_{\lesssim N}\) to the solution \(u\) to SNLW (1.1).
**Lemma 4.1**.: _Let \(d\geq 3\), \(\phi\in\text{HS}(L^{2},L^{2})\), and \((u_{0},u_{1})\in\mathcal{H}^{1}\). Then, the following holds true almost surely. Assume that \(u\) is a solution to the defocusing energy-critical SNLW (1.1) on \([0,T]\) for some \(T>0\), and assume that \(u_{\lesssim N}\) is a solution to the truncated defocusing energy-critical SNLW (4.1) on \([0,T_{N}]\) for some \(T_{N}>0\). Also, let \(R>0\) be such that \(\|u\|_{X([0,T])}\leq R\), where the \(X([0,T])\)-norm is as defined in (2.1). Then, by letting \(S_{N}=\min(T,T_{N})\), we have_
\[\|\vec{u}-\vec{u}_{\lesssim N}\|_{C([0,S_{N}];\dot{\mathcal{H}}^{1}_{x})}\longrightarrow 0\]
_and_
\[\|u-u_{\lesssim N}\|_{X([0,S_{N}]\times\mathbb{R}^{d})}\longrightarrow 0\]
_as \(N\to\infty\)._
**Remark 4.2**.: In Lemma 4.1, due to the lack of global well-posedness of the truncated equation (4.1) at this point, we need to assume that the existence time \(T_{N}\) of (4.1) depends on \(N\). One can avoid this issue by inserting an \(H^{1}\)-norm truncation as in [17], so that the nonlinearity becomes Lipschitz and hence one has global well-posedness for the truncated equation. In this paper, we choose to proceed with the existence time \(T_{N}\) dependent on \(N\), which turns out to be harmless at a later point (see the proof of Proposition 4.3).
Proof.: Let \(0<\eta\ll 1\) be chosen later and \(0<\varepsilon\ll\eta\). By slight modifications of Lemma 2.6, we have that for almost every \(\omega\in\Omega\),
\[\|\Psi-\Psi_{\lesssim N}\|_{X([0,T])}+\|\Psi-\Psi_{\lesssim N}\|_{L^{\infty}_ {T}H^{1}_{x}}<\varepsilon \tag{4.2}\]
given \(N\geq N_{0}(\omega,\varepsilon)\).
We now divide the interval \([0,T]\) into \(J=J(R,\eta)\) many subintervals \(I_{j}=I_{j}(\omega)=[t_{j},t_{j+1}]\) with \(0=t_{0}<t_{1}<\cdots<t_{J-1}<t_{J}=T\) such that
\[\|u\|_{X(I_{j})}\leq\eta \tag{4.3}\]
for \(j=0,1,\ldots,J-1\).
As in the proof of Proposition 3.1, by using the Duhamel formulation (1.4), the Strichartz estimate (Lemma 2.3), and the fundamental theorem of calculus, we obtain
\[\|u-u_{\lesssim N}\|_{X([0,\tau])} \lesssim\|\vec{u}(0)-\vec{u}_{\lesssim N}(0)\|_{\dot{\mathcal{H}}^{1}} \tag{4.4}\] \[\quad+\big{(}\|u\|_{X([0,\tau])}+\|u_{\lesssim N}\|_{X([0,\tau])}\big{)}^{\frac{4}{d-2}}\|u-u_{\lesssim N}\|_{X([0,\tau])}\] \[\quad+\|(\text{Id}-P_{\lesssim N})\mathcal{N}(u)\|_{L^{1}_{t}([0,\tau];L^{2}_{x})}+\|\Psi-\Psi_{\lesssim N}\|_{X([0,\tau])}\]
for any \(0\leq\tau\leq\min(t_{1},T_{N})\). Then, from (4.4), by the Lebesgue dominated convergence theorem applied to \((\text{Id}-P_{\lesssim N})u_{0}\), \((\text{Id}-P_{\lesssim N})u_{1}\), and \((\text{Id}-P_{\lesssim N})\mathcal{N}(u)\) along with (4.2) and (4.3), we have
\[\|u-u_{\lesssim N}\|_{X([0,\tau])}<C\varepsilon+C\big{(}\eta+\|u_{\lesssim N} \|_{X([0,\tau])}\big{)}^{\frac{4}{d-2}}\|u-u_{\lesssim N}\|_{X([0,\tau])}\]
for some absolute constant \(C>0\), given \(N\geq N_{0}(\omega,\varepsilon,u)\) sufficiently large. By taking \(\eta>0\) to be sufficiently small (independent of \(\varepsilon\)), we can then use a standard continuity argument to get
\[\|u-u_{\lesssim N}\|_{X(I_{0}\cap[0,T_{N}])}<2C\varepsilon \tag{4.5}\]
and
\[\|u_{\lesssim N}\|_{X(I_{0}\cap[0,T_{N}])}\leq 2\eta. \tag{4.6}\]
Similarly to the above, using the Duhamel formulation (1.4), the Strichartz estimate (Lemma 2.3), the Lebesgue dominated convergence theorem, (4.2), (4.3), and (4.6), we obtain
\[\|\vec{u}-\vec{u}_{\lesssim N}\|_{L^{\infty}(I_{0}\cap[0,T_{N}];\dot{\mathcal{ H}}^{1}_{x})}<C\varepsilon+C\eta^{\frac{4}{d-2}}\|u-u_{\lesssim N}\|_{X(I_{0} \cap[0,T_{N}])}\]
given \(N\geq N_{0}(\omega,\varepsilon,u)\). By (4.5) and taking \(\eta>0\) to be sufficiently small (independent of \(\varepsilon\)), we then obtain
\[\|\vec{u}-\vec{u}_{\lesssim N}\|_{L^{\infty}(I_{0}\cap[0,T_{N}];\dot{\mathcal{ H}}^{1}_{x})}<2C\varepsilon.\]
In particular, if \(t_{1}\leq T_{N}\), we have
\[\|\vec{u}(t_{1})-\vec{u}_{\lesssim N}(t_{1})\|_{\dot{\mathcal{H}}^{1}}<2C \varepsilon.\]
We now repeat the above arguments on \(I_{1}\) to obtain
\[\|u-u_{\lesssim N}\|_{X(I_{1}\cap[0,T_{N}])}<4C^{2}\varepsilon\qquad\text{and} \qquad\|\vec{u}-\vec{u}_{\lesssim N}\|_{L^{\infty}(I_{1}\cap[0,T_{N}];\dot{ \mathcal{H}}^{1}_{x})}<4C^{2}\varepsilon.\]
By applying the above arguments repetitively, we obtain that for \(j=0,1,\ldots,J-1\),
\[\|u-u_{\lesssim N}\|_{X(I_{j}\cap[0,T_{N}])}<(2C)^{j+1}\varepsilon\qquad\text{ and}\qquad\|\vec{u}-\vec{u}_{\lesssim N}\|_{L^{\infty}(I_{j}\cap[0,T_{N}];\dot{ \mathcal{H}}^{1}_{x})}<(2C)^{j+1}\varepsilon.\]
Thus, we have
\[\|\vec{u}-\vec{u}_{\lesssim N}\|_{L^{\infty}([0,S_{N}];\dot{ \mathcal{H}}^{1}_{x})}\leq\sum_{j=0}^{J-1}\|\vec{u}-\vec{u}_{\lesssim N}\|_{L ^{\infty}(I_{j}\cap[0,T_{N}];\dot{\mathcal{H}}^{1}_{x})}\leq(2C)^{J}\varepsilon,\] \[\|u-u_{\lesssim N}\|_{X([0,S_{N}])}\leq\sum_{j=0}^{J-1}\|u-u_{ \lesssim N}\|_{X(I_{j}\cap[0,T_{N}])}\leq(2C)^{J}\varepsilon.\]
Since \(\varepsilon>0\) can be arbitrarily small and \(J\) depends only on \(R>0\) and an absolute constant \(\eta>0\), we can conclude the desired convergence results.
We now show the following a priori bound on the energy \(E\) as defined in (1.3).
**Proposition 4.3**.: _Let \(d\geq 3\), \(\phi\in\text{HS}(L^{2},L^{2})\), and \((u_{0},u_{1})\in\dot{\mathcal{H}}^{1}(\mathbb{R}^{d})\). Let \(u\) be the solution to the defocusing energy-critical SNLW (1.1) with \((u,\partial_{t}u)|_{t=0}=(u_{0},u_{1})\) and let \(T^{*}=T^{*}(\omega,u_{0},u_{1})\) be the forward maximal time of existence. Then, given any \(T_{0}>0\), there exists \(C=C(\|(u_{0},u_{1})\|_{\dot{\mathcal{H}}^{1}},\|\phi\|_{\text{HS}(L^{2},L^{2}) },T_{0})>0\) such that for any stopping time \(T\) with \(0<T<\min(T^{*},T_{0})\) almost surely, we have_
\[\mathbb{E}\Big{[}\sup_{0\leq t\leq T}E(\vec{u}(t))\Big{]}\leq C.\]
Proof.: We write the energy \(E\) in (1.3) as
\[E\big{(}u_{1}(t),u_{2}(t)\big{)}=\frac{1}{2}\int_{\mathbb{R}^{d}}|u_{2}(t)|^{ 2}dx+\frac{1}{2}\int_{\mathbb{R}^{d}}|\nabla u_{1}(t)|^{2}dx+\frac{d-2}{2d} \int_{\mathbb{R}^{d}}|u_{1}(t)|^{\frac{2d}{d-2}}dx.\]
A direct computation yields
\[E^{\prime}\big{(}u_{1}(t),u_{2}(t)\big{)}(v_{1},v_{2})=\int_{ \mathbb{R}^{d}}\Big{(}u_{2}(t)v_{2}+\nabla u_{1}(t)\cdot\nabla v_{1}+|u_{1}(t )|^{\frac{4}{d-2}}u_{1}(t)v_{1}\Big{)}dx \tag{4.7}\]
and
\[\begin{split} E^{\prime\prime}\big{(}u_{1}(t),& u_{2}(t)\big{)}\big{(}(v_{1},v_{2}),(w_{1},w_{2})\big{)}\\ &=\int_{\mathbb{R}^{d}}v_{2}w_{2}dx+\int_{\mathbb{R}^{d}}\nabla v_ {1}\cdot\nabla w_{1}dx+\frac{d+2}{d-2}\int_{\mathbb{R}^{d}}|u_{1}(t)|^{\frac{4 }{d-2}}v_{1}w_{1}dx\end{split} \tag{4.8}\]
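As a quick consistency check on the potential term: since \(\frac{2d}{d-2}-2=\frac{4}{d-2}\) and \(\frac{2d}{d-2}-1=\frac{d+2}{d-2}\), we have
\[\frac{d}{du}\Big{(}\frac{d-2}{2d}|u|^{\frac{2d}{d-2}}\Big{)}=|u|^{\frac{4}{d-2}}u\quad\text{and}\quad\frac{d}{du}\big{(}|u|^{\frac{4}{d-2}}u\big{)}=\frac{d+2}{d-2}|u|^{\frac{4}{d-2}},\]
which produces the nonlinear terms in (4.7) and (4.8).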
Given \(R>0\), we define the stopping time
\[T_{1}=T_{1}(R):=\inf\big{\{}\tau\geq 0:\|u\|_{X([0,\tau])}\geq R\big{\}},\]
where the \(X([0,\tau])\)-norm is as defined in (2.1). We also set \(T_{2}:=\min(T,T_{1})\). Note that we have
\[\|u\|_{X([0,T])}<\infty\]
almost surely in view of the blowup alternative in Lemma 3.2, so that we have \(T_{2}\nearrow T\) almost surely as \(R\to\infty\).
Let \(u_{\lesssim N}\) be the solution to the truncated defocusing energy-critical SNLW (4.1) with maximal time of existence \(T_{N}^{*}=T_{N}^{*}(\omega)\). By Lemma 4.1, we can deduce that there exists an almost surely finite number \(N_{0}(\omega)\in\mathbb{N}\) such that \(T_{N}^{*}\geq T_{2}\) for any \(N\geq N_{0}(\omega)\). Indeed, suppose not; then there exists \(\Omega_{0}\subset\Omega\) with positive measure such that for all \(\omega\in\Omega_{0}\), there exists an increasing sequence \(\{N_{j}(\omega)\}_{j\in\mathbb{N}}\subset\mathbb{N}\) such that \(T_{N_{j}(\omega)}^{*}<T_{2}\). By the blowup alternative (Lemma 3.2), we know that for all \(\omega\in\Omega_{0}\) and \(j\in\mathbb{N}\), there exists \(T_{N_{j}(\omega)}<T_{2}\) such that
\[\|u_{\lesssim N_{j}(\omega)}\|_{X([0,T_{N_{j}(\omega)}])}>2R.\]
This contradicts the convergence of \(u_{\lesssim N_{j}(\omega)}\) to \(u\) in \(X([0,T_{N_{j}(\omega)}])\) as \(N_{j}(\omega)\to\infty\) in Lemma 4.1, given that we have
\[\|u\|_{X([0,T_{2}])}\leq R.\]
We can now work with (4.1) on \([0,T_{2}]\). Note that we can write (4.1) in the following Itô formulation:
\[d\begin{pmatrix}u_{\lesssim N}\\ \partial_{t}u_{\lesssim N}\end{pmatrix}=\begin{pmatrix}0&1\\ \Delta&0\end{pmatrix}\begin{pmatrix}u_{\lesssim N}\\ \partial_{t}u_{\lesssim N}\end{pmatrix}dt+\begin{pmatrix}0\\ -P_{\lesssim N}\mathcal{N}(u_{\lesssim N})\end{pmatrix}dt+\begin{pmatrix}0\\ P_{\lesssim N}\phi dW\end{pmatrix}.\]
By Itô's lemma (see [16, Theorem 4.32]) along with (4.7) and (4.8), we obtain
\[\begin{split} E\big{(}u_{\lesssim N}(t),&\partial_{t}u_{ \lesssim N}(t)\big{)}=E(P_{\lesssim N}u_{0},P_{\lesssim N}u_{1})+2\sum_{k\in \mathbb{N}}\int_{0}^{t}\int_{\mathbb{R}^{d}}\partial_{t}u_{\lesssim N}(t^{ \prime})P_{\lesssim N}\phi e_{k}dxd\beta_{k}(t^{\prime})\\ &+2\int_{0}^{t}\int_{\mathbb{R}^{d}}\partial_{t}u_{\lesssim N}(t^{ \prime})(\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u_{\lesssim N})dxdt^{\prime}+ t\|P_{\lesssim N}\phi\|^{2}_{\text{HS}(L^{2},L^{2})}\end{split} \tag{4.9}\]
for \(0<t<T_{2}\).
For the third term on the right-hand side of (4.9), by Hölder's inequality, the Lebesgue dominated convergence theorem applied to \((\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u)\), and Lemma 4.1, we have
\[\begin{split}&\bigg{|}\int_{0}^{t}\int_{\mathbb{R}^{d}}\partial_{t}u _{\lesssim N}(t^{\prime})(\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u_{\lesssim N })dxdt^{\prime}\bigg{|}\\ &\qquad\lesssim\|\partial_{t}u_{\lesssim N}\|_{L^{\infty}_{T_{2}} L^{2}_{x}}\|(\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u_{\lesssim N})\|_{L^{1}_{T_{2}}L^{2}_{ x}}\\ &\qquad\lesssim\|\partial_{t}u_{\lesssim N}\|_{L^{\infty}_{T_{2} }L^{2}_{x}}\Big{(}\|(\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u)\|_{L^{1}_{T_{2} }L^{2}_{x}}+\|\mathcal{N}(u)-\mathcal{N}(u_{\lesssim N})\|_{L^{1}_{T_{2}}L^{2 }_{x}}\Big{)}\\ &\qquad\lesssim\|\partial_{t}u_{\lesssim N}\|_{L^{\infty}_{T_{2} }L^{2}_{x}}\Big{(}\|(\mathrm{Id}-P_{\lesssim N})\mathcal{N}(u)\|_{L^{1}_{T_{2} }L^{2}_{x}}\\ &\qquad+\big{(}\|u\|_{X([0,T_{2}])}+\|u_{\lesssim N}\|_{X([0,T_{ 2}])}\big{)}^{\frac{4}{d-2}}\|u-u_{\lesssim N}\|_{X([0,T_{2}])}\Big{)}\\ &\longrightarrow 0\end{split} \tag{4.10}\]
almost surely, as \(N\to\infty\). Thus, by applying the Burkholder-Davis-Gundy inequality to (4.9) along with (4.10), Hölder's inequality, and Cauchy's inequality, we obtain
\[\begin{split}&\mathbb{E}\Big{[}\sup_{0\leq t\leq T_{2}}E\big{(}u_{ \lesssim N}(t),\partial_{t}u_{\lesssim N}(t)\big{)}\Big{]}\\ &\qquad\leq E(P_{\lesssim N}u_{0},P_{\lesssim N}u_{1})+C\mathbb{ E}\bigg{[}\bigg{(}\sum_{k\in\mathbb{N}}\int_{0}^{T_{2}}\bigg{|}\int_{\mathbb{R}^{d}} \partial_{t}u_{\lesssim N}(t^{\prime})P_{\lesssim N}\phi e_{k}dx\bigg{|}^{2} dt^{\prime}\bigg{)}^{1/2}\bigg{]}\\ &\qquad+\varepsilon+T_{2}\|\phi\|^{2}_{\text{HS}(L^{2},L^{2})}\\ &\qquad\leq CE(u_{0},u_{1})+CT_{2}^{\frac{1}{2}}\mathbb{E}\big{[} \|\partial_{t}u_{\lesssim N}\|_{L^{\infty}_{T_{2}}L^{2}_{x}}\|\phi\|_{\text{HS }(L^{2},L^{2})}\big{]}+\varepsilon+T_{2}\|\phi\|^{2}_{\text{HS}(L^{2},L^{2})} \\ &\qquad\leq\frac{1}{2}\mathbb{E}\Big{[}\sup_{0\leq t\leq T_{2}}E \big{(}u_{\lesssim N}(t),\partial_{t}u_{\lesssim N}(t)\big{)}\Big{]}+CE(u_{0},u_{1})+CT_{2}\|\phi\|^{2}_{\text{HS}(L^{2},L^{2})}+\varepsilon\end{split}\]
for some absolute constant \(C>0\) and \(\varepsilon>0\) arbitrarily small given \(N\geq N_{0}(\omega,\varepsilon,u)\) sufficiently large. This shows that
\[\mathbb{E}\Big{[}\sup_{0\leq t\leq T_{2}}E\big{(}\vec{u}_{\lesssim N}(t)\big{)} \Big{]}\leq CE(u_{0},u_{1})+CT_{2}\|\phi\|^{2}_{\text{HS}(L^{2},L^{2})}. \tag{4.11}\]
By Fatou's lemma and (4.11), we have
\[\mathbb{E}\Big{[}\sup_{0\leq t\leq T_{2}}E\big{(}\vec{u}(t)\big{)}\Big{]}\leq \liminf_{N\to\infty}\mathbb{E}\Big{[}\sup_{0\leq t\leq T_{2}}E\big{(}\vec{u}_{ \lesssim N}(t)\big{)}\Big{]}\leq C\]
for some constant \(C=C(\|(u_{0},u_{1})\|_{\dot{\mathcal{H}}^{1}},\|\phi\|_{\text{HS}(L^{2},L^{2})},T_{0})\). In view of the almost sure convergence of \(T_{2}\) to \(T\), we then obtain the desired energy bound using Fatou's lemma.
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.: Let \(u\) be a local-in-time solution to the defocusing energy-critical SNLW (1.1) given by Proposition 3.1. We let \(v=u-\Psi\), where \(v\) satisfies
\[\begin{cases}\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+\Psi)=0\\ (v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}),\end{cases} \tag{4.12}\]
where \(\mathcal{N}(u)=|u|^{\frac{4}{d-2}}u\). Given \(T>0\), if we suppose that the solution \(u\) exists on \([0,T]\), then by the energy bound in Proposition 4.3, we have
\[\sup_{0\leq t\leq T}E(\vec{u}(t))\leq C(\omega,\|(u_{0},u_{1})\|_{\dot{\mathcal{ H}}^{1}},\|\phi\|_{\text{HS}(L^{2},L^{2})},T)<\infty.\]
Then, by Lemma 2.6 and Remark 2.7, we obtain
\[\sup_{0\leq t\leq T}E(\vec{v}(t)) \leq C\sup_{0\leq t\leq T}E(\vec{u}(t))+C\sup_{0\leq t\leq T}E( \vec{\Psi}(t))\] \[\leq C(\omega,\|(u_{0},u_{1})\|_{\dot{\mathcal{H}}^{1}},\|\phi\|_ {\text{HS}(L^{2},L^{2})},T) \tag{4.13}\] \[=:R\]
Given a target time \(T>0\), we pick any \(t_{0}\in[0,T)\) and suppose that the solution \(v\) to (4.12) has already been constructed on \([0,t_{0}]\). Our goal is to show the existence of a unique solution \(v\) to (4.12) on \([t_{0},t_{0}+\tau]\cap[0,T]\) with \(\tau>0\) independent of \(t_{0}\). In this way, we can iterate the argument so that a global solution \(v\) to (4.12) can be constructed on \([0,T]\), which then concludes the proof of Theorem 1.1.
Let \(w\) be the global solution to the deterministic defocusing energy-critical NLW:
\[\partial_{t}^{2}w-\Delta w+|w|^{\frac{4}{d-2}}w=0 \tag{4.14}\]
with \((w,\partial_{t}w)|_{t=t_{0}}=\vec{v}(t_{0})\). By (4.13), we have \(\|\vec{w}(t_{0})\|_{\dot{\mathcal{H}}^{1}}\leq R\). By Lemma 2.4, we have
\[\|w\|_{X([t_{0},T])}\leq C(R)<\infty. \tag{4.15}\]
Let \(0<\eta\ll 1\) be chosen later. By (4.15), we can divide the interval \([t_{0},T]\) into \(J=J(R,\eta)\) many subintervals \(I_{j}=[t_{j},t_{j+1}]\) such that
\[\|w\|_{X(I_{j})}\leq\eta \tag{4.16}\]
for \(j=0,1,\ldots,J-1\). Recalling the operators \(V(t)\) and \(S(t)\) in (1.5), by using the Duhamel formulation of (4.14), the Strichartz estimate (Lemma 2.3), and (4.16), we have
\[\begin{split}\big{\|}V(t-t_{j})\big{(}w(t_{j}),\partial_{t}w(t_{ j})\big{)}\big{\|}_{X(I_{j})}&=\bigg{\|}w(t)+\int_{t_{j}}^{t}S(t-t^{ \prime})\mathcal{N}(w)(t^{\prime})dt^{\prime}\bigg{\|}_{X(I_{j})}\\ &\leq\|w\|_{X(I_{j})}+C\|w\|_{X(I_{j})}^{\frac{d+2}{d-2}}\\ &\leq\eta+C\eta^{\frac{d+2}{d-2}}\\ &\leq 2\eta\end{split} \tag{4.17}\]
for \(j=0,1,\ldots,J-1\), given that \(\eta>0\) is sufficiently small.
We now consider \(v\) on the first interval \(I_{0}\), which we may assume satisfies \(I_{0}\subseteq[t_{0},t_{0}+\tau]\) for some \(\tau>0\) to be determined later. Since \(\vec{v}(t_{0})=\vec{w}(t_{0})\), by (4.17) we have
\[\big{\|}V(t-t_{0})\big{(}v(t_{0}),\partial_{t}v(t_{0})\big{)}\big{\|}_{X(I_{0} )}=\big{\|}V(t-t_{0})\big{(}w(t_{0}),\partial_{t}w(t_{0})\big{)}\big{\|}_{X(I_{ 0})}\leq 2\eta. \tag{4.18}\]
Using Proposition 3.1 along with (4.18) and (3.5) (with \(\tau>0\) sufficiently small such that \(C(\omega)\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\leq 2\eta\)), we obtain the existence of \(v\) on \(I_{0}\) and
\[\|v\|_{X(I_{0})}\leq 6\eta, \tag{4.19}\]
given that \(0<2\eta\leq\eta_{0}\) for sufficiently small absolute constant \(\eta_{0}>0\) and \(I_{0}\subset[t_{0},t_{0}+\tau]\). We set
\[f=\mathcal{N}(v+\Psi)-\mathcal{N}(v).\]
By the fundamental theorem of calculus, (4.19), and (3.5), we have
\[\|f\|_{L^{1}_{I_{0}}L^{2}_{x}} =\bigg{\|}\int_{0}^{1}\mathcal{N}^{\prime}(v+\alpha\Psi)\Psi d \alpha\bigg{\|}_{L^{1}_{I_{0}}L^{2}_{x}}\] \[\leq C\big{(}\|v\|_{X(I_{0})}+\|\Psi\|_{X(I_{0})}\big{)}^{\frac{4} {d-2}}\|\Psi\|_{X(I_{0})}\] \[\leq C\big{(}3\eta_{0}+C\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^ {2})}\big{)}^{\frac{4}{d-2}}\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\] \[\leq C\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\]
for \(\eta_{0},\tau>0\) sufficiently small. Given \(\varepsilon>0\), we can further shrink the value of \(\tau=\tau(\varepsilon)>0\) so that
\[\|f\|_{L^{1}_{I_{0}}L^{2}_{x}}\leq\varepsilon.\]
Thus, by the perturbation lemma (Lemma 2.5), for \(0<\varepsilon<\varepsilon_{0}\) with \(\varepsilon_{0}=\varepsilon_{0}(R)>0\) given by Lemma 2.5, we obtain
\[\|(v-w,\partial_{t}v-\partial_{t}w)\|_{L^{\infty}_{I_{0}}\dot{\mathcal{H}}^{1}_ {x}}+\|v-w\|_{L^{q}_{I_{0}}L^{r}_{x}}\leq C_{0}(R)\varepsilon\]
for some constant \(C_{0}(R)\geq 1\). In particular, we have
\[\big{\|}\big{(}v(t_{1})-w(t_{1}),\partial_{t}v(t_{1})-\partial_{t}w(t_{1}) \big{)}\big{\|}_{\dot{\mathcal{H}}^{1}_{x}}\leq C_{0}(R)\varepsilon. \tag{4.20}\]
If \(I_{0}=[t_{0},t_{0}+\tau]\), then we can stop the iterative argument to be performed below. If \(I_{0}\subsetneq[t_{0},t_{0}+\tau]\), then we move on to the second interval \(I_{1}\), and we can also assume that \(I_{1}\subseteq[t_{0},t_{0}+\tau]\). By (4.17), the Strichartz estimate (Lemma 2.3), and (4.20), we have
\[\begin{split}\big{\|}V(t-t_{1})\big{(}v(t_{1}),\partial_{t}v(t_{1})\big{)}\big{\|}_{X(I_{1})}&\leq\big{\|}V(t-t_{1})\big{(}w(t_{1}),\partial_{t}w(t_{1})\big{)}\big{\|}_{X(I_{1})}\\ &\quad+\big{\|}V(t-t_{1})\big{(}w(t_{1})-v(t_{1}),\partial_{t}w(t_{1})-\partial_{t}v(t_{1})\big{)}\big{\|}_{X(I_{1})}\\ &\leq 2\eta+C\cdot C_{0}(R)\varepsilon\\ &\leq 3\eta,\end{split}\]
where we choose \(\varepsilon=\varepsilon(R,\eta)>0\) to be sufficiently small such that \(C\cdot C_{0}(R)\varepsilon\leq\eta\). By repeating the argument used on \(I_{0}\), we can obtain the existence of \(v\) on \(I_{1}\) and
\[\|v\|_{X(I_{1})}\leq 9\eta\]
given \(0<3\eta\leq\eta_{0}\), and
\[\|f\|_{L^{1}_{I_{1}}L^{2}_{x}} \leq C\big{(}\|v\|_{X(I_{1})}+\|\Psi\|_{X(I_{1})}\big{)}^{\frac{4}{d-2}}\|\Psi\|_{X(I_{1})}\] \[\leq C\big{(}3\eta_{0}+C\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\big{)}^{\frac{4}{d-2}}\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})} \tag{4.21}\] \[\leq C\tau^{\theta}\|\phi\|_{\text{HS}(L^{2},L^{2})}\] \[\leq\varepsilon.\]
Thus, with (4.20) and (4.21), by the perturbation lemma (Lemma 2.5), we obtain
\[\|(v-w,\partial_{t}v-\partial_{t}w)\|_{L^{\infty}_{I_{1}}\dot{\mathcal{H}}^{1}_ {x}}+\|v-w\|_{L^{q}_{I_{1}}L^{r}_{x}}\leq C_{0}(R)\cdot C_{0}(R)\varepsilon=C_ {0}(R)^{2}\varepsilon\]
as long as \(0<C_{0}(R)\varepsilon<\varepsilon_{0}\).
Proceeding as above iteratively, we obtain the existence of \(v\) on \(I_{j}\) for all \(0\leq j\leq J^{\prime}\leq J-1\) such that \(\bigcup_{j=0}^{J^{\prime}}I_{j}=[t_{0},t_{0}+\tau]\), as long as \(0<3\eta\leq\eta_{0}\) and \(\varepsilon>0\) satisfies \(C\cdot C_{0}(R)^{J-1}\varepsilon\leq\eta\leq\frac{\eta_{0}}{3}\) and \(C_{0}(R)^{J-1}\varepsilon<\varepsilon_{0}\). For convenience, we let \(\eta=\frac{\eta_{0}}{3}\) so that \(J\) depends only on \(R\) and \(\eta_{0}\), and so the above restrictions on \(\varepsilon\) can be met by choosing \(\varepsilon=\varepsilon(R,\eta_{0})\) sufficiently small. This finishes the proof of Theorem 1.1.
### Global well-posedness on \(\mathbb{T}^{d}\)
In this subsection, we focus on global well-posedness of the defocusing energy-critical SNLW (1.1) in the periodic setting and present the proof of Theorem 1.2. We would like to invoke the finite speed of propagation of the wave equation to reduce our global well-posedness problem on \(\mathbb{T}^{d}\) to global well-posedness on \(\mathbb{R}^{d}\), which we have presented in Subsection 4.1. However, instead of using Itô's lemma to establish an energy bound for the solution \(u\) to (1.1), we use a Gronwall-type argument to show an a priori energy bound for the solution \(v\) to the perturbed NLW (1.7).
Let us first show the a priori energy bound in a slightly more abstract setting. Consider the following perturbed equation for \(v\) on \(\mathbb{R}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+z)=0 \\ &(v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}),\end{aligned}\right. \tag{4.22}\]
where \(\mathcal{N}(u)=|u|^{\frac{4}{d-2}}u\) and \(z\) is a space-time function satisfying some regularity assumptions to be specified below.
**Proposition 4.4**.: _Let \(d\geq 3\) and \((u_{0},u_{1})\in\mathcal{H}^{1}(\mathbb{R}^{d})\). Let \(z\) be a space-time function in the class_
\[C\big{(}\mathbb{R}_{+};W^{\sigma,\infty}\cap W^{\sigma,\frac{2d}{d-2}}\cap W^{ \sigma,\frac{2d+4}{d-2}}(\mathbb{R}^{d})\big{)}\cap C^{1}(\mathbb{R}_{+};W^{ \sigma-1,\infty}(\mathbb{R}^{d}))\]
_with \(\sigma\in\mathbb{R}\) satisfying_
\[\text{(i)}\ d=3:\ \sigma>\frac{1}{2}\quad\text{or}\quad\text{(ii)}\ d\geq 4:\ \sigma>0.\]
_Let \(v\) be the solution to the equation (4.22) with \((v,\partial_{t}v)|_{t=0}=(u_{0},u_{1})\) and let \(T^{*}=T^{*}_{\omega}(u_{0},u_{1})\) be the forward maximal time of existence. Then, given any \(T_{0}>0\), there exists a constant_
\[C=C\Big{(}T_{0},\|(u_{0},u_{1})\|_{\mathcal{H}^{1}},\|z\|_{L^{\infty}_{T_{0}}L ^{\infty}_{x}},\|z\|_{L^{\infty}_{T_{0}}L^{\frac{2d}{d-2}}_{x}},\|z\|_{L^{ \infty}_{T_{0}}L^{\frac{2d+4}{d-2}}_{x}},\|\partial_{t}z\|_{L^{\infty}_{T_{0} }W^{\sigma-1,\infty}_{x}}\Big{)}>0\]
_such that for any stopping time \(T\) with \(0<T<\min(T^{*},T_{0})\) almost surely, we have_
\[\sup_{0\leq t\leq T}E(\vec{v}(t))\leq C.\]
Proof.: We consider two cases.
**Case 1: \(d\geq 4\)**.
In this case, by assumption, we have \(z\in C([0,T];L^{\infty}(\mathbb{R}^{d}))\). Thus, by (4.22), we have
\[\partial_{t}E(\vec{v}(t)) =\int_{\mathbb{R}^{d}}\partial_{t}v(t)\big{(}\partial_{t}^{2}v(t) -\Delta v(t)+|v(t)|^{\frac{4}{d-2}}v(t)\big{)}dx \tag{4.23}\] \[=-\int_{\mathbb{R}^{d}}\partial_{t}v(t)\big{(}\mathcal{N}(v+z)(t )-\mathcal{N}(v)(t)\big{)}dx.\]
By the fundamental theorem of calculus, we have
\[\big{|}\mathcal{N}(v+z)-\mathcal{N}(v)\big{|}=\bigg{|}\int_{0}^{1}\mathcal{N} ^{\prime}(v+\alpha z)zd\alpha\bigg{|}\lesssim|z||v|^{\frac{4}{d-2}}+|z|^{ \frac{d+2}{d-2}}. \tag{4.24}\]
Thus, by (4.23), (4.24), Hölder's inequality, and the Cauchy-Schwarz inequality, we obtain
\[\partial_{t}E(\vec{v}(t)) \lesssim\|z(t)\|_{L^{\infty}}\int_{\mathbb{R}^{d}}|\partial_{t}v(t)||v(t)|^{\frac{4}{d-2}}dx+\int_{\mathbb{R}^{d}}|\partial_{t}v(t)||z(t)|^{\frac{d+2}{d-2}}dx\] \[\leq\|z(t)\|_{L^{\infty}}\|\partial_{t}v(t)\|_{L^{2}}\|v(t)\|_{L^{\frac{8}{d-2}}}^{\frac{4}{d-2}}+\|z(t)\|_{L^{\frac{2d+4}{d-2}}}^{\frac{d+2}{d-2}}\|\partial_{t}v(t)\|_{L^{2}}\] \[\leq C\big{(}\|z(t)\|_{L^{\infty}},\|z(t)\|_{L^{\frac{2d+4}{d-2}}}\big{)}\big{(}1+E(\vec{v}(t))\big{)}\]
as long as \(\frac{8}{d-2}\leq\frac{2d}{d-2}\), or equivalently, \(d\geq 4\). By Gronwall's inequality, we get
\[E(\vec{v}(t))\leq C(\|(u_{0},u_{1})\|_{\mathcal{H}^{1}})e^{C(z)t}\]
for any \(0<t\leq T\), which implies the desired energy bound.
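Spelling out the Gronwall step: abbreviating \(C_{z}:=C\big{(}\|z\|_{L^{\infty}_{T}L^{\infty}_{x}},\|z\|_{L^{\infty}_{T}L^{\frac{2d+4}{d-2}}_{x}}\big{)}\), the differential inequality \(\partial_{t}E(\vec{v}(t))\leq C_{z}\big{(}1+E(\vec{v}(t))\big{)}\) integrates to
\[1+E(\vec{v}(t))\leq\big{(}1+E(\vec{v}(0))\big{)}e^{C_{z}t},\]
and \(E(\vec{v}(0))\) is controlled by \(\|(u_{0},u_{1})\|_{\mathcal{H}^{1}}\) via the Sobolev embedding \(\dot{H}^{1}(\mathbb{R}^{d})\hookrightarrow L^{\frac{2d}{d-2}}(\mathbb{R}^{d})\).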
**Case 2: \(d=3\)**.
In this case, by assumption, we have \(z\in C([0,T];W^{\frac{1}{2}+\varepsilon,\infty}(\mathbb{R}^{3}))\) and \(\partial_{t}z\in C([0,T];W^{-\frac{1}{2}+\varepsilon,\infty}(\mathbb{R}^{3}))\), where \(\varepsilon>0\) is sufficiently small. By (4.23) and Taylor's theorem, we have
\[\partial_{t}E(\vec{v}(t)) =-\int_{\mathbb{R}^{3}}\partial_{t}v(t)\big{(}\mathcal{N}(v+z)(t)-\mathcal{N}(v)(t)\big{)}dx \tag{4.25}\] \[=-5\int_{\mathbb{R}^{3}}\partial_{t}v(t)\cdot|v(t)|^{4}z(t)dx\] \[\quad-10\int_{\mathbb{R}^{3}}\partial_{t}v(t)\cdot|v(t)+\theta z(t)|^{2}(v(t)+\theta z(t))z(t)^{2}dx\] \[=:I_{1}+I_{2}\]
for some \(\theta\in(0,1)\). For \(I_{2}\), by the Cauchy-Schwarz inequality, Hölder's inequality, and Cauchy's inequality, we obtain
\[|I_{2}| \lesssim\int_{\mathbb{R}^{3}}|\partial_{t}v(t)|\big{(}|v(t)|^{3}z (t)^{2}+z(t)^{5}\big{)}dx \tag{4.26}\] \[\leq\|\partial_{t}v(t)\|_{L^{2}}\big{(}\|z(t)\|_{L^{\infty}}^{4} \|v(t)\|_{L^{6}}^{6}+\|z(t)\|_{L^{10}}^{10}\big{)}^{\frac{1}{2}}\] \[\leq C(\|z(t)\|_{L^{\infty}},\|z(t)\|_{L^{10}})\big{(}1+E(\vec{v} (t))\big{)}.\]
We now consider \(I_{1}\). For \(0\leq t_{1}\leq t_{2}\leq T\), by integration by parts, Hölder's inequalities, and Young's inequalities, we obtain
\[\int_{t_{1}}^{t_{2}}I_{1}dt^{\prime} =-\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}\partial_{t}\big{(}|v |^{4}v\big{)}(t^{\prime})z(t^{\prime})dxdt^{\prime} \tag{4.27}\] \[=-\int_{\mathbb{R}^{3}}|v(t_{2})|^{4}v(t_{2})z(t_{2})dx+\int_{ \mathbb{R}^{3}}|v(t_{1})|^{4}v(t_{1})z(t_{1})dx\] \[\quad+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}|v(t^{\prime})|^{ 4}v(t^{\prime})\partial_{t}z(t^{\prime})dxdt^{\prime}\] \[\leq\delta\|v(t_{2})\|_{L^{6}}^{6}+C(\delta)\|z(t_{2})\|_{L^{6}}^ {6}+\|v(t_{1})\|_{L^{6}}^{6}+\|z(t_{1})\|_{L^{6}}^{6}\] \[\quad+\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}|v(t^{\prime})|^{ 4}v(t^{\prime})\partial_{t}z(t^{\prime})dxdt^{\prime}\]
for some \(0<\delta<1\). By duality, Hölder's inequality, Lemma 2.1, and Lemma 2.2, we have
\[\begin{split}\int_{t_{1}}^{t_{2}}&\int_{\mathbb{R}^{3}}|v(t^{\prime})|^{4}v(t^{\prime})\partial_{t}z(t^{\prime})dxdt^{\prime}\\ &=\int_{t_{1}}^{t_{2}}\int_{\mathbb{R}^{3}}\langle\nabla\rangle^{\frac{1}{2}-\varepsilon}\big{(}|v|^{4}v\big{)}(t^{\prime})\langle\nabla\rangle^{-\frac{1}{2}+\varepsilon}(\partial_{t}z)(t^{\prime})dxdt^{\prime}\\ &\lesssim\int_{t_{1}}^{t_{2}}\big{\|}\langle\nabla\rangle^{\frac{1}{2}-\varepsilon}\big{(}|v|^{4}v\big{)}(t^{\prime})\big{\|}_{L^{\frac{3}{3-\varepsilon}}}\big{\|}\langle\nabla\rangle^{-\frac{1}{2}+\varepsilon}(\partial_{t}z)(t^{\prime})\big{\|}_{L^{\infty}}dt^{\prime}\\ &\lesssim\|\partial_{t}z\|_{L^{\infty}_{T}W_{x}^{-\frac{1}{2}+\varepsilon,\infty}}\int_{t_{1}}^{t_{2}}\|v(t^{\prime})\|_{L^{6}}^{4}\big{\|}\langle\nabla\rangle^{\frac{1}{2}-\varepsilon}v(t^{\prime})\big{\|}_{L^{\frac{3}{1-\varepsilon}}}dt^{\prime}\\ &\leq\|\partial_{t}z\|_{L^{\infty}_{T}W_{x}^{-\frac{1}{2}+\varepsilon,\infty}}\int_{t_{1}}^{t_{2}}E(\vec{v}(t^{\prime}))^{\frac{2}{3}}\|\langle\nabla\rangle v(t^{\prime})\|_{L^{2}}^{\frac{1}{2}}\|v(t^{\prime})\|_{L^{6}}^{\frac{1}{2}}dt^{\prime}\\ &\leq\|\partial_{t}z\|_{L^{\infty}_{T}W_{x}^{-\frac{1}{2}+\varepsilon,\infty}}\bigg{(}1+\int_{t_{1}}^{t_{2}}E(\vec{v}(t^{\prime}))dt^{\prime}\bigg{)}.\end{split} \tag{4.28}\]
Combining (4.25), (4.26), (4.27), and (4.28), we obtain
\[\begin{split} E(\vec{v}(t_{2}))&\leq C\Big{(}\|z \|_{L^{\infty}_{T}L^{\infty}_{x}},\|z\|_{L^{\infty}_{T}L^{10}_{x}},\|\partial_ {t}z\|_{L^{\infty}_{T}W_{x}^{-\frac{1}{2}+\varepsilon,\infty}}\Big{)}\int_{t _{1}}^{t_{2}}E(\vec{v}(t^{\prime}))dt^{\prime}\\ &\quad+C\Big{(}\|z\|_{L^{\infty}_{T}L^{\infty}_{x}},\|z\|_{L^{ \infty}_{T}L^{6}_{x}},\|z\|_{L^{\infty}_{T}L^{10}_{x}},\|\partial_{t}z\|_{L^{ \infty}_{T}W_{x}^{-\frac{1}{2}+\varepsilon,\infty}},\|v(t_{1})\|_{L^{6}}\Big{)} \end{split}\]
for any \(0\leq t_{1}\leq t_{2}\leq T\). By Gronwall's inequality, we get
\[E(\vec{v}(t))\leq C(\|(u_{0},u_{1})\|_{\mathcal{H}^{1}})e^{C(z,\|(u_{0},u_{1}) \|_{\mathcal{H}^{1}})t}\]
for any \(0<t\leq T\), which implies the desired energy bound.
**Remark 4.5**.: To make the computations in the above proof more rigorous, we need to work with smooth solutions \((v_{N},\partial_{t}v_{N})\) to (4.22) with truncated initial data \((P_{\leq N}u_{0},P_{\leq N}u_{1})\) and truncated perturbation term \(P_{\leq N}z\), where \(P_{\leq N}\) denotes the sharp frequency projection onto \(\{|n|\leq N\}\). Using similar arguments as in Lemma 4.1, we can show that \((v_{N},\partial_{t}v_{N})\) converges to \((v,\partial_{t}v)\) in \(C([0,T];\dot{\mathcal{H}}^{1}(\mathbb{R}^{d}))\). After establishing an upper bound for \(E(v_{N},\partial_{t}v_{N})\) that is independent of \(N\), we can take \(N\to\infty\) to obtain the desired energy bound for \(E(\vec{v})\).
We are now ready to show Theorem 1.2 by reducing it to the \(\mathbb{R}^{d}\) case. To achieve this, we adjust the use of the finite speed of propagation as in [39, Appendix A] to our stochastic setting.
Proof of Theorem 1.2.: Let \(T>0\) be a target time and \(N\in\mathbb{N}\). Given initial data \((u_{0},u_{1})\in\mathcal{H}^{1}(\mathbb{T}^{d})\), we define
\[u_{j,T}(x) :=\eta_{T}(x)\sum_{n\in\mathbb{Z}^{d}}\widehat{u_{j}}(n)e^{in \cdot x},\quad x\in\mathbb{R}^{d}\] \[u_{j,N,T}(x) :=\eta_{T}(x)\sum_{|n|\leq N}\widehat{u_{j}}(n)e^{in\cdot x},\quad x \in\mathbb{R}^{d}\]
for \(j=0,1\), where \(\eta_{T}\) is the smooth cutoff function defined in (2.8), which equals \(1\) on \([-2\pi T,2\pi T]^{d}\).
We consider the following equation for \(\mathbf{u}_{N,T}\) on \(\mathbb{R}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}\mathbf{u}_{N,T}-\Delta \mathbf{u}_{N,T}+\mathcal{N}(\mathbf{u}_{N,T})=\eta_{T}P_{\leq N}(\phi\xi)\\ &(\mathbf{u}_{N,T},\partial_{t}\mathbf{u}_{N,T})|_{t=0}=(u_{0,N, T},u_{1,N,T}),\end{aligned}\right. \tag{4.29}\]
where \(P_{\leq N}\) denotes the sharp frequency cutoff onto \(\{|n|\leq N\}\) and \(\mathcal{N}(u)=|u|^{\frac{4}{d-2}}u\). Since the initial data and the noise are smooth, there exists a unique (smooth) global solution \(\mathbf{u}_{N,T}\) to (4.29) on \(\mathbb{R}^{d}\). By the finite speed of propagation, we have that \(u_{N}:=\mathbf{u}_{N,T}|_{[0,T]\times\mathbb{T}^{d}}\) is a solution to the following equation on \(\mathbb{T}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}u_{N}-\Delta u_{N}+ \mathcal{N}(u_{N})=P_{\leq N}(\phi\xi)\\ &(u_{N},\partial_{t}u_{N})|_{t=0}=(P_{\leq N}u_{0},P_{\leq N}u_{1 }).\end{aligned}\right.\]
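A word on the application of the finite speed of propagation here: identifying \(\mathbb{T}^{d}\) with \([-\pi,\pi)^{d}\), the backward light cone of any point \((t,x)\in[0,T]\times[-\pi,\pi)^{d}\) is contained in \(\{y\in\mathbb{R}^{d}:|y|_{\infty}\leq\pi+T\}\), on which \(\eta_{T}\equiv 1\) since \(\pi+T\leq 2\pi T\) for \(T\geq 1\) (for \(0<T<1\), one can simply use \(\eta_{1}\) instead). Hence the data and the forcing of (4.29) agree with their periodic counterparts on the relevant domain of dependence.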
We recall the definition of \(\mathbf{\Phi}_{T}\) in (2.9) and define \(\mathbf{\Phi}_{N,T}\) as \(\mathbf{\Phi}_{T}\) with the summation restricted to \(|n|\leq N\). By Lemma 2.8, it is not hard to see that \(\mathbf{\Phi}_{N,T}\) converges to \(\mathbf{\Phi}_{T}\) in \(C([0,T];W^{s+1-\varepsilon,\infty}(\mathbb{R}^{d}))\) given \(\phi\in\text{HS}(L^{2},H^{s})\). Note that \(\mathbf{\Phi}_{N,T}\) and \(\mathbf{\Phi}_{T}\) satisfy
\[\partial_{t}^{2}\mathbf{\Phi}_{N,T}-\Delta\mathbf{\Phi}_{N,T}=\eta_{T}P_{\leq N }(\phi\xi)\quad\text{and}\quad\partial_{t}^{2}\mathbf{\Phi}_{T}-\Delta\mathbf{ \Phi}_{T}=\eta_{T}(\phi\xi),\]
respectively, both with zero initial data. We also recall that \(\Phi\) is the stochastic convolution as defined in (2.7) and we let \(\Phi_{N}\) be the truncation of \(\Phi\) onto frequencies \(\{|n|\leq N\}\). By the finite speed of propagation for the linear solutions, we have
\[\Phi_{N}=\mathbf{\Phi}_{N,T}|_{[0,T]\times\mathbb{T}^{d}}\quad\text{and}\quad \Phi=\mathbf{\Phi}_{T}|_{[0,T]\times\mathbb{T}^{d}}.\]
We now define \(\mathbf{v}_{N,T}=\mathbf{u}_{N,T}-\mathbf{\Phi}_{N,T}\), which is the smooth global solution to the following perturbed NLW on \(\mathbb{R}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}\mathbf{v}_{N,T}-\Delta \mathbf{v}_{N,T}+\mathcal{N}(\mathbf{v}_{N,T}+\mathbf{\Phi}_{N,T})=0\\ &(\mathbf{v}_{N,T},\partial_{t}\mathbf{v}_{N,T})|_{t=0}=(u_{0,N, T},u_{1,N,T}).\end{aligned}\right.\]
In particular, \(v_{N}:=\mathbf{v}_{N,T}|_{[0,T]\times\mathbb{T}^{d}}\) is the smooth global solution to the following perturbed NLW on \(\mathbb{T}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}v_{N}-\Delta v_{N}+ \mathcal{N}(v_{N}+\Phi_{N})=0\\ &(v_{N},\partial_{t}v_{N})|_{t=0}=(P_{\leq N}u_{0},P_{\leq N}u_{1 }).\end{aligned}\right.\]
We also consider the following equation for \(\mathbf{v}_{T}\) on \(\mathbb{R}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}\mathbf{v}_{T}-\Delta \mathbf{v}_{T}+\mathcal{N}(\mathbf{v}_{T}+\mathbf{\Phi}_{T})=0\\ &(\mathbf{v}_{T},\partial_{t}\mathbf{v}_{T})|_{t=0}=(u_{0,T},u_{1, T}).\end{aligned}\right. \tag{4.30}\]
It is easy to see that \((u_{0,T},u_{1,T})\in\mathcal{H}^{1}(\mathbb{R}^{d})\) given \((u_{0},u_{1})\in\mathcal{H}^{1}(\mathbb{T}^{d})\). In addition, by Lemma 2.8, we have \(\mathbf{\Phi}_{T}\in C([0,T];L^{\infty}(\mathbb{R}^{d}))\) almost surely for \(4\leq d\leq 6\) and \(\mathbf{\Phi}_{T}\in C([0,T];W^{\frac{1}{2}+\varepsilon,\infty}(\mathbb{R}^{d}))\), \(\partial_{t}\mathbf{\Phi}_{T}\in C([0,T];W^{-\frac{1}{2}+\varepsilon,\infty}(\mathbb{R}^{d}))\) almost surely for \(d=3\) and \(\varepsilon>0\) sufficiently small. By Proposition 4.4, we have an a priori energy bound for \(\vec{\mathbf{v}}_{T}\), so that repeating the proof of Theorem 1.1 yields global well-posedness of the equation (4.30) (note that we do not need \(H^{1}(\mathbb{R}^{d})\)-regularity of \(\mathbf{\Phi}_{T}\)).
By using a similar argument as in Lemma 4.1, we can show that
\[\|\vec{\mathbf{v}}_{T}-\vec{\mathbf{v}}_{N,T}\|_{L^{\infty}_{T}\dot{\mathcal{H}}^{1}_{x}(\mathbb{R}^{d})}\longrightarrow 0\]
and
\[\|\mathbf{v}_{T}-\mathbf{v}_{N,T}\|_{X([0,T]\times\mathbb{R}^{d})}\longrightarrow 0\]
as \(N\to\infty\), where we recall that the \(X(I\times\mathbb{R}^{d})\)-norm is as defined in (2.1). See also [39, Appendix A] for the precise steps for obtaining the convergence of \(\mathbf{v}_{N,T}\) to \(\mathbf{v}_{T}\). This in particular implies that \(v_{N}=\mathbf{v}_{N,T}|_{[0,T]\times\mathbb{T}^{d}}\) converges to \(v:=\mathbf{v}_{T}|_{[0,T]\times\mathbb{T}^{d}}\) in \(C([0,T];\dot{H}^{1}(\mathbb{T}^{d}))\cap X([0,T]\times\mathbb{T}^{d})\) and that \(\partial_{t}v_{N}\) converges to \(\partial_{t}v\) in \(C([0,T];L^{2}(\mathbb{T}^{d}))\). Since \(\mathbf{v}_{T}\) satisfies (4.30) and \(\Phi=\mathbf{\Phi}_{T}|_{[0,T]\times\mathbb{T}^{d}}\), we have that \(v\) satisfies the perturbed NLW on \(\mathbb{T}^{d}\):
\[\left\{\begin{aligned} &\partial_{t}^{2}v-\Delta v+\mathcal{N}(v+ \Phi)=0\\ &(v,\partial_{t}v)|_{t=0}=(u_{0},u_{1}),\end{aligned}\right.\]
and that \(v\) satisfies the following Duhamel formulation:
\[v(t)=V_{\mathbb{T}^{d}}(t)(u_{0},u_{1})-\int_{0}^{t}S_{\mathbb{T}^{d}}(t-t^{ \prime})\big{(}\mathcal{N}(v+\Phi)(t^{\prime})\big{)}dt^{\prime}, \tag{4.31}\]
where \(V_{\mathbb{T}^{d}}\) and \(S_{\mathbb{T}^{d}}\) have the same form as \(V\) and \(S\) in (1.5) but on the periodic domain \(\mathbb{T}^{d}\). From (4.31), we can easily see that \(v\in C([0,T];L^{2}(\mathbb{T}^{d}))\). Thus, we deduce that \(u:=\Phi+v\) is a solution to (1.1) on \(\mathbb{T}^{d}\) in the class
\[\Phi+C([0,T];\mathcal{H}^{1}(\mathbb{T}^{d}))\subset C([0,T];\mathcal{H}^{s+1 -\varepsilon}(\mathbb{T}^{d})),\]
where the inclusion follows from Lemma 2.8.
**Acknowledgements.** The authors would like to thank Professor Tadahiro Oh for suggesting this problem and for his support and advice throughout the entire project. E.B. was supported by Unité de Mathématiques Pures et Appliquées UMR 5669 ENS de Lyon / CNRS. G.L. was supported by the EPSRC New Investigator Award (grant no. EP/S033157/1). R.L. was supported by the European Research Council (grant no. 864138 "SingStochDispDyn").
## 1 Introduction
Transfer learning is a promising methodology that focuses on transferring pretrained representation domains to nearby target domains Zhuang et al. (2021). For instance, finetuning a pretrained language model on a small annotated dataset enables high-performance text sentiment analysis Hazarika et al. (2020). Recent fundamental models on diverse modalities such as language models (_e.g._, RoBERTa Liu et al. (2019), GPT-3 Brown et al. (2020)), visual models (_e.g._, ViT Dosovitskiy et al. (2021)), and multimodal models (_e.g._, CLIP Radford et al. (2021), MEET Nie et al. (2022)) have millions of parameters and can provide robust modal representations. With such advancement, multimodal transfer learning aims to transform pretrained representations of diverse modalities into a common domain space for effective multimodal fusion Zhen et al. (2022), Albanie et al. (2018). It has been broadly applied to multimodal tasks such as video-level sentiment analysis Yu et al. (2021), Han et al. (2021), Hu et al. (2022), and audio/text-video retrieval tasks Zeng et al. (2020), Zhen et al. (2019), Han et al. (2021), Zeng et al. (2022).
Existing works on multimodal transfer learning unify adversarial learning to regularize the embedding distributions between different modalities, leading to effective multimodal fusion He et al. (2017), Wang et al. (2017), Zhang et al. (2017), Zhen et al. (2019), Wang et al. (2022). However, conventional systems are typically built on the assumption that all modalities exist, and the lack of modalities always leads to poor inference performance. For instance, vision-language models typically fail to achieve expected performance when given only text data as input. Furthermore, extracting pretrained embeddings for all modalities is computationally inefficient for inference. Therefore, improving robust
multimodal transfer learning to achieve high efficiency-performance inference is crucial for practical applications, which motivates this work.
Knowledge distillation (KD) was first proposed for achieving an efficient student model by transferring the knowledge embedded in the predicted logits of the teacher model to a smaller student model Hinton et al. (2015). Recent works have expanded it to multimodal transfer learning by distilling mutual information from one modality to another Gupta et al. (2016); Thoker and Gall (2019); Jiao et al. (2020); Hou et al. (2020); Pan et al. (2020). However, these works always need to sacrifice the performance of the teacher model, requiring the teacher model and the student model to be distributed in neighboring domains (_e.g._, vision\(\rightarrow\)vision, text\(\rightarrow\)text).
In this paper, with an intuition that the best learning performance comes with professional advisers and smart students, to achieve high efficiency-performance multimodal knowledge distillation, we propose _VideoAdviser_ shown in Figure 1, a video knowledge distillation method to transfer multimodal knowledge from a strong multimodal fundamental model (teacher) to a powerful specific modal fundamental model (student) via optimizing a step-distillation objective loss. As CLIP is a multimodal fundamental model pretrained with cross-modal contrastive learning on tremendous image-text pairs Radford et al. (2021), we employ it as the teacher model to obtain multimodal knowledge of video-enhanced prompts by incorporating the video and text prompt representations. The teacher model utilizes CLIP's visual and text encoders to obtain video and text prompt embeddings while keeping the pretrained weights frozen to preserve the multimodal representation space learned by CLIP. By adapting transformer-based modules on these embeddings and extracted frame-level facial expression features, the teacher model acquires expressive multimodal knowledge of video-enhanced prompts by performing video and text prompt representation learning. To sufficiently absorb distilled multimodal knowledge from the teacher model, we employ a large-scale language model RoBERTa Liu et al. (2019) as the student model. Since RoBERTa is a transformer-based architecture composed of huge parameters, we finetune its full parameters to leverage RoBERTa's powerful architecture to achieve high-performance student models for inference. In addition, we propose a step-distillation objective loss to distill coarse-fine grained multimodal knowledge to further improve the multimodal knowledge distillation. Motivated by multiscale representation learning enabling the fusion of enriched coarse-fine grained representations Li et al. (2022); Jiao et al. (2023), we consider that multitask learning with different target
Figure 1: A conceptual diagram illustrates the difference between the conventional system and our system: our system focuses on transferring multimodal knowledge from a multimodal fundamental model (_e.g._, CLIP) to a language fundamental model (_e.g._, RoBERTa-Large), and requires text only to achieve high efficiency-performance inference. On the other hand, the conventional system focuses on multimodal fusion and requires complex modules (diverse modal encoders and a multimodal fusion module) for inference.
granularities allows the model to acquire representative knowledge at diverse granularities. For instance, classification encourages the model to separate the data point into multiple categorical classes, each representing an interval of consecutive real values, to acquire knowledge at a coarse granularity. In contrast, regression enables the model to map the data point to continuous real values instead of classes, to learn knowledge at a fine granularity. To this end, in the first step, the teacher model distills multimodal knowledge of video-enhanced prompts from classification logits to a regression logit to unify knowledge at both coarse and fine granularity; in the second step, the unified multimodal knowledge is further distilled from the teacher model to the student model.
We evaluate _VideoAdviser_ in two challenging multimodal tasks: video-level sentiment analysis (MOSI and MOSEI datasets) and audio-visual retrieval (VEGAS dataset). The RoBERTa-based student model requiring only text data as input outperforms the state-of-the-art multimodal model's MAE score by **12.3%** for MOSI and **2.4%** for MOSEI. Our method also enhances the state-of-the-art audio-visual cross-modal model by **3.4%** mAP score for VEGAS without additional computations for inference. Ablation studies further demonstrate that our method is able to improve the state-of-the-art method's MAE score by over **3.0%** with almost half the parameters. These results suggest the strengths of our method for achieving high efficiency-performance multimodal transfer learning.
## 2 Related work
### Multimodal fundamental model
CLIP Radford et al. (2021) is a multimodal fundamental model that learns transferable visual models from natural language supervision on a dataset of 400 million (image, text) pairs. It jointly trains an image encoder and a text encoder using a contrastive learning objective to obtain a joint multimodal representation space. Inspired by its remarkable zero-shot generalization ability for downstream image tasks, the work Ni et al. (2022) proposes XCLIP to expand pretrained CLIP to general video recognition by finetuning it on video data using a video-specific prompting module that enhances the text representation with the video representation. The work Gu et al. (2021) utilizes a pretrained CLIP for open-vocabulary object detection by distilling visual knowledge from cropped image regions. In this work, we adapt a pretrained CLIP to distill multimodal knowledge of video-enhanced prompts from the teacher model to the student model via a step-distillation objective loss.
### Knowledge distillation based transfer Learning
In addition to achieving a lightweight student model by minimizing the KL divergence between the probabilistic outputs of a teacher and student model Hinton et al. (2015), recent works on knowledge distillation focus on transferring representational knowledge from a teacher model to a student model Tian et al. (2020); Gu et al. (2021); Wang and Yoon (2022). For instance, these works Wang et al. (2020, 2020) distill linguistic knowledge from a text encoder to a visual encoder by learning the mapping between modal representations. The work Croitoru et al. (2021) utilizes multiple text encoders to perform cross-modal knowledge distillation for stronger text-video retrieval. The work Dai et al. (2022) distills expressive text representations from a generation model to the text encoder of CLIP by minimizing text-text feature distance. However, these works mostly focus on knowledge distillation in the common modal domain or show limited performance in the cross-modal domain. In contrast, to achieve expressive knowledge distillation for multimodal transfer learning tasks, we propose a RoBERTa-based student model to improve multimodal knowledge distillation by leveraging its powerful transformer architecture.
### Video-level sentiment analysis task
Recent works Hazarika et al. (2020); Han et al. (2021); Yu et al. (2021) on video-level sentiment analysis tasks focus on improving modality fusion. The work Wang et al. (2017) proposes a VAE-based adversarial learning method to map multimodal representations to a joint domain space for improving the modality fusion process. The work Hu et al. (2022) achieves SOTA performance on the MOSI Zadeh et al. (2016) and MOSEI Bagher Zadeh et al. (2018) datasets by introducing a pretrained modality fusion module that fuses multimodal representations from multi-level textual information by injecting acoustic and visual signals into a text encoder. However, all these works require preprocessed multimodal embeddings as input, which is inefficient for inference. In contrast, we employ a knowledge distillation approach that requires only one specific modality, leading to efficient inference.
### Audio-visual retrieval task
Recent works on audio-visual retrieval tasks exploit supervised representation learning methods to generate new features across modalities in a common space Han et al. (2021); Rasiwasia et al. (2014); Zeng et al. (2018); Zheng et al. (2020); Zhen et al. (2019); Zeng et al. (2020, 2022, 2023), such that the audio-visual features can be measured directly. Inspired by C-CCA Rasiwasia et al. (2014), which aims at finding linear transformations for each modality, C-DCCA Zeng et al. (2018) tries to learn non-linear features in the common space by using deep learning methods. Deep learning methods that use rank losses to optimize the predicted distances, such as the TNN-C-CCA Zeng et al. (2020) and CCTL Zeng et al. (2022) models, apply triplet losses as the objective functions and achieve better results than other CCA-variant methods. The EICS model Zeng et al. (2023) learns two different common spaces to capture modality-common and modality-specific features, achieving the SOTA results so far. In this paper, we enable our method to enhance the extracted audio and visual representations of the SOTA model by distilling multimodal knowledge from a CLIP-based teacher model.
## 3 Problem setting
This work focuses on video-level sentiment analysis and audio-visual retrieval tasks. For the video-level sentiment analysis task, each data point consists of a video \(\mathbf{M}\), the cropped sequential face images \(I\), the divided speech text \(T_{speech}\), and the class text \(T_{class}\); our goal is to predict the sentiment intensity \(\mathcal{Z}_{pred}\in[-3,3]\) given only the speech text \(T_{speech}\) for inference. For the audio-visual retrieval task, assume that \(\Gamma=\{\gamma_{i}\}_{i=1}^{N}\) is a video collection with \(\gamma_{i}=\{a_{i},v_{i}\}\), where \(N\) indicates the data size, and \(a_{i}\in\mathbb{R}^{D1}\) and \(v_{i}\in\mathbb{R}^{D2}\) are audio and visual features from different feature spaces. Our target is to map them into a common space via mapping functions \(f(x)\) and \(g(x)\) to generate new features \(f(a_{i})\) and \(g(v_{i})\). As a result, each query \(a_{i}\), for example, obtains a rank list over the other modality based on the similarity between the query and each \(v_{j}\) (\(i\neq j\)).
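Once the mapping functions \(f(x)\) and \(g(x)\) are learned, cross-modal retrieval reduces to ranking by similarity in the common space. The following minimal NumPy sketch (illustrative only; the function and variable names are ours and the use of cosine similarity is an assumption, not taken from any released implementation) shows this ranking step:

```python
import numpy as np

def retrieve(query_feats: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Rank gallery items for each query by cosine similarity in the common space.

    query_feats:   (N, D) mapped features, e.g., f(a_i) for audio queries.
    gallery_feats: (M, D) mapped features, e.g., g(v_j) for visual items.
    Returns an (N, M) array of gallery indices, best match first.
    """
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                    # pairwise cosine similarities
    return np.argsort(-sim, axis=1)  # descending similarity yields the rank lists
```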
## 4 Methodology
In this section, we explain our method _VideoAdviser_ in detail. As shown in Fig. 2, our method consists of a CLIP-based model as the teacher (SS 4.1) and a RoBERTa-based model as the student (SS 4.2). The teacher and student models are
Figure 2: Architecture of _VideoAdviser_ using a CLIP-based model (the teacher) to distill multimodal knowledge of video-enhanced prompts to a RoBERTa-based model (the student): the teacher model utilizes pretrained CLIP’s text and visual encoders, and a facial expression encoder to obtain the sentiment class text embedding, the frame-level embedding, and the facial expression embedding. Then, the teacher model employs CCT, MIT, MLP, and a video-specific prompting module, and minimizes a binary sentiment classification loss and a sentiment regression loss. Meanwhile, the student model is finetuned on speech text by minimizing a sentiment regression loss and a step-distillation loss (the region in purple). During inference, the speech text is used to enable sentiment intensity prediction. Here, CCT, MIT, and MLP stand for the cross-frame communication transformer, multi-frame integration transformer, and multi-layer perceptron, respectively.
jointly trained to achieve knowledge distillation across modalities. The student model enables sentiment intensity prediction by giving only a speech text for inference (SS 4.3). We use \(\mathcal{F}(\cdot)\), \(\mathcal{V}(\cdot)\), \(\mathcal{P}(\cdot)\) and \(\mathcal{T}(\cdot)\) to denote the facial expression encoder, visual encoder, prompt encoder, and text encoder.
### The CLIP-based teacher model
Facial expression embedding. To enhance the visual representations of the teacher model for sentiment intensity prediction, we first use OpenFace Baltrusaitis et al. (2016) to crop face images \(\{I_{i}\}_{i=1}^{T}\in\mathbb{R}^{P^{2}\times 3}\), each of size \(P\times P\) pixels, from \(T\) sampled video frames; then, we extract the frame-level facial expression embedding \(\mathbf{v}^{(f)}\in\mathbb{R}^{T\times D}\) with a facial expression encoder \(\mathcal{F}(\cdot)\) Albanie and Vedaldi (2016) that is pretrained on the VGG-Face dataset Parkhi et al. (2015). Here, \(\mathbf{v}^{(f)}\) is an 8-dimensional sequential vector of length 64 (\(T=64\), \(D=8\)). More details of the pretrained model are available on Albanie's website 1.
Footnote 1: [https://www.robots.ox.ac.uk/~albanie/mcn-models.html](https://www.robots.ox.ac.uk/~albanie/mcn-models.html)
\[\mathbf{v}^{(f)}=\mathcal{F}(\{I_{i}\}_{i=1}^{T}) \tag{1}\]
Visual embedding. To fully transfer the powerful generality of pretrained CLIP Radford et al. (2021) from image to video, we freeze the parameters of the pretrained CLIP visual encoder \(\mathcal{V}(\cdot)\) to obtain the frame-level visual embedding \(\mathbf{v}^{(v)}\in\mathbb{R}^{T\times D}\), where \(T\) denotes the number of sampled video frames and \(D\) is the dimension of the visual embedding. Following Ni et al. (2022), given a video clip \(M\in\mathbb{R}^{T\times H\times W\times 3}\) of \(T\) sampled video frames with \(H\times W\) pixels, we use ViT-L/14 Dosovitskiy et al. (2021) to first divide the \(t\)-th frame into \(N\) patches \(\{x_{t,i}\}_{i=1}^{N}\in\mathbb{R}^{P^{2}\times 3}\), where \(t\in T\) and \(N=HW/P^{2}\). Then, the patches \(\{x_{t,i}\}_{i=1}^{N}\) are mapped to \(\mathbf{v}^{(v)}=\{v_{t}^{(v)}\}_{t=1}^{T}\) with a linear transformation \(f_{m}:\mathbb{R}^{P^{2}\times 3}\rightarrow\mathbb{R}^{3P^{2}\times D}\).
\[\mathbf{v}^{(v)}=\mathcal{V}(f_{m}(\{\mathbf{x}_{t}\}_{t=1}^{T})) \tag{2}\]
Text prompt embedding. We employ the text encoder \(\mathcal{P}(\cdot)\) of pretrained CLIP to obtain the text prompt embedding \(\mathbf{v}^{(p)}\in\mathbb{R}^{C\times D}\) of \(C\) sentiment classes by giving the sentiment class label \(T_{class}\in\{\text{negative,positive}\}\), where the "positive" class includes \(0\). A text prompt such as "A video with the \(\{T_{class}\}\) face" is generated with a text prompt generator \(f_{g}\) and encoded as
\[\mathbf{v}^{(p)}=\mathcal{P}(f_{g}(T_{class})) \tag{3}\]
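As a concrete illustration of Eqs. (2)-(3), the sketch below extracts frozen frame-level and prompt embeddings with the Hugging Face transformers CLIP interface; it is a minimal sketch assuming the public ViT-L/14 checkpoint and the prompt template quoted above, not the paper's exact extraction pipeline:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")  # ViT-L/14
clip.requires_grad_(False)  # keep the pretrained CLIP weights frozen
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def encode_video_and_prompts(frames, class_names):
    """frames: list of T PIL images sampled from one clip; class_names: C labels."""
    image_inputs = processor(images=frames, return_tensors="pt")
    v_frame = clip.get_image_features(**image_inputs)    # (T, D) frame embeddings, Eq. (2)
    prompts = [f"A video with the {c} face" for c in class_names]
    text_inputs = processor(text=prompts, return_tensors="pt", padding=True)
    v_prompt = clip.get_text_features(**text_inputs)     # (C, D) prompt embeddings, Eq. (3)
    return v_frame, v_prompt
```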
We employ the cross-frame communication transformer (CCT), multi-frame integration transformer (MIT), and video-specific prompting modules to obtain expressive multimodal sentiment knowledge. The CCT is a multi-layer transformer with cross-frame attention introduced in Ni et al. (2022) to enable cross-frame information exchange. It is used to obtain cross-frame visual representations by giving a modified visual embedding \(\bar{\mathbf{v}}^{(v)}=\{\bar{\mathbf{v}}^{(v)}_{t}\}_{t=1}^{T}\), where \(\bar{\mathbf{v}}^{(v)}_{t}=[x_{class},v_{t}^{(v)}]+\mathbf{e}_{pos}\). \(x_{class}\) is a learnable frame representation and \(e_{pos}\) is a position embedding of patches in a frame. The MIT is a normal transformer layer constructed by standard multi-head self-attention and feed-forward networks. Given frame-level embeddings \(\mathbf{v}^{(f)}\) and \(\bar{\mathbf{v}}^{(v)}\), we finally obtain the video representation \(V\) as follows:
\[V^{(f)}=\operatorname{AvgPool}(\operatorname{MIT}(\mathbf{v}^{(f)})) \tag{4}\] \[V^{(v)}=\operatorname{AvgPool}(\operatorname{MIT}(\operatorname{CCT }(\bar{\mathbf{v}}^{(v)})))\] (5) \[V=f_{v}([V^{(f)}||V^{(v)}]) \tag{6}\]
where \(f_{v}:\mathbb{R}^{2D}\rightarrow\mathbb{R}^{D}\) is a two-layer MLP. \(\operatorname{AvgPool}\) denotes an average pooling layer. "\(||\)" denotes a concatenation operator used to obtain the facial expression-conditioned video representation. We then transform the **video representation**\(V\) to the **video logit** (see Fig. 2) with a two-layer MLP.
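A minimal PyTorch sketch of this fusion head (Eqs. (4)-(6)) follows; it assumes both streams have already been projected to a common dimension \(D\) divisible by the number of heads (in the paper the facial stream is 8-dimensional, so a projection layer would be needed first), and the module names are ours:

```python
import torch
import torch.nn as nn

class VideoFusionHead(nn.Module):
    """Sketch of Eqs. (4)-(6): MIT over each frame-level stream, average pooling,
    concatenation, and a two-layer MLP producing the video representation V."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.mit_face = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.mit_video = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.f_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, v_face: torch.Tensor, v_cct: torch.Tensor) -> torch.Tensor:
        # v_face: (B, T, dim) facial expression stream; v_cct: (B, T, dim) CCT outputs
        V_f = self.mit_face(v_face).mean(dim=1)         # Eq. (4): MIT + AvgPool
        V_v = self.mit_video(v_cct).mean(dim=1)         # Eq. (5): MIT + AvgPool
        return self.f_v(torch.cat([V_f, V_v], dim=-1))  # Eq. (6): concat + two-layer MLP
```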
Inspired by Ni et al. (2022), the teacher model employs a video-specific prompting module to enhance the prompt embedding with cross-frame visual representations. The video-specific prompting module applies a standard multi-head attention Vaswani et al. (2017) to obtain the **video-enhanced prompt representation**\(\bar{\mathbf{v}}^{(p)}\in\mathbb{R}^{C\times D}\) (see Fig. 2) as
\[\bar{\mathbf{v}}^{(p)}=\mathbf{v}^{(p)}+\operatorname{Multi\_Head\_Attention}(\operatorname{CCT}(\bar{\mathbf{v}}^{(v)})) \tag{7}\]
Then, we compute dot product between video representation \(V\) and video-specific prompt representation \(\bar{\mathbf{v}}^{(p)}=\{\bar{\mathbf{v}}^{(p)}_{i}\}_{i=1}^{C}\) to output the similarity score \(\mathbf{p}=\{p_{i}\}_{i=1}^{C}\) with a softmax layer as
\[p_{i}=\operatorname{softmax}(\bar{\mathbf{v}}^{(p)}_{i}\cdot V)=\frac{\exp(\bar{\mathbf{v}}^{(p)}_{i}\cdot V)}{\sum_{j\in C}\exp(\bar{\mathbf{v}}^{(p)}_{j}\cdot V)} \tag{8}\]
where \(C\) indicates the number of sentiment classes. We further transform \(\mathbf{p}\) into the **video-enhanced prompt logit** (see Fig. 2) with a two-layer MLP.
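Eq. (8) amounts to a softmax over prompt-video dot products; a minimal sketch (names are ours):

```python
import torch
import torch.nn.functional as F

def prompt_video_similarity(v_prompt_enh: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Eq. (8): similarity scores p over the C video-enhanced prompt
    representations (C, D) given the video representation V of shape (D,)."""
    logits = v_prompt_enh @ V          # (C,) dot products
    return F.softmax(logits, dim=-1)   # p_i sums to 1 over the C classes
```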
### The RoBERTa-based student model
To leverage the powerful transformer-based architecture of fundamental language models, we structure a RoBERTa-based student model Liu et al. (2019) that consists of a text encoder \(\mathcal{T}(\cdot)\) and a two-layer MLP. Given the speech text \(T_{speech}\), the student model obtains text representation \(V^{(t)}\) with \(\mathcal{T}(\cdot)\), and output sentiment intensity \(\mathcal{Z}_{pred}\) with the MLP into the **text logit** (see Fig. 2) as
\[\mathcal{Z}_{pred}=\mathrm{logit}(V^{(t)}),V^{(t)}=\mathcal{T}(T_{speech}) \tag{9}\]
where \(V^{(t)}\in\mathbb{R}^{D}\), and \(\mathrm{logit}(\cdot):\mathbb{R}^{D}\rightarrow\mathbb{R}^{1}\) indicates the two-layer MLP.
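The student of Eq. (9) is essentially a pretrained RoBERTa encoder followed by a small regression head; below is a minimal sketch using the Hugging Face transformers API, assuming pooling via the first (\(<\!s\!>\)) token and a 128-dimensional hidden layer as listed in Tab. 3, with full finetuning as described in Section 5.6.2:

```python
import torch.nn as nn
from transformers import AutoModel

class Student(nn.Module):
    """Sketch of Eq. (9): RoBERTa text encoder plus a two-layer MLP regression head."""
    def __init__(self, name: str = "roberta-large", hidden: int = 128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)  # all parameters stay trainable
        d = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        v_t = out.last_hidden_state[:, 0]   # <s> token as the text representation V^(t)
        return self.head(v_t).squeeze(-1)   # text logit = predicted sentiment intensity
```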
### Training objectives
We simultaneously optimize the teacher and student models by applying mean squared error (MSE) loss to obtain video and text sentiment knowledge. Both teacher and student models minimize the \(L_{2}\) distance as follows:
\[\mathcal{L}_{v}^{(r)}=\mathrm{MSE}(\mathrm{logit}(V),l^{(r)}) \tag{10}\] \[\mathcal{L}_{t}^{(r)}=\mathrm{MSE}(\mathcal{Z}_{pred},l^{(r)})=\frac{1}{B}\sum_{i=1}^{B}\left\|\mathcal{Z}_{pred,i}-l^{(r)}_{i}\right\|^{2} \tag{11}\]
where \(B\) indicates batch size, \(\mathcal{L}_{v}^{(r)}\) indicates MSE between the teacher model's video logit and sentiment label \(l^{(r)}\), and \(\mathcal{L}_{t}^{(r)}\) indicates MSE between the student model's text logit (\(\mathcal{Z}_{pred}\)) and \(l^{(r)}\). Here, \(\mathrm{logit}(V)\) is a two-layer MLP for transforming video representation \(V\) into the video logit.
To learn the video-enhanced prompt representation to fuse multimodal knowledge of video and class text, we use the binary sentiment classification label \(l^{(c)}\) (see Fig. 3) synthesized from the sentiment label to optimize the teacher model with a cross-entropy loss \(\mathcal{L}_{v}^{(c)}\) as
\[\mathcal{L}_{v}^{(c)}=-\sum_{i=1}^{C}l_{i}^{(c)}\log(p_{i}), \tag{12}\]
We optimize a step-distillation objective loss to achieve multimodal knowledge distillation from the teacher model to the student model. The step-distillation objective loss consists of a **prompt-video distance minimization**\(\mathcal{L}_{p\to v}\) and a **video-text distance minimization**\(\mathcal{L}_{v\to t}\), where \(\mathcal{L}_{p\to v}\) is optimized to align coarse-grained classification knowledge in the video-enhanced prompt logit and fine-grained regression knowledge in the video logit, \(\mathcal{L}_{v\to t}\) is optimized to align knowledge in the video logit of the teacher model and the text logit of the student model. We apply MSE loss to perform the step-distillation as follows:
\[\mathcal{L}_{p\to v}=\mathrm{MSE}(\mathrm{logit}(\mathbf{p}),\mathrm{logit}(V)) \tag{13}\] \[\mathcal{L}_{v\to t}=\mathrm{MSE}(\mathrm{logit}(V),\mathcal{Z}_{pred}) \tag{14}\]
where \(\mathrm{logit}(\mathbf{p})\) indicates the coarse-grained classification knowledge in Eq. 12.
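In code, the two steps of Eqs. (13)-(14) are simply MSE terms between one-dimensional logits; the sketch below (names are ours) lets gradients flow into the teacher as well, which is our reading of the joint end-to-end training described above:

```python
import torch.nn.functional as F

def step_distillation(prompt_logit, video_logit, text_logit):
    """Eqs. (13)-(14): first distill the video-enhanced prompt logit into the
    video logit (coarse -> fine), then the video logit into the student's text logit."""
    l_p2v = F.mse_loss(prompt_logit, video_logit)  # prompt-video distance, Eq. (13)
    l_v2t = F.mse_loss(video_logit, text_logit)    # video-text distance, Eq. (14)
    return l_p2v, l_v2t
```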
We finally have a joint loss \(\mathcal{L}\) for training the teacher and student models end-to-end as
\[\mathcal{L}=\alpha\mathcal{L}_{v}^{(r)}+\beta\mathcal{L}_{t}^{(r)}+\gamma \mathcal{L}_{v}^{(c)}+\delta\mathcal{L}_{p\to v}+\psi\mathcal{L}_{v \to t} \tag{15}\]
where \(\alpha\), \(\beta\), \(\gamma\), \(\delta\), and \(\psi\) indicate the importance of each loss value. They are empirically set as \(1:10:1:10:1\) to keep all loss values on the same scale.
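Putting Eqs. (10)-(15) together, a minimal sketch of the joint objective with the empirical 1:10:1:10:1 weights follows (reusing `step_distillation` from the sketch above; `class_scores` is our name for the pre-softmax prompt-video scores that Eq. (12) normalizes, and `F.cross_entropy` applies the softmax internally):

```python
import torch.nn.functional as F

def joint_loss(video_logit, text_logit, prompt_logit, class_scores, label_r, label_c):
    l_v_r = F.mse_loss(video_logit, label_r)        # Eq. (10): teacher regression
    l_t_r = F.mse_loss(text_logit, label_r)         # Eq. (11): student regression
    l_v_c = F.cross_entropy(class_scores, label_c)  # Eq. (12): binary sentiment classification
    l_p2v, l_v2t = step_distillation(prompt_logit, video_logit, text_logit)
    return 1.0 * l_v_r + 10.0 * l_t_r + 1.0 * l_v_c + 10.0 * l_p2v + 1.0 * l_v2t
```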
## 5 Experiment
In this section, we conducted empirical experiments on video-level sentiment analysis and audio-visual retrieval tasks to demonstrate the high efficiency-performance of our method.
### Dataset
MOSI Zadeh et al. (2016) and MOSEI Bagher Zadeh et al. (2018) are multimodal datasets collected from online videos for evaluating video-level sentiment analysis tasks. We show the dataset sizes in Tab. 1. For MOSEI, we drop the data points lacking modalities to fairly evaluate recent modality fusion-based methods Wang et al. (2022). We compared the video segment IDs of each data point for each modality and saved only the data points associated with a common segment ID. The modified MOSEI dataset was found to be more challenging than the original dataset, as it lowered the strong baseline MSE score by 4.9% (see Tab. 2). Both datasets are annotated with a Likert scale in the range of \([-3,3]\), _i.e._, (-3: highly negative, -2: negative, -1: weakly negative, 0: neutral, +1: weakly positive, +2: positive, +3: highly positive). We further synthesize a binary classification label, _i.e._, ([-3,0]: negative, [0,3]: non-negative), used for optimizing the teacher model (SS4.1). The label distribution is illustrated in Fig. 3. MOSEI is imbalanced, and over \(65\%\) of its data is distributed in \([-1,1]\).
VEGAS dataset Zhou et al. (2018) is applied for the audio-visual retrieval task, which contains 28,103 videos in total as shown in Tab. 1. Each video can be embedded as an audio feature vector and a visual feature vector, and the audio-visual pair shares the same single label. The label represents an audio event (_e.g._, baby crying) of the human voice or natural sound. The number of label classes is 10, and the length of each audio-visual pair ranges from 2 to 10 seconds.
### Evaluation metric
We use the mean absolute error (MAE), the accuracies \(A^{7}\) and \(A^{2}\), and the weighted-\(F1\) score for evaluating MOSI and MOSEI, where \(A^{7}\) denotes a 7-class and \(A^{2}\) a binary accuracy metric. Since MOSI and MOSEI are regression problems, we consider MAE to be the most reasonable metric for fair evaluations. In addition to the binary accuracy reported by most of the previous works, we evaluate the 7-class accuracy, as did the SOTA method Hu et al. (2022), to eliminate the effect of the data imbalance. For the audio-visual retrieval task, we apply the mean average precision (mAP), as in previous works Zeng et al. (2023, 2022), to evaluate our model.
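For reference, a minimal sketch of these metrics for scores in \([-3,3]\); rounding conventions for \(A^{7}\) and the placement of 0 in the binary split vary across papers, so this follows the "non-negative vs. negative" convention used above:

```python
import numpy as np

def mosi_metrics(pred: np.ndarray, label: np.ndarray) -> dict:
    """MAE, binary accuracy A^2, and 7-class accuracy A^7 for scores in [-3, 3]."""
    mae = float(np.abs(pred - label).mean())
    a2 = float(((pred >= 0) == (label >= 0)).mean())  # non-negative vs. negative
    to7 = lambda x: np.clip(np.round(x), -3, 3)       # map scores to the 7 classes
    a7 = float((to7(pred) == to7(label)).mean())
    return {"MAE": mae, "A2": a2, "A7": a7}
```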
### Training setting
We train the teacher and the student models simultaneously and use only the student model for inference. The text modality is used for evaluating MOSI and MOSEI. On the other hand, as shown in Fig. 4, we utilize the teacher model to distill multimodal knowledge for both visual and audio encoders of the state-of-the-art model EICS Zeng et al. (2023) for audio-visual retrieval tasks. Both visual and audio encoders are used as student models to evaluate VEGAS. We show the hyperparameters of _VideoAdviser_ (SS4) for both tasks in detail in Tab. 3.
### Performance
#### 5.4.1 Evaluation of video-level sentiment analysis
We compared _VideoAdviser_ with strong baseline methods on the test set of MOSI and MOSEI in Tab. 2. Compared with the state-of-the-art method UniMSE Hu et al. (2022) that utilizes the powerful architecture of a large-scale pretraining model T5 Raffel et al. (2020) to improve the multimodal fusion by embedding multimodal signals into an auxiliary layer of T5, _VideoAdviser_ is a multimodal knowledge distillation-based method that distills multimodal knowledge from a multimodal fundamental model CLIP Ni et al. (2022) to a language model RoBERTa Liu et al. (2019). UniMSE was trained by integrating the training datasets of MOSI, MOSEI, MELD Poria et al. (2019), IEMOCAP Busso et al. (2008) and multimodal signals are required for inference. In contrast, our method was trained using the target dataset and requires only text data for inference. _VideoAdviser_ significantly improves UniMSE's MAE score by **12.3%** for MOSI, and outperforms a strong baseline method VAE-AMDT's MAE score by **2.4%** for MOSEI. As we use the teacher model to offer auxiliary multimodal supervision signals to the student model, by leveraging the strengths of the learned multimodal space of the teacher model and the large-scale parameters of the student model, we think our method
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Dataset & Train & Validation & Test & Total \\ \hline \hline MOSI Zadeh et al. (2016) & 1,284 & 229 & 686 & 2,199 \\ MOSEI Bagher Zadeh et al. (2018) & 9,473 & 1,206 & 2,710 & 13,389 \\ VEGAS Zhou et al. (2018) & 22,482 & - & 5,621 & 28,103 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset size. MOSEI uses the same dataset as Wang et al. (2022).
is effective for achieving high-performance multimodal knowledge distillation via minimizing the step-distillation objective loss (SS4.3).
#### 5.4.2 Evaluation of audio-visual retrieval
We further evaluated our _VideoAdviser_ on the VEGAS dataset in Tab. 4. Compared to the state-of-the-art method EICS Zeng et al. (2023) that builds two different common spaces to learn the modality-common and modality-specific
\begin{table}
\begin{tabular}{l|c|c|c|c||c|c|c|c|c} \hline \multirow{2}{*}{**Model**} & \multicolumn{6}{c}{**MOSI**} & \multicolumn{6}{c}{**MOSEI**} \\ \cline{2-10} & MAE \(\downarrow\) & \(A^{\uparrow}\uparrow\) & \(A^{2}\uparrow\) & F1 \(\uparrow\) & Corr \(\uparrow\) & MAE \(\downarrow\) & \(A^{\uparrow}\uparrow\) & \(A^{2}\uparrow\) & F1 \(\uparrow\) & Corr \(\uparrow\) \\ \hline \hline MISA Hazarika et al. (2020) & 0.804 & - & 80.8 & 80.8 & 0.764 & 0.568 & - & 82.6 & 82.7 & 0.717 \\ VAE-AMDT Wang et al. (2022) & 0.716 & - & 84.3 & 84.2 & - & 0.526* & - & 82.8* & **87.5*** & - \\ MAG-BERT Rahman et al. (2020) & 0.712 & - & 84.2 & 84.1 & 0.796 & 0.539 & - & 84.7 & 84.5 & - \\ Self-MM Yu et al. (2021) & 0.713 & - & 84.0 & 84.4 & 0.798 & 0.530/0.579* & - & 82.8/84.6* & 82.5/84.6* & 0.765/- \\ MMM Han et al. (2021a) & 0.700 & 46.7 & 84.1 & 84.0 & 0.800 & 0.526 & 54.2 & 82.2 & 82.7 & 0.772 \\ UniMSE Hu et al. (2022) & 0.691 & 48.7 & 85.9 & 85.3 & 0.809 & 0.523 & 54.4 & **85.9** & 85.8 & 0.773 \\ _VideoAdviser_ **(ours)** & **0.568** & **51.3** & **87.7** & **87.9** & **0.872** & **0.502*** & **54.5*** & 84.5* & 85.0* & **0.810*** \\ \hline Human & 0.710 & - & 85.7 & 87.5 & 0.820 & - & - & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: Comparison results for MOSI and MOSEI. Our model reduces the state-of-the-art UniMSE’s MAE score by **12.3%** for MOSI, and VAE-AMDT’s MAE by **2.4%** for MOSEI. Here, (\(\downarrow\)) indicates that the lower the MAE, the better the performance, and (\(\uparrow\)) indicates the opposite. (*) indicates the results produced on the modified MOSEI dataset.
Figure 3: Label distribution of (a) MOSI and (b) MOSEI. The synthesized binary classification label is illustrated in different colors (the “negative” class in red color and the “non-negative” class in blue color).
features and achieves an average mAP of 0.788, our method utilizes the distilled multimodal knowledge to enhance the performance of EICS. As a result, it achieves an average mAP of 0.822 and improves EICS Zeng et al. (2023) by **3.4%**, suggesting the generality of our method on audio-visual retrieval tasks.
### Efficiency
By comparing the number of parameters with state-of-the-art models in Tab. 5, our proposed _VideoAdviser_ requires only a language model as the student, which is able to achieve a high efficiency-performance model for inference. The student (BERT Devlin et al. (2019)) achieved a comparable MAE score with fewer parameters than previous BERT-based models. Moreover, these models always process visual and audio signals for multimodal fusion, which might require more parameters and increase the computation cost. Compared with the state-of-the-art model UniMSE that uses a pretrained transformer-based language model T5 Raffel et al. (2020) to perform multimodal fusion, our model, the student (RoBERTa-Base Liu et al. (2019)), with nearly half of the parameters reduces the MAE score by over **3.0** points, suggesting the high efficiency-performance of our method. _VideoAdviser_ was further improved by over **9.0** points by adopting a RoBERTa-Large model as the student model.
### Analysis
#### 5.6.1 Effectiveness of components of the teacher model
We studied the effects of two core components of the teacher model (Facial expression encoder and video-specific prompting module) in Tab. 6. The results show that these two components help improve the multimodal knowledge distillation and boost the final performance of the student model. We believe that the facial expression encoder provided extra visual knowledge, and the video-specific prompting module further associated visual knowledge with text prompt representations encoded by the prompt encoder.
#### 5.6.2 Effectiveness of the student model
We studied the effects of _VideoAdviser_ on different student models in Tab. 7. We select two language models (BERT and RoBERTa) that have frequently been used in recent works Hazarika et al. (2020), Wang et al. (2022), Rahman
Figure 4: Architecture of _VideoAdviser_ for audio-visual retrieval task using a CLIP-based model (the teacher) to distill multimodal knowledge of video-enhanced prompts to an EICS-based audio-visual model (the student). The teacher model is finetuned for the audio event classification to distill multimodal knowledge to the student model via the step-distillation loss (the region in purple). We adopt 3-layer MLP with 128-dimensional hidden layers.
et al. (2020); Yu et al. (2021); Han et al. (2021a). By comparing the performance of language models with and without adopting a teacher model, the results demonstrate that our method improves a general language model's MAE score by over **6.0** points on average, suggesting the efficacy and generality of our method with different student models. Since the teacher model offers auxiliary multimodal supervision to the student model during training, the language model-based students are able to learn multimodal knowledge from the teacher with their large-scale parameters.
We further trained a student model by freezing the pretrained parameters, which dramatically degraded the MAE score from \(0.568\) to \(1.478\). This result makes us believe that in order to achieve expressive multimodal knowledge distillation across modalities, it is essential to finetune the full parameters to leverage the strengths of large-scale pretrained models with powerful representational learning capabilities.
#### 5.6.3 Modality effectiveness
To confirm the robustness of _VideoAdviser_ in multimodal knowledge distillation not only for the text modality but also for diverse modalities such as the visual and audio modalities, we respectively studied the effects on the visual and audio modalities for audio-visual retrieval tasks. As the results in Tab. 8 indicate, the proposed step-distillation works for both modalities, boosting the baseline EICS model by over a 1% mAP score. By combining both sides, we finally improved the baseline by 3.4%.
\begin{table}
\begin{tabular}{l|l|c} \hline \hline & Hyperparameter & MOSI, MOSEI & VEGAS \\ \hline \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & visual encoder & ViT-L/14 \\ & Num. of frames & 8 \\ & Frame size & 224\(\times\)224 \\ & visual embedding size (input) & (B, 64, 8) & (B, 1, 10) \\ & Visual hidden layer size & (B, 128) \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & Prompt encoder & ClipTextModel \\ & Prompt embedding size (input) & (B,77,512) \\ & Prompt hidden layer size & 128 \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & Text encoder & RoBERTa-large & - \\ & Text embedding size (input) & (B,100,1024) & - \\ & Text hidden layer size & 128 & - \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & Audio encoder & - & EICS model \\ & Audio feature size (input) & - & 10 \\ & audio hidden layer size & - & 128 \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & Video-enhanced prompt logit & (B, 1) & (B, 10) \\ & Video logit & (B, 1) & (B, 10) \\ & Text logit & (B, 1) & - \\ & Audio logit & - & (B, 10) \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & Method & AdamW Kingma and Ba (2015) \\ & Learning rate & 8e-6 \\ \cline{1-1} & Warmup steps & 15 \\ \cline{1-1} & Schedular & cosine\_schedule\_with\_warmup \\ \hline \multirow{5}{*}{
\begin{tabular}{} \end{tabular} } & GPU & GTX 1080 Ti \\ \cline{1-1} & Batch size & 4 \\ \cline{1-1} & Training epochs & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The hyperparameters for training _VideoAdviser_. Here, “B” denotes the batch size, “Audio logit” denotes the output of the audio encoder for VEGAS (see Fig. 4).
#### 5.6.4 Effectiveness of dataset size
In general, the larger the dataset, the better the performance. We trained _VideoAdviser_ with a combination of the MOSI and MOSEI datasets to see if we can further improve the performance. As the results in Tab. 9 indicate, the model performs much better than those trained on the individual datasets, suggesting the efficacy of our approach for different dataset sizes.
#### 5.6.5 Effectiveness of the step-distillation loss
We ablatively studied the effects of our proposed step-distillation loss for multimodal knowledge distillation in Tab. 10. Without the first step--distilling multimodal knowledge from a video-enhanced prompt logit to a video logit (see Fig. 2)--the learned multimodal space of CLIP cannot be passed to the student model via the video logit, resulting in poor student model performance. On the other hand, it improves the regular language model's (w/o step-distillation) MAE score by **4.2%**
\begin{table}
\begin{tabular}{l|c|c|c} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**VEGAS**} \\ \cline{2-4} & A\(\rightarrow\)V & V\(\rightarrow\)A & Average \\ \hline \hline Random & 0.110 & 0.109 & 0.109 \\ \hline BiC-Net Han et al. (2021) & 0.680 & 0.653 & 0.667 \\ C-CCA Rasiwasia et al. (2014) & 0.711 & 0.704 & 0.708 \\ C-DCCA Yu et al. (2019) & 0.722 & 0.716 & 0.719 \\ DCIL Zheng et al. (2020) & 0.726 & 0.722 & 0.724 \\ DSCMR Zhen et al. (2019) & 0.732 & 0.721 & 0.727 \\ TNN-C-CCA Zeng et al. (2020) & 0.751 & 0.738 & 0.745 \\ CCTL Zeng et al. (2022) & 0.766 & 0.765 & 0.766 \\ EICS Zeng et al. (2023) & 0.797 & 0.779 & 0.788 \\ \hline _VideoAdviser_ (ours) & **0.825** & **0.819** & **0.822** \\ \hline \end{tabular}
\end{table}
Table 4: The mAP comparison results with state-of-the-art models for VEGAS. Here, “V” and “A” indicate “Video” and “Audio”, respectively.
\begin{table}
\begin{tabular}{l|c|c} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Parameters**} & **MOSI** \\ \cline{2-3} & MAE \\ \hline \hline
**BERT-based model** & & \\ - MISA Hazarika et al. (2020) & \textgreater{}110M & 0.804 \\ - MAG-BERT Rahman et al. (2020) & \textgreater{}110M & 0.712 \\ - Self-MM Yu et al. (2021) & \textgreater{}110M & 0.713 \\ - MMM Han et al. (2021) & \textgreater{}110M & 0.700 \\ \hline
**T5-based model** & & \\ - UniMSE Hu et al. (2022) & \textgreater{}231M & 0.691 \\ \hline
**RoBERTa-based model** & & \\ - VAE-AMDT Wang et al. (2022) & \textgreater{}355M & 0.716 \\ \hline _VideoAdviser_ (ours) & & \\ - Student (BERT) & **110M** & 0.704 \\ - Student (RoBERTa-Base) & 125M & 0.660 \\ - Student (RoBERTa-Large) & 361M & **0.568** \\ \hline \end{tabular}
\end{table}
Table 5: Efficiency comparison. _VideoAdviser_ is able to train a high efficiency-performance student model compared to state-of-the-art methods for inference. The student (RoBERTa-Base) outperforms the SOTA by over **3.0** point with nearly half the parameters.
and suggests the effectiveness of the second step--distilling the knowledge of the video logit from the teacher model to the student model. Moreover, by optimizing the first and second steps, our proposed method outperforms a cutting-edge contrastive representation distillation method (CRD) Tian et al. (2020) that proposed a contrastive-based objective for transferring knowledge between deep networks. Compared to CRD, which is designed to model mutual information across dimensions of the knowledge representations, our proposed step-distillation applies MSE to map mutual information across modalities via one-dimensional logits (_i.e._, the video-enhanced prompt logit, video logit, and text logit). Our method performs better than CRD in transferring regression information for multimodal knowledge distillation.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**MOSI**} \\ \cline{2-5} & MAE & \(A^{T}\) & \(A^{2}\) & F1 \\ \hline \hline _VideoAdviser_ (ours) & **0.568** & **51.3** & 87.7 & **87.9** \\ - w/o Facial expression encoder & 0.579 & 50.2 & 86.8 & 86.4 \\ - w/o Video-specific prompting & 0.570 & 50.1 & **88.1** & 87.7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation results show the effects of components of the teacher model for multimodal knowledge distillation on MOSI dataset.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**MOSI**} \\ \cline{2-3} & MAE & \(A^{2}\) & F1 \\ \hline \hline Teacher (CLIP-based model) & - & 57.3 & - \\ \hline BERT w/o teacher & 0.753 & 84.1 & 83.6 \\ Student (BERT) & 0.704 & 84.7 & 83.8 \\ \hline RoBERTa-Base w/o teacher & 0.719 & 84.6 & 84.3 \\ Student (RoBERTa-Base) & 0.660 & 85.4 & 84.6 \\ \hline RoBERTa-Large w/o teacher & 0.660 & 87.3 & 87.3 \\
**Student (RoBERTa-Large)** & **0.568** & **87.7** & **87.9** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Effects in different student models. Our method improves the MAE score of pretrained language models by over **6.0** point on average.
Figure 5: Visualization of the logistic knowledge distribution with and without the step-distillation objective loss. The top row plots the histograms of the logits when applying the step-distillation, and the bottom row shows the opposite. The groundTruth indicates the label distribution, and text_logit indicates the predicted regression score of the student model. Our method using the step-distillation (the top) demonstrates a distribution of regression scores close to the groundTruth, affected by the knowledge distribution of the “video_logit” and “video_enhanced_prompt_logit”.
In addition, we show comparison results of the proposed step-distillation loss with three widely-known distillation functions, KD Hinton et al. (2015), FitNet Romero et al. (2014), and PKT Passalis and Tefas (2018), in Tab. 11. KD and PKT were proposed to minimize the KL divergence between the probabilistic outputs of a teacher and a student model. On the other hand, FitNet and our step-distillation aim at minimizing the \(L_{2}\) distance for knowledge distillation. Compared to KD, FitNet and PKT are one-step distillation loss functions, whereas our step-distillation performs two-step distillation, with the aim of transferring multimodal knowledge across multiple scales. To achieve a fair comparison, we adapted these three approaches to our problem setting of two-step distillation. As the results in Tab. 11 indicate, the step-distillation outperforms the other approaches, suggesting its efficacy on multimodal knowledge distillation. We noted that the PKT-based two-step distillation achieves a comparable score to ours. We consider that audio-visual tasks focus on distilling multimodal knowledge of categorical audio events rather than fine-grained regressional knowledge, so that transferring probabilistic knowledge of each category can also work well. Compared to KD, which utilizes the softmax function to obtain probabilistic knowledge, PKT adopts the cosine-similarity function to better capture dimension-level correlations within the probabilistic knowledge.
We further illustrate the logistic knowledge distribution with and without the step-distillation loss in Fig. 5. Compared to the "Text_logit w/o step-distillation" histogram, which plots the regression scores obtained without performing the step-distillation, "Text_logit w/ step-distillation" is close to the groundTruth label distribution. In particular, the distribution in the range of \([-1,1]\) is strongly affected by the teacher model. Because the "Video_logit w/o step-distillation" is distributed in the range of \([-1.5,2]\) and the "Video_enhanced_prompt_logit w/o step-distillation" is distributed in the range of \([-0.4,0.2]\), by performing the step-distillation, the predicted regression score produced by the student model can be affected by the gap between these different distributions, which demonstrates that our proposed step-distillation is effective for multimodal knowledge distillation.
### Significance Testing
We tested the stability of the performance improvement by _VideoAdviser_ using the Almost Stochastic Order test (ASO) Del Barrio et al. (2018), Dror et al. (2019) as implemented by Ulmer et al. (2022). We compared three models, _VideoAdviser_ (ours), _VideoAdviser_ w/o step-distillation (baseline), and CRD based on five random seeds each using ASO with a confidence level of \(\alpha=0.05\). ASO computes a score (\(\epsilon_{\text{min}}\)) indicated in Tab. 12 to represent how far the first model is from being significantly better with respect to the second. \(\epsilon_{\text{min}}=0\) represents truly stochastic dominance and \(\epsilon_{\text{min}}<0.5\) represents almost stochastic dominance.
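For illustration, a minimal sketch of such a test with the deepsig implementation of ASO by Ulmer et al. (2022); the per-seed scores below are hypothetical, and since ASO assumes that higher scores are better, the MAE values are negated:

```python
import numpy as np
from deepsig import aso  # ASO implementation by Ulmer et al. (2022)

# Hypothetical per-seed MAE values for two models (five random seeds each)
ours = -np.array([0.568, 0.571, 0.565, 0.570, 0.566])  # negated: higher is better
base = -np.array([0.593, 0.590, 0.596, 0.591, 0.594])

eps_min = aso(ours, base)  # eps_min < 0.5 suggests almost stochastic dominance
print(f"eps_min = {eps_min:.3f}")
```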
\begin{table}
\begin{tabular}{l|c|c|c} \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**VEGAS**} \\ \cline{2-4} & A\(\rightarrow\)V & V\(\rightarrow\)A & Average \\ \hline \hline baseline (EICS Zeng et al. (2023)) & 0.797 & 0.779 & 0.788 \\ \hline _VideoAdviser_ (ours) & & & \\ -w/ video distillation & 0.794 & 0.810 & 0.802 \\ -w/ audio distillation & 0.791 & 0.815 & 0.803 \\ -w/ (audio and video) distillation & **0.825** & **0.819** & **0.822** \\ \hline \end{tabular}
\end{table}
Table 8: Ablation results show the effects of step-distillation on audio and video modalities for VEGAS. Here, “w/ video distillation” indicates that the step-distillation is only adopted for the visual modality of the student model, “w/ audio distillation” indicates the other side, and “w/ audio and video distillation” indicates both sides (see Fig. 4).
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Test dataset** & MAE & \(A^{7}\) & \(A^{2}\) \\ \hline \hline MOSI & **0.546** (0.568) & **51.3** (51.3) & **88.5** (87.7) \\ MOSEI & **0.491** (0.502) & **55.6** (54.5) & 84.2 (**84.5**) \\ MOSI+MOSEI & 0.502 & 54.79 & 85.05 \\ \hline \end{tabular}
\end{table}
Table 9: Results of _VideoAdviser_ trained with a combination of MOSI and MOSEI datasets. The model performs much better for both the MOSI and MOSEI test sets. Here, (*) denotes the result of the model trained on the individual dataset.
## 6 Conclusion
We proposed a novel multimodal knowledge distillation method, _VideoAdviser_, which leverages the strengths of the learned multimodal space of the CLIP-based teacher model and the large-scale parameters of the RoBERTa-based student model to perform multimodal knowledge transfer by optimizing a step-distillation objective loss. In the evaluation of two multimodal tasks, our method significantly outperforms SOTA methods by up to a **12.3%** MAE score improvement with a single modal encoder used in inference for video-level sentiment analysis, and by a **3.4%** mAP improvement for audio-visual retrieval tasks, suggesting its strengths in high efficiency-performance. Ablation studies further demonstrate the efficacy of our proposed step-distillation objective loss in improving multimodal knowledge distillation. In the next step, we will adapt meta-learning to further explore the capability of multimodal transfer learning in a few-shot setting.
|
2309.05687 | Demystifying Practices, Challenges and Expected Features of Using GitHub
Copilot | With the advances in machine learning, there is a growing interest in
AI-enabled tools for autocompleting source code. GitHub Copilot has been
trained on billions of lines of open source GitHub code, and is one of such
tools that has been increasingly used since its launch in June 2021. However,
little effort has been devoted to understanding the practices, challenges, and
expected features of using Copilot in programming for auto-completed source
code from the point of view of practitioners. To this end, we conducted an
empirical study by collecting and analyzing the data from Stack Overflow (SO)
and GitHub Discussions. We searched and manually collected 303 SO posts and 927
GitHub discussions related to the usage of Copilot. We identified the
programming languages, Integrated Development Environments (IDEs), technologies
used with Copilot, functions implemented, benefits, limitations, and challenges
when using Copilot. The results show that when practitioners use Copilot: (1)
The major programming languages used with Copilot are JavaScript and Python,
(2) the main IDE used with Copilot is Visual Studio Code, (3) the most common
used technology with Copilot is Node.js, (4) the leading function implemented
by Copilot is data processing, (5) the main purpose of users using Copilot is
to help generate code, (6) the significant benefit of using Copilot is useful
code generation, (7) the main limitation encountered by practitioners when
using Copilot is difficulty of integration, and (8) the most common expected
feature is that Copilot can be integrated with more IDEs. Our results suggest
that using Copilot is like a double-edged sword, which requires developers to
carefully consider various aspects when deciding whether or not to use it. Our
study provides empirically grounded foundations that could inform developers
and practitioners, as well as provide a basis for future investigations. | Beiqi Zhang, Peng Liang, Xiyu Zhou, Aakash Ahmad, Muhammad Waseem | 2023-09-11T16:39:37Z | http://arxiv.org/abs/2309.05687v1 | # Demystifying Practices, Challenges and Expected Features of Using GitHub Copilot
###### Abstract
With the advances in machine learning, there is a growing interest in AI-enabled tools for autocompleting source code. GitHub Copilot, also referred to as the "AI Pair Programmer", has been trained on billions of lines of open source GitHub code, and is one of such tools that has been increasingly used since its launch in June 2021. However, little effort has been devoted to understanding the practices, challenges, and expected features of using Copilot in programming for auto-completed source code from the point of view of practitioners. To this end, we conducted an empirical study by collecting and analyzing the data from Stack Overflow (SO) and GitHub Discussions. More specifically, we searched and manually collected 303 SO posts and 927 GitHub discussions related to the usage of Copilot. We identified the programming languages, Integrated Development Environments (IDEs), technologies used with Copilot, functions implemented, benefits, limitations, and challenges when using Copilot. The results show that when practitioners use Copilot: (1) The major programming languages used with Copilot are _JavaScript_ and _Python_, (2) the main IDE used with Copilot is _Visual Studio Code_, (3) the most common used technology with Copilot is _Node.js_, (4) the leading function implemented by Copilot is _data processing_, (5) the main purpose of users using Copilot is _to help generate code_, (6) the significant benefit of using Copilot is _useful code generation_, (7) the main limitation encountered by practitioners when using Copilot is _difficulty of integration_, and (8) the most common expected feature is that Copilot _can be integrated with more IDEs_. Our results suggest that using Copilot is like a double-edged sword, "Corresponding author.
which requires developers to carefully consider various aspects when deciding whether or not to use it. Our study provides empirically grounded foundations that could inform software developers and practitioners, as well as provide a basis for future investigations on the role of Copilot as an AI pair programmer in software development.
GitHub Copilot, Stack Overflow, GitHub Discussions, Repository Mining
## 1 Introduction
Large Language Models (LLMs) and Machine Learning (ML) for autocompleting source code are becoming more and more popular in software development. LLMs nowadays incorporate powerful capabilities for Natural Language Processing (NLP) [1], and ML approaches have been widely applied to source code using a variety of new tools to support software development [2], which makes it possible to use LLMs to synthesize code in general-purpose languages [1]. Recently, NLP-based code generation tools have come into the limelight, with generative pre-trained language models trained on large corpus of code in an attempt to provide reasonable auto-completion of the source code when programmers write code [3]. Released in June 2021, GitHub Copilot has recently emerged as an "AI pair programmer", which is powered by OpenAI Codex and suggests code or entire functions in IDEs as a plug-in [4] to help developers achieve code auto-completion in development.
Although the emergence of AI-assisted programming tools has empowered practitioners in their software development efforts, there is little evidence and a lack of empirically-rooted studies (e.g., [3], [5], [6]) on the role of AI-assisted programming tools in software development. The existing studies such as [7] and [8] primarily focus on the correctness and understanding of the code suggested by Copilot, and little is known about the practices, challenges, and expected features of using Copilot during programming and software development activities for the developers and users of Copilot. **To ameliorate this gap**, we conducted this study that collects data from Stack Overflow (SO) and GitHub Discussions to get practitioners' perspectives on using Copilot during software development. While Bird _et al._ investigated, through three studies [5], Copilot users' initial experiences of how they would use Copilot, as well as what challenges they encountered, our study explored the practices, challenges, and expected features of using Copilot by analyzing the data from developer communities. The emergence of Copilot has shifted the paradigm of pair programming, and it is challenging for software development teams to adopt this approach and tool on a large scale [5]. Our work intends to provide developers and users of GitHub Copilot with comprehensive insights on the practices, purposes, and expected features of using Copilot.
**The contributions of this work**: (1) we identified the programming languages, IDEs, and technologies used with Copilot; (2) we provided the functions implemented by Copilot, the purposes, benefits, limitations/challenges, and expected features of using Copilot; and (3) we collected and added more data related to Copilot from SO and GitHub Discussions (134 posts and 272 discussions, leading to 303 posts and 927 discussions in total) for purposes of enhancing the external
validity of our study results.
This paper is an extension of our previous conference paper [9] published in the Proceedings of the 35th International Conference on Software Engineering and Knowledge Engineering (SEKE 2023), and the following additions show how we extended the previous work: (1) we extended our dataset by including the latest data from Stack Overflow and GitHub Discussions formulated before June 18th, 2023 (303 SO posts and 927 GitHub discussions in total); (2) we explored and reported the purposes of using GitHub Copilot in RQ1.5; (3) we investigated and reported the expected features of users about GitHub Copilot from practitioners' perspectives in RQ2.3; and (4) we provided more implications based on the study results.
The structure of the paper: Section 2 surveys the related work, and Section 3 presents the research design of this study. Section 4 provides the study results, which are further discussed in Section 5. The potential threats to validity are clarified in Section 6 and Section 7 concludes this work with future directions.
## 2 Related Work
### Analyzing the Code Generated Using Copilot
Several studies focused on the security issues of Copilot. Sandoval _et al._[10] conducted a user study to investigate the impact of programming with the LLMs that power Copilot. Their results show that LLMs have a positive impact on the correctness of functions, and they did not find any decisive impact on safety. Several studies focused on the quality of the code generated by Copilot. Imai [11] compared the effectiveness of programming with Copilot versus human programming, and found that the code generated by Copilot is inferior to human-written code. Yetistiren _et al._[8] assessed the quality of the code generated by Copilot in terms of validity, correctness, and efficiency. Their empirical analysis shows that Copilot is a promising tool. Madi _et al._[6] focused on the readability and visual inspection of Copilot-generated code. Through a human experiment, their study highlights that programmers should beware of the code generated by such tools. Wang _et al._[12] collected practitioners' expectations on code generation tools through a mixed-methods approach. They found that effectiveness and code quality are more important than other expectations.
### Capabilities and Limitations of Copilot
Several studies have explored the capabilities and limitations of GitHub Copilot. Dakhel _et al._[13] explored Copilot's capabilities through empirical evaluations, and their results suggest that Copilot shows limitations as an assistant for developers. Nguyen and Nadi [14] conducted an empirical study to evaluate the correctness and comprehensibility of the code suggested by Copilot. Their findings revealed that Copilot's suggestions for different programming languages do not differ significantly,
and they identified potential shortcomings of Copilot, like generating complex code. Bird _et al._[5] conducted three studies to understand how developers use Copilot and their findings indicated that developers spent more time assessing suggestions by Copilot than doing the task by themselves. Sarkar _et al._[15] compared programming with Copilot to previous conceptualizations of programmer assistance to examine their similarities and differences, and discussed the issues that might arise in applying LLMs to programming.
Compared to the existing work (e.g., [8], [13]), our work intends to understand the practices, challenges, and expected features of using Copilot by exploring various aspects of Copilot's usage.
## 3 Research Design
The goal of this study is to understand the practices, benefits and challenges, and expected features of using GitHub Copilot from the point of view of practitioners in the context of Copilot related SO posts and GitHub discussions. We conducted and reported this exploratory study by following the guidelines for empirical studies in software engineering proposed in [16]. We formulated two Research Questions (RQs) with eight sub-RQs that contribute to the goal of this study, as presented in Section 3.1 with their rationale. The overview of the research process is shown in Figure 1 and detailed in Section 3.2 and Section 3.3.
### Phase A - Outlining the Research Questions
We aimed to explore the characteristics of the practices, challenges, and expected features of using GitHub Copilot by answering the following RQs:
Figure 1: Overview of the research process

**RQ1: Programming Languages, IDEs, Technologies, Functions, and Purposes of Using GitHub Copilot**

**RQ1.1: What programming languages are used with GitHub Copilot?**
_Rationale:_ In software development, the role of programming languages is fundamental, enabling programmers to translate requirements into source code using a specific programming language such as Java or Python. This RQ aims to collect the programming languages that developers tend to use with Copilot.
**RQ1.2: What IDEs are used with GitHub Copilot?**
_Rationale:_ Copilot is a third-party plug-in used in IDEs. This RQ aims to identify the IDEs frequently used with Copilot. The answers to this RQ can help developers choose which IDE to use when they code with Copilot.
**RQ1.3: What technologies are used with GitHub Copilot?**
_Rationale:_ When writing code, programmers need to employ certain technologies to complete the development. This RQ aims to investigate the technologies that can be used with Copilot (e.g., frameworks), and the answers to this RQ can help developers choose technologies when they use Copilot.
**RQ1.4: What functions are implemented by using GitHub Copilot?**
_Rationale:_ Copilot can complete entire functions according to users' comments. This RQ aims to provide a categorization of the functions that can be implemented by Copilot, and the answers to this RQ can give developers guidance when implementing functions using Copilot.
**RQ1.5: What are the purposes of using GitHub Copilot?**
_Rationale:_ Users may use Copilot for different purposes. This RQ aims to explore what users are using Copilot for, and the answers to this RQ can help us get a better understanding of how people are using Copilot.
**RQ2: Benefits, Limitations & Challenges, and Expected Features of Using GitHub Copilot**
**RQ2.1: What are the benefits of using GitHub Copilot?**
_Rationale:_ Using Copilot to assist programming can bring many benefits (e.g., reducing the workload of developers). This RQ aims to collect the advantages brought to development by applying Copilot.
**RQ2.2: What are the limitations & challenges of using GitHub Copilot?**
_Rationale:_ Although using Copilot to assist in writing code can help developers with their programming activities, there are still restrictions and problems when using Copilot. This RQ aims to collect and identify the limitations and challenges practitioners may experience when using Copilot. The answers to this RQ can help practitioners make an informed decision when deciding whether to code with the help of Copilot.
**RQ2.3: What are the expected features of users about GitHub Copilot?**
_Rationale:_ Since its release in June 2021, GitHub Copilot has become increasingly popular among programmers and has developed into a mature automated code-completion tool. However, there are still some features that Copilot does not provide but that users want to have. This RQ aims to investigate the features that users expect GitHub Copilot to provide. The answers to this RQ can give suggestions to the development team of GitHub Copilot, guiding them to make Copilot an AI-enabled coding tool that can better meet users' needs.
### Phase B - Data Collection and Filtering
This study focuses on understanding the practices, challenges, and expected features of using Copilot based on data collected from SO and GitHub Discussions. SO is a popular software development community and has been widely used by developers to ask and answer questions as a Q&A platform. GitHub Discussions is a feature of GitHub used to support the communication among the members of a project. Different from SO, GitHub Discussions can serve various communication intentions, not just question-answering (e.g., a discussion can report errors or discuss the potential development of a software project) [17], so its data can be complementary to the data from SO. Besides, GitHub Discussions can provide a center of community knowledge connected with other artifacts in a project [17]. Therefore, we decided to use SO and GitHub Discussions as the data sources to answer our RQs, and we conducted the searches on both SO and GitHub Discussions on June 18th, 2023. We conducted the data filtering manually because the numbers of posts from SO and discussions from GitHub Discussions are not large, so the filtering could be completed with acceptable effort and time. Another reason for using a manual approach is that filtering with an automatic approach would produce false positives and thus make the filtering results inaccurate, which would negatively affect our findings.
**For SO**, "_copilot_" was used as the term to search for posts related to Copilot. After searching, we got a total of 714 posts that include the search term "_copilot_". The term "_copilot_" may appear more than once in a post, so there may be duplicates in the URL collection of these retrieved posts. After removing the duplicates, we ended up with 678 posts with unique URLs. To manually label posts related to Copilot, two authors conducted a pilot data labelling with 10 retrieved SO posts. Specifically, the inclusion criterion is that the post must provide information referring to Copilot. We calculated the Cohen's Kappa coefficient [18] to measure the consistency of the labelled posts, which is 0.773, indicating a decent agreement between the two coders. After excluding the irrelevant posts from the search results, we finally got 303 Copilot related SO posts.
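As a side note on the agreement measure used above, the following is a minimal sketch, assuming two coders' binary inclusion labels over a pilot sample, of how Cohen's Kappa can be computed with scikit-learn; the label arrays below are hypothetical and not the study's actual coding data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical inclusion labels (1 = Copilot-related, 0 = not) assigned
# independently by two coders to the same 10 pilot posts.
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]

# Cohen's Kappa corrects raw percent agreement for agreement by chance;
# values around 0.6-0.8 are conventionally read as substantial agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's Kappa: {kappa:.3f}")
```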
**For GitHub Discussions**, GitHub discussions are organized according to categories. After exploring the categories on GitHub Discussions, we found the "Copilot" category which contains the feedback, questions, and conversations about Copilot [19] under the "GitHub product categories". Since all the discussions under the "Copilot" category are related to Copilot, we then included all the discussions under the "Copilot" category as related discussions to extract data. The number of discussions related to Copilot is 927.
### Phase C - Data Extraction and Analysis
To answer the RQs in Section 3.1, similar to the Data Collection and Filtering in Section 3.2, we manually extracted the data items listed in Table 1. The first and third authors conducted a pilot data extraction independently with 10 SO posts and 10 GitHub discussions randomly selected from the 303 SO posts and 927 GitHub discussions. The second author was involved to discuss with the two authors and come to an agreement if any disagreements were found during the pilot. After the pilot, the criteria for data extraction were determined: (1) all the data items listed in Table 1 are extracted and counted only if developers explicitly mentioned that they were used with Copilot; (2) if the same developer repeatedly mentioned the same data item in an SO post or a GitHub discussion, we only counted it once. In a post or discussion, multiple developers may mention Copilot-related data items, resulting in a situation where the total number of instances of a certain data item extracted may be greater than the number of posts and discussions. The first and third authors further extracted the data items from the filtered posts and discussions according to the extraction criteria, marked uncertain parts, and discussed them with the second author to reach a consensus. Finally, the first author rechecked all the extraction results produced by the two authors from the filtered posts and discussions to ensure the correctness of the extracted data.
| # | Data Item | **Description** | **RQ** |
|---|---|---|---|
| D1 | Programming language | _Programming languages used with Copilot_ | RQ1.1 |
| D2 | IDE | _IDEs used with Copilot_ | RQ1.2 |
| D3 | Technology | _Technologies used with Copilot_ | RQ1.3 |
| D4 | Function | _Functions implemented by Copilot_ | RQ1.4 |
| D5 | Purpose | _Intentions of using Copilot_ | RQ1.5 |
| D6 | Benefit | _Benefits brought by using Copilot_ | RQ2.1 |
| D7 | Limitation and challenge | _Restrictions and difficulties when using Copilot_ | RQ2.2 |
| D8 | Expected feature | _Features that users want Copilot to provide_ | RQ2.3 |

Table 1: Data items extracted and their corresponding RQs
| # | Data Item | **Data Analysis Method** | **RQ** |
|---|---|---|---|
| D1 | Programming language | Descriptive statistics [20] | RQ1.1 |
| D2 | IDE | Descriptive statistics | RQ1.2 |
| D3 | Technology | Descriptive statistics | RQ1.3 |
| D4 | Function | Constant comparison [21] | RQ1.4 |
| D5 | Purpose | Constant comparison | RQ1.5 |
| D6 | Benefit | Constant comparison | RQ2.1 |
| D7 | Limitation and challenge | Constant comparison | RQ2.2 |
| D8 | Expected feature | Constant comparison | RQ2.3 |

Table 2: Data items and their analysis methods
#### 3.3.1 Analyze Data
For RQ1.1, RQ1.2, and RQ1.3, we used descriptive statistics [20] to analyze and present the results. For RQ1.4, RQ1.5, and RQ2, we conducted a qualitative data analysis by applying the Constant Comparison method [22]. We constantly compared each part of the data (e.g., emergent codes) to explore differences and similarities in the extracted data and form categories [23]. Note that for answering RQ1.4, we categorized the functions (D4) based on developers' discussions, i.e., developers' descriptions of the mentioned functions. Firstly, the first and the third authors coded the filtered posts and discussions with the corresponding data items listed in Table 1. Secondly, the first author reviewed the data coded by the third author to make sure the extracted data were coded correctly. Finally, the first author combined all the codes into higher-level concepts and turned them into categories. After that, the second author examined the coding and categorization results, and any divergence was discussed until the three authors reached an agreement. The data analysis methods with their corresponding data items and RQs are listed in Table 2. The data analysis results are provided in [24].
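To make the descriptive-statistics step concrete, the sketch below, using hypothetical extracted instances rather than the study's actual dataset, shows how mention counts and percentage shares like those reported in Section 4 can be derived:

```python
from collections import Counter

# Hypothetical extracted instances of data item D1 (programming language),
# one entry per explicit mention in an SO post or GitHub discussion.
mentions = ["JavaScript", "Python", "Python", "C#", "JavaScript",
            "Java", "Python", "JavaScript", "Rust", "Python"]

counts = Counter(mentions)
total = sum(counts.values())

# Report each language with its mention count and share of all mentions,
# analogous to the proportions shown in Figure 2a.
for language, n in counts.most_common():
    print(f"{language}: {n} ({n / total:.1%})")
```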
## 4 Results
In this section, the study results of RQ1.1 to RQ1.4 are visualized in Figure 2, and the results of RQ1.5 and RQ2.1 to RQ2.3 are provided in Tables 3, 4, 5, and 6.
### RQ1: Programming Languages, IDEs, Technologies, Functions, and Purposes of Using GitHub Copilot
**RQ1.1: What programming languages are used with GitHub Copilot?**
Figure 2a lists the 19 programming languages used with Copilot, in which _JavaScript_ and _Python_ are the most frequently used ones, each accounting for about one fifth. Besides, developers often write _C#_ and _Java_ code when using Copilot, as one practitioner mentioned: "_the GitHub Copilot extension is enabled in my VS 2022 C# environment_" (GitHub #14115). _HTML+CSS_, _TypeScript_, _Golang_, _C_, _Rust_, _PHP_, and _Kotlin_ were used with Copilot between 3 and 12 times each (1.5% to 6.1%). The remaining programming languages (e.g., _Perl_, _Ruby_, and _Visual Basic_), which are less popular, were each mentioned only once with Copilot.
**RQ1.2: What IDEs are used with GitHub Copilot?**
Figure 2b shows the 25 types of IDEs that are used with Copilot. _Visual Studio Code_ is the dominant IDE, accounting for 48.0%. When first released, Copilot only worked with the _Visual Studio Code_ editor, so it is expected that _Visual Studio Code_ is the IDE most often used with Copilot. Mainstream code editors, including _Visual Studio_, _IntelliJ IDEA_, _NeoVim_, and _PyCharm_, are also occasionally used, accounting for 38.2% in total. The remaining IDEs were rarely mentioned by developers, and one possible reason is that there are often integration issues when using Copilot within them, according to the results of RQ2.2.
**RQ1.3: What technologies are used with GitHub Copilot?**
Figure 2c presents the 23 technologies used with Copilot. We find that these identified technologies include frameworks, APIs, and libraries. _Node.js_, whose proportion is more than 45%, is one of the most popular back-end runtime environments for _JavaScript_, which is in turn the language most frequently used with Copilot (see the results of RQ1.1); it is thus reasonable that _Node.js_ is the major technology used with Copilot. In addition, _.NET_, which is used for Web development, and _Vue_, _React_, _Flutter_, and _Ajax_, which are front-end development frameworks, were mentioned less often compared to _Node.js_. The rest of the identified technologies, many of which relate to machine learning (e.g., _Pandas_, _Dlib_, and _OpenCV_) or front-end development (e.g., _Htmx_, _Vanilla JS_, and _Next.js_), are rarely used with Copilot, and each of them appears only once.
**RQ1.4: What functions are implemented by using GitHub Copilot?**
Figure 2d shows the 14 functions implemented by using Copilot. The main function implemented by Copilot is _data processing_, indicating that developers tend to make use of Copilot to write functions working with data. Besides, _test_ (15.1%) and _front-end element control_ (13.2%) are the only other functions that account for more than 10%. When implementing functions, developers also use Copilot to code _string processing_, _image processing_, and _algorithm_, among which _image processing_ and _algorithm_ account for the same share, i.e., 7.5% each. The remaining types of functions are seldom implemented by using Copilot, being mentioned by developers only once or twice.

Figure 2: Programming languages, IDEs, technologies, and implemented functions of using Copilot (results of RQ1.1 to RQ1.4)
**RQ1.5: What are the purposes of using GitHub Copilot?**
Table 3 shows nine purposes of using Copilot. 43.3% of the developers from SO and GitHub Discussions indicated that they used Copilot _to help generate code_ they needed. Another 9 developers (15.0%) said that they wanted _to try out the functionality of Copilot_, so they downloaded the tool and used it. _To fix bugs_ is the third most popular reason for developers to use Copilot: when developers found their code did not work and they could not fix the bugs by themselves, they would turn to Copilot for help, as one developer mentioned in a post, "_I've been working with copilot and chatgpt for days trying to get the code to work by fixing it_" (SO #76498229). _To improve coding ability_ and _to provide ideas for writing code_ have the same percentage, 6.7%. Some developers used Copilot to help them learn knowledge related to coding _to improve coding ability_; for example, one developer said "_I am trying to learn js_" (GitHub #6947) when he applied for free use of Copilot. Some developers did not want to directly use the code suggested by Copilot, and only wanted to refer to Copilot's suggestions to provide them with ideas on how to solve their problems. Copilot can be used _for educational purposes_ and _for research purposes_ as well. Users also use Copilot to help them _generate code comments_, as one developer mentioned that he used "_Copilot in Visual Studio 2022 to document code_" (SO #76070342), but the percentage of this purpose is only 5.0%. Besides, two developers (3.3%) made use of Copilot _to check the code_ because they could not figure out why the code did not work.
### RQ2: Benefits, Limitations & Challenges, and Expected Features of Using GitHub Copilot
**RQ2.1: What are the benefits of using GitHub Copilot?**
Table 4 highlights 10 benefits of using Copilot. Most developers mentioned that they used Copilot for _useful code generation_, which reduced their workload and helped them when they had no idea how to write code. Programming with Copilot also brings _faster development_; as one discussion remarked, Copilot "_saves developers a lot of time_" (GitHub #35850). Meanwhile, _better code quality_ can be obtained by using Copilot. Compared to the code written by developers themselves, the code suggested by Copilot is usually shorter and more correct, as one developer said, "_often Copilot is smarter than me_" (SO #74512186). Copilot can use machine learning models to learn developers' coding styles, so as to offer _good adaptation to users' code patterns_. Four developers mentioned that Copilot can give them a _better user experience_ than other AI-assisted programming tools; for example, one developer stated that "_Copilot works totally different compared to all the other products out there, it is a lot more fun to use and does not annoy me like some other AI systems_" (GitHub #7254), without providing the names of the other products.

| **Purpose** | **Example** | **Count** | **%** |
|---|---|---|---|
| To help generate code | _GitHub CoPilot was able to fill in the code I needed after I wrote the steps in comments_ (SO #719505508) | 26 | 43.3% |
| To try out the functionality of Copilot | _I use VIM and IntelliJ on a daily basis, and I recently installed VS Code to try out the newest "Copilot Chat" features_ (SO #6734951) | 9 | 15.0% |
| To fix bugs | _spent a month trying to get to work on my own and after a week of trying to get chatgpt or copilot to fix the bugs I'm all out of ideas_ (SO #76266905) | 7 | 11.7% |
| To improve coding ability | _I'm improving my rusty skills lately and saw (in some Copilot suggestions) the question mark operator as a prefix of variables_ (SO #74008676) | 4 | 6.7% |
| To provide ideas for writing code | _With the input from jns (AES is actually OK for encrypted tokens, but not signed) and Github Copilot I came up with a working solution using HMAC-SHA256_ (SO #72812667) | 4 | 6.7% |
| For educational purposes | _For the past year I have taught my students how they could benefit from co-pilot while coding_ (GitHub #19410) | 3 | 5.0% |
| To generate code comments | _Using ChatGPT and Copilot, I've commented it to understand its functionality_ (SO #75624961) | 3 | 5.0% |
| To check the code | _I've checked myself by stepping through, I've checked with GitHub Copilot, I've checked with ChatGPT, and they all say this is correct._ (SO #7631788) | 2 | 3.3% |
| For research purposes | _I am working on a scientific study testing how Copilot will effect the academic setting_ (GitHub #8324) | 2 | 3.3% |

Table 3: Purposes of using GitHub Copilot (results of RQ1.5)

| **Benefit** | **Example** | **Count** | **%** |
|---|---|---|---|
| Useful code generation | _I find myself writing a lot of tests, and Copilot is excellent at helping with writing repetitive tests_ (GitHub #9282) | 28 | 45.9% |
| Faster development | _I really enjoy using it, it reduce programming time_ (GitHub #17382) | 10 | 16.4% |
| Better code quality | _it's faster & simpler to your solution_ (SO #684818725) | 7 | 11.5% |
| Good adaptation to users' code patterns | _GitHub could adapt to your coding practices_ (SO #67048080) | 4 | 6.6% |
| Better user experience | _Since copilot works totally different compared to all the other products out there, it is a lot more fun to use and does not annoy me like some other AI systems_ (GitHub #27450) | 4 | 6.6% |
| Free for students | _If you are a student you can sign up for the GitHub Student Pack, which gives a lot of benefits, one being copilot for free_ (GitHub #31494) | 3 | 4.9% |
| Powerful code interpretation and conversion functions | _Does Copilot have code explanation feature or something similar?_ | 2 | 3.3% |
| Frequent updates to provide more features | _Keep in mind that there are updates to the plugin very frequently, so there's still hope_ (SO #70428218) | 1 | 1.6% |
| Strong integration capability | _GitHub is supporting more editors_ (GitHub #6858) | 1 | 1.6% |
| Ease of study and use | _when using this plugin, can study at a relatively low cost_ (GitHub #8028) | 1 | 1.6% |

Table 4: Benefits of using GitHub Copilot (results of RQ2.1)
**RQ2.2: What are the limitations & challenges of using GitHub Copilot?** Table 5 lists 15 limitations and challenges of using Copilot. Most developers pointed out the _difficulty of integration_ between Copilot and IDEs or other plug-ins. After Copilot was installed in developers' IDEs, certain plug-ins did not work, and Copilot could conflict with some shortcut settings of the editors. Moreover, Copilot cannot be successfully integrated with some IDEs, as Copilot does not support these editors yet. Due to the instability of Copilot servers, the lack of support for proxies, and access restrictions in some regions, developers may have _difficulties of accessing Copilot_. The code suggested by Copilot has constraints as well: sometimes it offers only a few solutions, which are not enough for users, bringing _limitation to code generation_, as one developer said, "_multiple solution is too little_" (GitHub #37304). Practitioners also complained about the _poor quality of generated code_ by Copilot. Some practitioners said that "_GitHub Copilot suggest solutions that don't work_" (SO #73701039), and some found that when the code files became larger, the quality of the code suggested by Copilot "_becomes unacceptable_" (GitHub #9282). When using Copilot, developers pay much attention to the _code privacy threat_ as well; they were worried that Copilot may use their code information without permission. Contrary to the developers who mentioned that Copilot gave them a _better user experience_ than other AI-assisted programming tools, some practitioners said they had an _unfriendly user experience_ when coding with Copilot. Compared to the results of our previous work [9], the number of mentions of _difficulty of subscription_ increased significantly. This may be caused by the restrictions for free users, as one user complained, "_so looks like people with a free plan are stuck on a rate-limit for now with no way out_" (GitHub #43893).
| **Limitation & Challenge** | **Example** | **Count** | **%** |
|---|---|---|---|
| Difficulty of integration | _Copilot only works with VSCode, VSCodium is not supported at the moment_ (GitHub #14837) | 114 | 28.1% |
| Difficulty of accessing Copilot | _I cannot connect to the GitHub account and the Copilot server in VSCode, also cannot use the Copilot plugin_ (SO #74398521) | 69 | 17.0% |
| Limitation to code generation | _Copilot is limited to around 1000 characters in the response_ (GitHub #15122) | 48 | 11.8% |
| Poor quality of generated code | (GitHub #51322) | 36 | 8.9% |
| Code privacy threat | _Copilot does collect personal data so just take precaution when working in private repos_ (GitHub #7163) | 29 | 7.1% |
| Unfriendly user experience | _I had the same problem today, an amazing tool with poor user experience_ (GitHub #8468) | 25 | 6.2% |
| Difficulty of subscription | _My copilot subscription suddenly stopped. Tried log out and in. Never have reply on support ticket over 10 days_ (GitHub #36190) | 22 | 5.4% |

Table 5: Limitations and challenges of using Copilot (results of RQ2.2)
| **Expected Feature** | **Count** | **%** |
|---|---|---|
| Can be integrated with more IDEs | 32 | 28.8% |
| Allow customization of shortcuts for suggestions | 12 | 10.8% |
| Give suggestions when requested | 8 | 7.2% |
| A team version | 7 | 6.3% |
| Support access proxies | 7 | 6.3% |
| Allow customization of the format of generated code | 5 | 4.5% |
| Accept the needed part of the suggestions | 4 | 3.6% |
| Compatible with other code generation tools | 4 | 3.6% |
| Allow setting filters for suggestions | 3 | 2.7% |
| Allow self-signed certificates | 3 | 2.7% |
| Can be used with more development frameworks | 2 | 1.8% |
| Ability to turn off data collection | 2 | 1.8% |
| Suggestions in IDE UI can be configured | 2 | 1.8% |
| Code explanation | 2 | 1.8% |
| Free for certain type of users | 2 | 1.8% |
| Provide more suggestions at a time | 2 | 1.8% |
| Provide more complete suggestions | 2 | 1.8% |
| Ability to draw UML diagrams | 1 | 0.9% |
| Enable a dialog to accept or deny suggestions | 1 | 0.9% |
| A version for CLI (Command-Line Interface) | 1 | 0.9% |
| An on-premises version | 1 | 0.9% |
| Ability to select the training sources for suggestions | 1 | 0.9% |
| Can be used in remote servers | 1 | 0.9% |
| Provide a getting started guide | 1 | 0.9% |
| Provide a security rating for generated code | 1 | 0.9% |
| Remind users when it has no suggestions | 1 | 0.9% |
| Show acceptance rate of suggestions | 1 | 0.9% |
| View the code-related data shared by Copilot | 1 | 0.9% |
| Disable notification sounds of suggestions | 1 | 0.9% |

Table 6: Expected features of users about Copilot (results of RQ2.3)
Table 6 presents the 29 features that users expect Copilot to provide. The most often mentioned feature is that users hope Copilot _can be integrated with more IDEs_ (28.8%). According to the results of RQ2.2, the dominant limitation and challenge of using Copilot is _difficulty of integration_, as Copilot cannot be used in some editors. It is then reasonable that many users want Copilot to support more IDEs. 10.8% of users remarked that they wanted Copilot to _allow customization of shortcuts for suggestions_, as one post wrote "_Github CoPilot should give us an option to assign a custom key instead of a [TAB] or should change to something like [SHIFT + TAB] instead of TAB_" (GitHub #7036). Eight users (7.2%) expected Copilot to _give suggestions when requested_. They did not want Copilot to suggest code all the time because it was interruptive, and they just wanted to get suggestions from Copilot when they requested them. For example, one user asked "_Is it possible to not have GitHub Copilot automatically suggest code, instead only showing its suggestions when using the 'trigger inline suggestion' shortcut?_" (SO #76147937). Seven developers each expected Copilot to launch _a team version_ and to _support access proxies_. _Allow customization of the format of generated code_ was mentioned by 5 developers. They wanted this feature to "_configure suggestion appearance_" (GitHub #7234) so that Copilot suggestions could be "_more distinguishable with normal code_" and thus "_improve the accessibility_" (GitHub #7628). A few developers indicated that they only wanted to "_accept one line of several_" of Copilot-suggested code (SO #75183662), and they called for the feature that they can _accept the needed part of the suggestions_. Besides, four developers also mentioned that they hoped Copilot could be _compatible with other code generation tools_ like ReSharper.
## 5 Implications
This section presents the key implications of our study, which represent empirically grounded findings that could help researchers and developers to understand the effective usage of Copilot.
**Integration of Copilot with IDEs**: According to the results of RQ1.2 and RQ2.2, we found that most developers choose to integrate the Copilot plug-in with mainstream IDEs (including _Visual Studio Code_, _Visual Studio_, _IntelliJ IDEA_, _NeoVim_, and _PyCharm_), and the percentage of mainstream IDEs used with Copilot by practitioners reaches 86.2%. When developers choose the lesser known IDEs (e.g., _Sublime Text_), they often find it hard to integrate the Copilot plug-in and thus have _difficulty of integration_. In addition to the reason that developers may install Copilot in their chosen IDEs incorrectly, another reason for the _difficulty of integration_ is that Copilot does not support certain IDEs at the moment. When developers choose to use Copilot in mainstream IDEs, they can install it smoothly, and even if problems arise during the installation or use, they can easily find a solution on SO or GitHub Discussions as many other developers may have encountered similar issues. On the contrary, when developers choose to use Copilot in unpopular IDEs, they may not be able to install it because the IDEs are not officially supported by Copilot, and
they cannot find solutions, as few developers use Copilot with these IDEs. To reduce the _difficulty of integration_, we recommend that practitioners use mainstream IDEs with Copilot. Besides, the most expected feature of users is that they hope Copilot _can be integrated with more IDEs_, which is consistent with the results of RQ2.2 (the main limitation of using Copilot is _difficulty of integration_). We suggest that GitHub consider integrating Copilot with more IDEs in the future, which can meet the needs of diverse developers.
**Support for Front-end and Machine Learning Development**: As shown in the results of RQ1.1, RQ1.3, and RQ1.4, practitioners often write _JavaScript_ and _Python_ code when using Copilot, and they tend to use Copilot with front-end and machine learning related technologies (including frameworks, APIs, and libraries) to implement front-end (e.g., _front-end element control_) and machine learning functions (e.g., _data processing_ and _image processing_). _JavaScript_ is the foundation language of many popular front-end frameworks and most of Websites use _JavaScript_ on the client side. _Python_ is the first choice when it comes to the development of machine learning solutions with the help of rich libraries, e.g., _OpenCV_. It is consequently reasonable that developers tend to use Copilot with _JavaScript_ to generate code for front-end and _Python_ for developing machine learning applications.
**Different Versions of Copilot**: From the results of RQ2.3, we can find that different versions of Copilot are needed (i.e., _a team version_, _a version for CLI (Command-Line Interface)_, and _an on-premises version_). In different development environments, developers may have specific requirements when using Copilot. If GitHub released different versions of Copilot, it would increase the usability and acceptance of Copilot and thus make it available to a wider variety of users. Besides, GitHub has officially launched some versions of Copilot, e.g., Copilot Labs, Copilot X, and Copilot Nightly. Copilot Labs [25] is used to experiment with new ideas before taking them into real production, Copilot X [26] provides an enhancement with new features, and Copilot Nightly contains experimental and less well-tested changes. Developers can choose the version of Copilot according to their needs.
**Potentials and Perils of Using Copilot in Software Development**: Trained on billions of lines of code, Copilot can turn natural language prompts into coding suggestions across dozens of programming languages and make developers code faster and more easily [4]. The results of RQ2.1 and RQ2.2 show that many benefits of using Copilot contradict its limitations and challenges, e.g., _useful code generation_ vs. _limitation to code generation_. When deciding to use Copilot, developers should consider tool integration, user experience, budget, code privacy, and some other aspects, and make trade-offs between these factors. In short, using Copilot is like a double-edged sword, and practitioners need to consider various aspects carefully when deciding whether or not to use it. If Copilot can be used with appropriate programming languages and technologies to correctly implement the functions required by users in developers' IDEs, it will certainly optimize developers' coding workflow and let them focus on what matters most, building software, by letting AI do the redundant work. Otherwise, it will bring difficulties and restrictions to development, making
developers feel frustrated and constrained. The study results can help practitioners being aware of the potential advantages and disadvantages of using Copilot and thus make an informed decision whether to use it for programming activities.
**Understanding the Code Generated by Copilot**: From the results of RQ2.1, RQ2.2, and RQ2.3, we can see that some practitioners think one of the benefits of using Copilot is its _powerful code interpretation_ feature, while other practitioners complained about _the difficulty of understanding the generated code_ by Copilot and called for a _code explanation_ feature. It would be interesting to investigate why developers have opposing attitudes towards understanding the code generated by Copilot and how the code generated by Copilot can be better explained to and understood by developers. Besides, GitHub Copilot Labs, which depends on the Copilot extension, has been released for experimental purposes, and Copilot Labs has a feature that provides explanations of the code generated by Copilot for developers [25]. The latest Copilot X also has a code explanation feature [26], but we do not know why developers do not use Copilot Labs or Copilot X to interpret Copilot-generated code. We suggest that Copilot provide features for developers to better understand the generated code directly, such as generating code comments along with the generated code in IDEs.
**Users' customization on suggestions by Copilot**: According to the results of RQ2.2, some developers thought that one of the limitations and challenges of Copilot is _lack of customization_, and it is expected that a number of developers called for features of Copilot to _allow customization of shortcuts for suggestions_, _allow customization of the format of generated code_, and _accept the needed part of the suggestions_ (see the results of RQ2.3). The existing features of Copilot cannot satisfy users' needs on code suggestions. Users pointed out that they had difficulty in setting up shortcuts for actions on suggestions (e.g., accepting suggestions). Based on the feedback from users, they can only accept suggestions via the tab key; however, they want "_an option to change the keybinding for accepting the suggestions_" instead (GitHub #6919). Besides, a few users also reported that it was hard to distinguish between the code written by themselves and the code suggested by Copilot, and they wanted to customize the color, font, and format of Copilot suggestions. Another expected feature is that users hope they can _accept the needed part of the suggestions_: some users want to accept Copilot suggestions line by line or accept only the next word of a suggestion each time, rather than accepting the entire suggestion. As a result, there is a need for users to customize the way of accepting suggestions by Copilot. Some other expected features (e.g., _allow setting filters for suggestions_, _suggestions in IDE UI can be configured_, and _ability to select the training sources for suggestions_) also relate to customization of suggestions by Copilot. On the basis of the above feedback from users, we believe that it is necessary for Copilot to allow customization for suggestions, which will give developers a better user experience when using Copilot, as they are able to use it in the way they want.
**Towards an Effective Use of Copilot**: Further investigation about the practices
of Copilot can be conducted by questionnaire and interview. The conditions under which the characteristics of using Copilot show up as advantages or disadvantages, and how to use Copilot so as to convert its disadvantages into advantages, are also worth further exploration. Besides, although we have investigated various aspects of using Copilot (e.g., limitations and challenges), we did not look in depth at what types of users (e.g., developers, educators, and students) use Copilot, or when and how they use it. By exploring these aspects, researchers can obtain meaningful information that would help guide towards an effective use of Copilot.
## 6 Threats to Validity
The validity threats are discussed according to the guidelines in [16], and internal validity is not considered, since we did not investigate the relationships between variables and results. The three validity threats presented below highlight potential limitations of this research that may invalidate some of the results. Future research can focus on minimizing these threats to ensure methodological rigor of the study.
**Construct validity** indicates whether the theoretical and conceptual constructs are correctly measured and interpreted. We conducted data labelling, extraction, and analysis manually, which may lead to personal bias. To reduce this threat, the data labelling of SO posts was performed after a pilot labelling used to reach an agreement between the authors. The data extraction and analysis were also conducted by two authors, and the first author rechecked all the results produced by the third author. During the whole process, the first author continuously consulted with the second author to ensure there were no divergences.
**External validity** indicates the degree of generalization of the study results, i.e., the extent to which the results can be generalized to other contexts. We chose two popular developer communities (SO and GitHub Discussions) because SO has been widely used in software engineering studies and GitHub Discussions is a new feature of GitHub for discussing specific topics [17]. These two data sources can partially alleviate the threat to external validity. However, we admit that our selected data sources may not be representative enough to understand all the practices, challenges, and expected features of using Copilot.
**Reliability** indicates the replicability of a study yielding the same or similar results. We conducted a pilot labelling before the formal labelling of SO posts with two authors, and the Cohen's Kappa coefficient is 0.773, indicating a decent consistency. We acknowledge that this threat might still exist due to the small number of posts used in the pilot. All the steps in our study, including the manual labelling, extraction, and analysis of data, were conducted by three authors. During the process, the three authors discussed the results until there were no disagreements, in order to produce consistent results. In addition, the dataset of this study, which contains all the extracted data and labelling results from the SO posts and GitHub discussions, has been provided online for validation and replication purposes [24].
## 7 Conclusions
We conducted an empirical study on SO posts and GitHub discussions to understand the practices, challenges, and expected features of using GitHub Copilot from the practitioners' perspective. We used "_copilot_" as the search term to collect data from SO and collected all the discussions under the "Copilot" category in GitHub Discussions. Finally, we got 303 SO posts and 927 GitHub discussions related to Copilot. Our results identified the programming languages, IDEs, and technologies used with Copilot, the functions implemented by Copilot, and the benefits, limitations, and challenges of using Copilot, which are first-hand information for developers. The main results are that: (1) _JavaScript_ and _Python_ are the programming languages most frequently discussed by developers with Copilot. (2) _Visual Studio Code_ is the dominant IDE used with Copilot. (3) _Node.js_ is the major technology used with Copilot. (4) _Data processing_ is the main function implemented by Copilot. (5) _To help generate code_ is the leading purpose of users using Copilot. (6) _Useful code generation_ is the most common benefit mentioned by developers when using Copilot. (7) _Difficulty of integration_ is the limitation and challenge most frequently encountered when developers use Copilot. (8) Copilot _can be integrated with more IDEs_ is the most expected feature of users.
In the next step, we plan to explore the practices of using Copilot by conducting interviews or an online survey to get practitioners' perspectives on using Copilot, which can supplement our existing data collected from repository mining. Besides, we also plan to further explore various aspects of Copilot, especially how to improve the understanding of developers on the generated code (see Section 5).
## Acknowledgements
This work has been supported by the Natural Science Foundation of China (NSFC) under Grant No. 62172311 and the Special Fund of Hubei Luojia Laboratory.
|
2306.17406 | Radar Cross Section Reduction of Microstrip Patch Antenna using
Metamaterial Techniques | Radar cross section (RCS) reduction has become one of the critical research
areas in recent years. The RCS of the target should be small to avoid
detection. Different methods are used to reduce RCS, but the major challenge
with many RCS minimization methodologies is that, it may deteriorate some
antenna parameters. When antenna mode RCS is considered; structural mode RCS,
and antenna parameters are critical, as the structure should be an antenna and
a RCS reducing structure simultaneously. The techniques like applying Radar
Absorption Material (RAM) entirely over the target, deployment of Energy Band
Gap (EBG) structures, the use of passive, active cancellation, and polarization
conversion are prevalent methods to reduce RCS. The manifestation of
metamaterial property in an antenna results in the antenna's electromagnetic
characteristics becoming negative for a particular bandwidth. Thus the RCS of
the antenna can be reduced to a minimum range by loading the metamaterial
structures. This paper discusses the application of polarization conversion
method (PCM), L-structured and Square-structured fractal metamaterial antenna
for RCS reduction. This paper reports the simulation, fabrication, and testing
of the above antennas with their performance comparison. The antennas are
designed for 4.3GHz frequency with a total dimension of 80mmx80mmx1.6mm.
Antenna parameters like return loss, gain, radiation pattern, and bandwidth are
analyzed along with the RCS. The L-structured metamaterial antenna implemented
has a 29.37% larger bandwidth than the reference patch antenna with a gain of
2.94dB with a return loss of -28.28dB. | Syamly S. B, Job Chunkath | 2023-06-30T05:22:59Z | http://arxiv.org/abs/2306.17406v1 | # Radar Cross Section Reduction of Microstrip Patch Antenna using Metamaterial Techniques
###### Abstract
_Radar cross section (RCS) reduction has become one of the critical research areas in recent years. The RCS of the target should be small to avoid detection. Different methods are used to reduce RCS, but the major challenge with many RCS minimization methodologies is that, it may deteriorate some antenna parameters. When antenna mode RCS is considered; structural mode RCS, and antenna parameters are critical, as the structure should be an antenna and a RCS reducing structure simultaneously. The techniques like applying Radar Absorption Material (RAM) entirely over the target, deployment of Energy Band Gap (EBG) structures, the use of passive, active cancellation, and polarization conversion are prevalent methods to reduce RCS. The manifestation of metamaterial property in an antenna results in the antenna's electromagnetic characteristics becoming negative for a particular bandwidth. Thus the RCS of the antenna can be reduced to a minimum range by loading the metamaterial structures. This paper discusses the application of polarization conversion method (PCM), L- structured and Square - structured fractal metamaterial antenna for RCS reduction. This paper reports the simulation, fabrication, and testing of the above antennas with their performance comparison. The antennas are designed for 4.3GHz frequency with a total dimension of 80mm\(\times\)80mm\(\times\)1.6mm. Antenna parameters like return loss, gain, radiation pattern, and bandwidth are analyzed along with the RCS. The L- structured metamaterial antenna implemented has a 29.37% larger bandwidth than the reference patch antenna with a gain of 2.94dB with a return loss of -28.28dB._
Radar Cross Section Reduction; Polarization Conversion Metamaterial; Fractal Antenna
## I Introduction
Investigations on Radar Cross Section (RCS) reduction methods are rapidly establishing this topic as a well-known research area. In military applications, RCS reduction has received significant attention. RCS reduction of an antenna has gained more attention in recent years due to the increased reliance on communication for airborne systems. This is due to the increased use of antennas, which also cause scattering of incident radiation. Thus almost all systems, whether civilian or military, require antennas with low RCS to avoid detection.
There are different methods conventionally used for RCS reduction, which are based on stealth shaping, the use of low-observable Radar Absorbing Material (RAM), the bionics principle [13], a biased ferrite substrate [1], Energy Band Gap (EBG) structures, and Frequency Selective Structures (FSS). FSS and EBG structures are commonly used nowadays to reduce RCS. The antenna polarization can also be utilized for reducing RCS. All these methods result in good RCS reduction but lead to performance degradation of some antenna parameters.
At present, the focus of the research is on the microstrip patch antenna because of its low-profile characteristics. The latest research in metamaterials has resulted in more innovative antenna designs. The metamaterial structure used in an antenna reduces the antenna RCS because it reduces the antenna size, increases its efficiency, and reduces surface waves and mutual coupling between antenna elements. It also eliminates grating lobes in antenna arrays and leads to improved overall antenna performance.
The RCS reduction of antennas has gained extensive research interest due to the development of electronic countermeasures technology for the military. Over the past decades, many solutions have been presented to reduce RCS. All of these methods help to decrease RCS, but the antenna performance often gets degraded while minimizing RCS. Ideally, the antenna gain, efficiency, directivity, and reflection coefficient should be maintained along with a low RCS. The RCS is expressed in decibel square meters (dBsm).
In this paper, new structures for metamaterial patch antennas are proposed. These have a planar size similar to that of the conventional reference patch antenna but have different metamaterial structures. The analyses of these antennas have demonstrated an improvement in the antenna parameters and RCS reduction. The gain and bandwidth of the square-structured fractal antenna increased to 3.2dB and 201.9MHz, respectively. A return loss of -28.28dB and an RCS reduction of -36.71dBsm were achieved by the proposed L-structured metamaterial antenna.
The paper describes the antenna design using metamaterial structures to obtain low RCS and return loss for microstrip patch antennas. This paper is organized as follows; Section II describes different Radar Cross Section Reduction Methods. In Section III, the Proposed Antenna Design is discussed. Section IV details the Fabrication & Analysis of these antennas. A discussion on Results and Conclusion are included in Section V and Section VI, respectively.
## II Radar Cross Section Reduction Methods
The major factors that affect RCS reduction are the material used in target fabrication, the Radar transmitter frequency, the direction of the illuminating Radar, the incident angle, the reflected angle, the polarization, and the airframe physical geometry. RCS reduction is mainly investigated in military applications. The targets can be military aircraft, missiles, and ships. These targets are manufactured using different kinds of materials.
Radar Absorption Material (RAM) is mainly used in stealth technology to avoid the detection of vehicles or structures by Radar. The RAM is designed and shaped in such a way as to absorb the incident Radio Frequency (RF) energy [2]. These materials are designed to resist the reflection or transmission of electromagnetic radiation. There are resonant and broadband absorbers. The resonant absorbers utilize the resonant property of the material to function, and their effectiveness depends on the frequency of the incident radiation.
The broadband absorber soaks up the incident energy that comes from the Radar [3]. The incident RF energy is converted into heat, thus decreasing the energy scattered or reflected towards the Radar. The absorption level depends upon the frequency at which the Radar is operated. A significant factor deciding the magnitude of the radiation absorbed is the composition of the material used; no available composition can absorb the complete range of Radar frequencies. Thus RAM does not provide invisibility but lowers the Radar cross-section for a specific range of frequencies.
The size of military vehicles, such as ships or aircraft, cannot be changed beyond their operational capabilities. Hence, the cross-section of the system cannot be decreased beyond a limit. The only way is to reduce the other parameters that cause the RCS of the system. The shape of the target can be changed so that the scattered energy is reflected in a direction away from the Radar. For example, replacing a flat surface with a curved surface can minimize narrow and intense specular lobes.
The shaping method is usually challenging to exploit or expensive to implement for vehicles or objects in service [3]. Also, if the objects are not electrically large, then shaping is not very useful.
\[\text{Overall RCS},\quad\sigma=\left|\sqrt{\sigma_{s}}+\sqrt{\sigma_{a}}\,e^{j\theta}\right|^{2}\tag{1}\]

Where,

\(\theta\) = phase difference between the two scattering modes,

\(\sigma_{s}\) = structural mode RCS,

\(\sigma_{a}\) = antenna mode RCS.
When electromagnetic waves are incident on the antenna surface, some energy is scattered back to space; this is called structural mode scattering. Due to the antenna effect, the remaining part of the energy is absorbed, and some part of the absorbed energy is again scattered into space due to impedance mismatches. This latter scattering is called antenna mode scattering. The RCS contributions due to these two scattering mechanisms are, respectively, known as the structural mode RCS and the antenna mode RCS. The expression for the overall RCS is given by Eqn. (1).
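As a quick numerical illustration of Eqn. (1), the following sketch uses made-up values for the two scattering modes (not measurements from this work) and shows how the phase difference drives the coherent sum, with \(\theta=180°\) yielding the deepest cancellation:

```python
import numpy as np

def overall_rcs(sigma_s, sigma_a, theta):
    """Overall RCS per Eqn. (1): coherent sum of structural and antenna modes."""
    return abs(np.sqrt(sigma_s) + np.sqrt(sigma_a) * np.exp(1j * theta)) ** 2

sigma_s = 1.0   # structural mode RCS in m^2 (assumed value)
sigma_a = 0.25  # antenna mode RCS in m^2 (assumed value)

for theta_deg in (0, 90, 180):
    sigma = overall_rcs(sigma_s, sigma_a, np.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg -> sigma = {sigma:.3f} m^2 "
          f"({10 * np.log10(sigma):+.2f} dBsm)")
```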
A method of using Energy Band Gap (EBG) structures to minimize antenna RCS is described in [4]. The EBG structures act as an artificial magnetic conductor (AMC) in a frequency band. The AMC has a reflection coefficient of +1, whereas a perfect electric conductor (PEC) has a reflection coefficient of -1. When these two types of conductors are combined, the reflected waves cancel each other, so the backward scattering is decreased and a low in-band RCS is observed. In the X-band of frequencies, the EBG structure reduces the RCS while the antenna array operates in the S-band of frequencies.
In [4], it is seen that RCS is diminished using EBG structures shaped like mushrooms. This is realized by an array of metallic patches with periodic structures connected to the ground plane. Microstrip patch antennas make use of this kind of EBG structure to realize low-profile antennas with improved performance. These structures are also used for their in-phase reflection bandgap and surface-wave rejection bandgap. The method of reducing the RCS of a patch array antenna at in-band and out-of-band frequencies using mushroom-like EBG structures is discussed in [5].
A performance improvement compared to all the above methods is achieved with the advent of the polarization conversion technique. Papers [6], [7], and [8] discuss the effect of the polarization method. It is observed that, using the polarization effect, RCS minimization and antenna performance can be attained as per the requirement. In [7], the energy bandgap (EBG) structure is used along with the polarization method. The polarization rotation mechanism along with passive EBG cancellation is utilized in [8]. In [9], the method describes a flexible cylindrically curved ground plane on which an AMC structure is created to lower the RCS. The magnetic absorption material proposed in [10] decreases RCS but has a narrow bandwidth.
A metamaterial structure consists of multiple similar unit cells, which can alter the antenna parameters. Hence, matching of the free-space impedance with the impedance due to the electric and magnetic components of the antenna can be achieved, resulting in minimum scattering of the electromagnetic waves at the interface.
Metamaterials are implemented from ordinary materials by designing unit cells of different shapes, so that they function as an artificial material with parameters that can be used to improve antenna performance. The magnitude, phase, and polarization of electromagnetic waves can be controlled by the metamaterial [11]. Different metamaterial loading techniques, such as superstrate loading [16], coplanar loading, and cavity ground loading, are explained in [11]. This paper discusses enhancing the Radar Cross Section Reduction (RCSR) and gain through a Chessboard-Like Metamaterial Surface (CLMS). It is observed that the CLMS cancels the back-scattering effect through the AMC phase difference [14]. The RCS is reduced by the metamaterial structure introduced in the antenna instead of a single patch, thus improving the antenna performance.
## III Proposed Antenna Design
The most commonly used model for RCS reduction is the Polarization Conversion Metamaterial (PCM) structure, which is a chessboard-like structure having four sections separated by a distance. In this method, the polarization conversion metamaterial develops a polarization rotation mechanism, which results in RCSR. The RCSR is achieved when the polarization of the reflected waves is controlled [6].
The antenna structure has a dimension of 80 mm x 80 mm using Flame Retardant (FR4) material as the substrate. The FR4 material has a relative permittivity of 4.4 (\(\epsilon_{r}=4.4\)), with a loss tangent of 0.025 and a thickness of 1.6mm. Coaxial feeding is used for signal coupling to the antenna, since it decreases the chance of spurious radiation and is easy to fabricate and match with the antenna, although it is difficult to use with thick substrates. The structural dimensions of the designed antennas are listed in Table 1.
The antenna is designed to operate at a resonant frequency of 4.3GHz and, as per the simulation, it operates at 4.3GHz without any shift. The antenna is loaded with a metamaterial structure, which has properties such as negative relative permittivity (\(\epsilon_{r}\)), negative relative permeability (\(\mu_{r}\)), and a negative refractive index. The properties are analyzed using Ansys(r) High-Frequency Structure Simulator (HFSS) software and MATLAB(r).
The PCM unit cells are arranged around the radiating patch. Coaxial feeding is given to this radiating patch. The four sections of the antenna contribute reflections, and adjacent sections produce reflections with a 180\({}^{\circ}\) phase shift, which cancel each other. This cancellation results in RCS reduction while enhancing other antenna parameters.
### _Microstrip Patch Antenna Design_
The primary stage of antenna design is based on the equations from [12], after which a dimensional analysis is done. A reference antenna, as shown in Fig. 1(a), consists of a center patch. The center patch dimensions are designed first, and all the parameters are then analyzed. The antenna is designed for a resonant frequency of 4.3GHz on an FR4 substrate with a relative permittivity of 4.4 and a thickness of 1.6mm. The first step is to design a single element and then carry out the analysis. The radiation pattern, RCS, return loss, and gain of the reference antenna are obtained from Ansys(r) HFSS and shown in Table 2.
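For reference, the standard transmission-line design equations (as found in common antenna textbooks) can be scripted; the sketch below is ours, with our own function and variable names, and the resulting first-cut dimensions are of course refined in HFSS rather than used directly.

```python
import math

C = 3e8  # speed of light (m/s)

def patch_dimensions(f_r, eps_r, h):
    """First-cut rectangular patch width W and length L from the standard
    transmission-line model; all lengths in metres."""
    W = C / (2 * f_r) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264) /
                      ((eps_eff - 0.258) * (W / h + 0.8)))
    L = C / (2 * f_r * math.sqrt(eps_eff)) - 2 * dL
    return W, L

W, L = patch_dimensions(4.3e9, 4.4, 1.6e-3)
print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm")
```

For 4.3GHz on 1.6mm FR4 this first cut gives a patch length close to the 16 mm listed in Table 1, while the width is subsequently optimized in simulation.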
The proposed antenna structures are shown in Fig. 1(b), Fig. 2(a), and Fig. 2(b). Each antenna consists of a center patch, ground, substrate, metamaterial structure, and feed port. The proposed antennas were designed with similar dimensions, and the parameters attained are investigated. All these antennas are fed using the coaxial feeding technique, which produces low spurious radiation [15]. Coaxial feeding is done through a 50\(\Omega\) impedance-matched adapter.
In an attempt to improve antenna performance, the metamaterial is loaded in all three structures. It is observed that the metamaterial structure reduced the RCS value to a great extent in all the cases by canceling the scattering due to the incident wave.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{Parameter} & \multicolumn{3}{c|}{Antenna Structure Dimensions (mm)} \\ \cline{2-4} & Antenna & Antenna & Antenna \\ & A & B & C \\ \hline Center Patch Length & 16 & 16 & 16 \\ \hline Center Patch Width & 15.6 & 14.5 & 14.8 \\ \hline Ground Width & 80 & 80 & 80 \\ \hline Ground Length & 80 & 80 & 80 \\ \hline Thickness & 1.6 & 1.6 & 1.6 \\ \hline \end{tabular} A: PCM structured antenna, B: L-structured metamaterial antenna, C: Square-structured fractal metamaterial antenna.
\end{table}
Table 1: The dimension of the antennas implemented
Figure 1: Antenna Structures
Figure 2: Proposed Antenna Structures [11]
## IV Fabrication & Analysis
The antennas investigated are fabricated and fed by a coaxial feeding technique. A microstrip patch antenna consists of a ground plane, a radiating patch, and a substrate. The PCM structured antenna is shown in Fig. 3: Fig. 3(a) shows the patch, and Fig. 3(b) shows the ground structure with the adaptor for the coaxial feed.
The patch of the L-structured metamaterial antenna is shown in Fig. 4(a), and the ground structure with feed is depicted in Fig. 4(b). In Fig. 5(a) and Fig. 5(b), the patch and the ground structure with the feed of the Square-structured fractal metamaterial antenna are displayed.
The antenna parameters such as gain, bandwidth, and radiation pattern are measured. The return loss of the antennas is measured using a Vector Network Analyzer (VNA) and the radiation measurements were carried out inside an anechoic chamber.
## V Results
The Ansys(r) HFSS software is used to design and simulate the antenna. Among the available simulation software, Ansys(r) HFSS is chosen because it gives results that are very close to the experimental ones. The antenna is designed for a center frequency of 4.3GHz, with the goals of reducing the Radar cross section (RCS) and achieving a return loss (S\({}_{11}\) parameter) of less than -10dB at the center frequency.
In HFSS, all the antenna parameters are obtained and validated. An analysis is done by comparing the three antennas. The main feature of these antennas is that they are loaded with a metamaterial structure. The overall structures and dimensions of these antennas are similar, and their metamaterial properties are analyzed.
All three antennas have negative electromagnetic characteristics. This means that they have negative relative permittivity, permeability, and refractive index. These characteristics obtained through simulation are verified with the help of VNA and an anechoic chamber.
### _Reference Patch Antenna Characteristics_
A simulation of the reference patch antenna design yields a bandwidth of 143MHz with a gain of 2.8dB. The antenna has an RCS value of -27.98dB and a Return loss (S\({}_{11}\)) of -16.88dB. These values are shown in the first row of Table 2. From these values, it is clear that the antenna with a simple patch structure cannot provide adequate Radar cross section reduction, return loss, and gain. The return loss and RCS values are plotted in Fig.6 and Fig.7, respectively.
### _Characteristics of PCM Structured Antenna_
The PCM structured antenna is realized on an FR4 substrate with a thickness of 1.6mm. The antenna is loaded with a metamaterial structure around the center patch. The return loss, radiation pattern, and RCS results obtained after testing are shown in Fig. 8 and Fig. 9.
### _Characteristics of Square-Structured Fractal Metamaterial Antenna_
The test results of the Square-structured fractal metamaterial antenna are plotted in Fig. 12 and Fig. 13.
The test results of the three fabricated antenna structures are plotted above. Comparison with the reference patch antenna shows that the return loss improves significantly while the RCS is reduced. The bandwidth of the antennas has improved, while the gain has increased only marginally. The detailed results are given in Table 2.
## VI Conclusion
An effective method for Radar cross section reduction is vital in communication systems and defense applications. The suitability of three different techniques for RCS reduction was investigated in this paper. It is observed that each method has its advantages as well as disadvantages. A single method capable of achieving all the objectives, such as low return loss, high gain, and high bandwidth along with good RCS minimization, was difficult to realize.
The performance evaluation of the three antennas with RCS reduction methods compared in this paper yields the following significant results:
* The metamaterial loading has resulted in the decrease of Radar cross-section in all three fabricated antennas.
* A sharp decrease in return loss was achieved by all the antenna designs in comparison with the reference patch antenna.
* A significant increase in bandwidth was reported for all the antennas with a marginal increase in gain.
* The PCM structured antenna achieved the best RCS value of -37.14dBsm.
* The lowest return loss of -28.28dB was achieved by an L-structured metamaterial antenna.
* The highest bandwidth of 201.9MHz was attained with a Square-structured fractal metamaterial antenna.
Thus it can be inferred that the metamaterial design imparts an overall improvement in antenna parameters while achieving RCS reduction.
## Acknowledgment
The authors would like to thank the Head, faculty, and staff associated with the Department of Electronics at Cochin University of Science and Technology (CUSAT) for providing the laboratory facilities for the antenna characterization.
|
2309.11194 | The level matrix of a tree and its spectrum | Given a rooted tree $T$ with vertices $u_1,u_2,\ldots,u_n$, the level matrix
$L(T)$ of $T$ is the $n \times n$ matrix for which the $(i,j)$-th entry is the
absolute difference of the distances from the root to $u_i$ and $u_j$. This
matrix was implicitly introduced by Balaji and Mahmoud [J. Appl. Prob. 54
(2017) 701-709] as a way to capture the overall balance of a random class of
rooted trees. In this paper, we present various bounds on the eigenvalues of
$L(T)$ in terms of other tree parameters, and also determine the extremal
structures among trees with a given order. Moreover, we establish bounds on the
multiplicity of any eigenvalue in the level spectrum and show that the bounds
are best possible. Furthermore, we provide evidence that the level spectrum can
characterise some trees. In particular, we provide an affirmative answer to a
very recent conjecture on the level energy (sum of absolute values of
eigenvalues). | Audace A. V. Dossou-Olory | 2023-09-20T10:28:21Z | http://arxiv.org/abs/2309.11194v1 | # The level matrix of a tree and its spectrum
###### Abstract.
Given a rooted tree \(T\) with vertices \(u_{1},u_{2},\ldots,u_{n}\), the level matrix \(L(T)\) of \(T\) is the \(n\times n\) matrix for which the \((i,j)\)-th entry is the absolute difference of the distances from the root to \(u_{i}\) and \(u_{j}\). This matrix was implicitly introduced by Balaji and Mahmoud [_J. Appl. Prob._ 54 (2017) 701-709] as a way to capture the overall balance of a random class of rooted trees. In this paper, we present various bounds on the eigenvalues of \(L(T)\) in terms of other tree parameters, and also determine the extremal structures among trees with a given order. Moreover, we establish bounds on the multiplicity of any eigenvalue in the level spectrum and show that the bounds are best possible. Furthermore, we provide evidence that the level spectrum can characterise some trees. In particular, we provide an affirmative answer to a very recent conjecture on the level energy (sum of absolute values of eigenvalues).
Key words and phrases: Rooted tree, level matrix, spectrum, spectral radius, level energy.
2020 Mathematics Subject Classification: Primary 05C05, 05C50; Secondary 05C12, 05C35
## 1. Introduction
A common suitable means for storing graphs in computers, or applying mathematical methods to study their properties is the use of matrices to specify them. Spectral graph theory is concerned with the relationship between graph properties and the spectrum (set of all the eigenvalues) of matrices associated with graphs. It is known that no spectrum of a single matrix adequately describes all the facets of a graph. That is why there is generally a need to introduce new matrices. These include the adjacency matrix, the distance matrix, the signed and unsigned Laplacian, the maximum degree matrix [1], and the much recent notion of ancestral matrix [2], hoping that combinations of these matrices can provide more structural information. The main aim is to reveal the properties of graphs that are characterised by the spectra of these matrices, and an extensive study has been conducted over the past five decades by several researchers.
This paper focuses on the spectrum of the level matrix associated with a rooted tree. Given a rooted tree \(T\) with vertices \(u_{1},u_{2},\ldots,u_{n}\), the level matrix of \(T\), denoted by \(L(T)\), is the \(n\times n\) matrix whose entry \(l_{ij}\) in the \(i\)-th row and \(j\)-th column equals the absolute difference of the levels (distances from the root) of \(u_{i}\) and \(u_{j}\). We will study several properties of the eigenvalues of this matrix and pay particular attention to the spectral radius, which is the largest of the absolute values of the eigenvalues of a given matrix associated with a graph. We denote the level of a vertex \(v\) in a rooted tree \(T\) by \(l(v)\). The absolute levels' difference \(|l(v)-l(w)|\) is a way to measure how close vertices \(v\) and \(w\) are in a rooted tree, and it is worth pointing out that the level matrix is similar in nature to
the usual distance matrix. In particular, the level matrix is symmetric and its diagonal entries are all equal to \(0\). So all eigenvalues of \(L(T)\) are necessarily real numbers and they sum up to \(0\), since the trace of \(L(T)\) is equal to \(0\). For example, we depict in Figure 1 a tree rooted at \(v_{1}\) and its level matrix.
The level matrix has already found an application to applied probability as showed by Balaji and Mahmoud [3]. The authors of [3] employed the absolute levels' differences to define both what they called the _level index_ of a rooted tree and the _Gini index_ of a random class of rooted trees. The Gini index is an analogue of the standard Gini coefficient commonly used in economics to study the distribution of income within a nation. This is also a motivation to study the properties of the level matrix of rooted trees. We will establish various bounds on the eigenvalues of the level matrix in terms of other tree parameters, and also describe the tree structures attaining the bounds. Moreover, we will show that the level spectrum can characterise the structure of some rooted trees. As a corollary to our results, we will determine the maximum level energy among trees with a given order, thus providing a positive answer to a very recent conjecture.
## 2. Main results
In analogy to other graph spectra, the level spectrum of a rooted tree \(T\) is defined as the spectrum (set of all eigenvalues) of the level matrix \(L(T)\). In the example of Figure 1, the characteristic polynomial, \(\det(xI_{9}-L(T))\), is \(x^{9}-80x^{7}-276x^{6}-216x^{5}\) and its zeros (the eigenvalues), rounded to few decimals, are given by:
\[10.415812724,\ 0,\ 0,\ 0,\ 0,\ 0,\ -1.1775860608,\ -2.6888645876,\ -6.5493620755\,.\]
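This example is easy to reproduce numerically: \(L(T)\) depends only on the multiset of vertex levels, and the level multiset \((0,1,1,2,2,2,3,3,3)\) is consistent with the characteristic polynomial above, so the exact shape of the tree in Figure 1 is not needed for the check. The following sketch (ours, using NumPy) recovers the polynomial and the eigenvalues:

```python
import numpy as np

# Levels of the nine vertices; the level matrix is |l_i - l_j|.
levels = np.array([0, 1, 1, 2, 2, 2, 3, 3, 3])
L = np.abs(levels[:, None] - levels[None, :])

lam = np.linalg.eigvalsh(L)
print(np.poly(lam).round(6))    # coefficients of x^9 - 80x^7 - 276x^6 - 216x^5
print(np.sort(lam)[::-1].round(6))
```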
We denote the maximum level of a vertex in a rooted tree \(T\) by \(l_{\max}(T)\). By \(M^{T}\) we mean the transpose of a matrix \(M\). A simple upper bound for an eigenvalue of a level matrix can be obtained using the maximum level of the vertices.
**Proposition 1**.: _Let \(T\) be a rooted tree with \(n\) vertices. We have_
\[|\lambda|\leq(n-1)l_{\max}(T)\]
_for every eigenvalue \(\lambda\) of \(L(T)\). Equality holds for \(n\leq 2\)._
Figure 1. A rooted tree and its level matrix (indexed by vertices \(v_{1},v_{2},\ldots,v_{9}\) in this order).
Proof.: Let \(\boldsymbol{x}=(x_{1}\ x_{2}\ \ldots\ x_{n})^{T}\) be an eigenvector corresponding to an eigenvalue \(\lambda\) of \(L(T)\). By the definition of an eigenvalue,
\[\lambda x_{i}=(l_{i1}\ l_{i2}\ \ldots\ l_{in})\,\boldsymbol{x} =l_{i1}x_{1}+l_{i2}x_{2}+\cdots+l_{in}x_{n}\,,\] \[|\lambda|\cdot|x_{i}| \leq l_{i1}|x_{1}|+l_{i2}|x_{2}|+\cdots+l_{in}|x_{n}|\] \[\leq(l_{i1}+l_{i2}+\cdots+l_{in})\max_{1\leq j\leq n}|x_{j}|\,.\]
Since \(l_{ii}=0\) and \(\max_{1\leq j\leq n}|x_{j}|\neq 0\) by definition of eigenvector, this implies that
\[|\lambda|\cdot|x_{i}|\leq(n-1)l_{\max}(T)\max_{1\leq j\leq n}|x_{j}|\]
for all \(1\leq i\leq n\). In particular, we obtain \(|\lambda|\leq(n-1)l_{\max}(T)\).
A basic identity can be obtained for the eigenvalues of a level matrix as follows.
**Proposition 2**.: _Let \(T\) be a rooted tree with \(n\) vertices and \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) the eigenvalues of \(L(T)\). We have_
\[\lambda_{1}^{2}+\lambda_{2}^{2}+\cdots+\lambda_{n}^{2}=\sum_{i=1}^{n}(l_{i1}^{ 2}+l_{i2}^{2}+\cdots+l_{in}^{2})=2\sum_{1\leq i<j\leq n}l_{ij}^{2}\,.\]
Proof.: The eigenvalues of \(L(T)^{2}\) are \(\lambda_{1}^{2},\lambda_{2}^{2},\ldots,\lambda_{n}^{2}\), so the quantity \(\lambda_{1}^{2}+\lambda_{2}^{2}+\cdots+\lambda_{n}^{2}\) equals the trace of \(L(T)^{2}\). By definition, the \((i,i)\)-th entry of \(L(T)^{2}\) is given by \(l_{i1}l_{1i}+l_{i2}l_{2i}+\cdots+l_{in}l_{ni}=l_{i1}^{2}+l_{i2}^{2}+\cdots+l_{ in}^{2}\), from which we obtain
\[\lambda_{1}^{2}+\lambda_{2}^{2}+\cdots+\lambda_{n}^{2}=\sum_{i=1}^{n}(l_{i1}^ {2}+l_{i2}^{2}+\cdots+l_{in}^{2})\,.\]
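Proposition 2 can be checked numerically; the sketch below (ours) grows a random rooted tree by attaching each new vertex to a uniformly chosen existing one and compares both sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
levels = [0]                     # the root has level 0
for _ in range(n - 1):           # attach each new vertex to a random old one
    levels.append(levels[rng.integers(0, len(levels))] + 1)
levels = np.array(levels)

L = np.abs(levels[:, None] - levels[None, :])
lam = np.linalg.eigvalsh(L)
print(np.isclose((lam**2).sum(), 2 * (np.triu(L, 1) ** 2).sum()))  # True
```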
In order to study further properties of the eigenvalues of the level matrix, we need to recall some important tools for linear algebra. For a symmetric matrix \(M\) and a nonzero vector \(\boldsymbol{x}\), the Rayleigh quotient, denoted by \(R(M,\boldsymbol{x})\), is given by
\[R(M,\boldsymbol{x})=\frac{\boldsymbol{x}^{T}\,M\,\boldsymbol{x}}{\boldsymbol {x}^{T}\,\boldsymbol{x}}\,.\]
It is well-known that \(R(M,\boldsymbol{x})=\lambda\) if \(\boldsymbol{x}\) is an eigenvector for the eigenvalue \(\lambda\). Moreover, the smallest and the greatest eigenvalues of \(M\) can be computed by:
\[\inf_{\|\,\boldsymbol{x}\,\|=1}R(M,\boldsymbol{x})=\inf_{\|\,\boldsymbol{x}\, \|=1}\boldsymbol{x}^{T}\,M\,\boldsymbol{x}\quad\text{ and }\quad\sup_{\|\,\boldsymbol{x}\,\|=1}R(M,\boldsymbol{x})=\sup_{\|\,\boldsymbol{x }\,\|=1}\boldsymbol{x}^{T}\,M\,\boldsymbol{x}\,,\]
respectively (infimum and supremum taken over all unit vectors) [5] (see also [18, p. 456]).
### Level spectral radius
We continue our considerations with the level spectral radius, i.e. the largest absolute value of the eigenvalues of the level matrix of a rooted tree \(T\), which we denote by \(\rho_{L}(T)\).
Invoking Proposition 2 and using the fact that \(\rho_{L}(T)^{2}\) is also the spectral radius of \(L^{2}(T)\), we obtain:
**Corollary 1**.: _Let \(T\) be a rooted tree with \(n\) vertices. We have_
\[\rho_{L}(T)^{2}\geq\frac{1}{n}\sum_{i=1}^{n}(l_{i1}^{2}+l_{i2}^{2}+\cdots+l_{ in}^{2})=\frac{2}{n}\sum_{1\leq i<j\leq n}l_{ij}^{2}\,.\]
_Equality holds for \(n\leq 2\)._
**Theorem 1** (Perron-Frobenius Theorem, [13]).: _If a matrix \(M\) is irreducible and has non-negative entries, then its spectral radius is an eigenvalue of multiplicity one. It corresponds to a unique positive unit eigenvector._
The Perron-Frobenius Theorem is frequently used as a tool in computing the spectral radii of some of the matrices associated with graphs. In particular, if a positive vector is an eigenvector then its corresponding positive eigenvalue is the spectral radius.
**Lemma 1**.: _The level matrix \(L(T)\) of a rooted tree \(T\) is irreducible._
Proof.: Suppose that \(L(T)\) is reducible. Then by the definition of reducible matrices, the vertex set of \(T\) can be partitioned into two non-empty subsets \(U\) and \(V\) such that \(|l(u)-l(v)|=0\) for all \(u\in U\) and \(v\in V\). In particular, \(l(u)=l(r)=0\) for all \(u\in U\), where \(r\in V\) is the root of \(T\). However, this is a contradiction to the fact that \(r\) is the only vertex of \(T\) whose level is \(0\).
Since \(L(T)\) is an irreducible matrix, by the Perron-Frobenius Theorem, its level spectral radius is actually the largest positive eigenvalue and there exists an eigenvector with positive entries for \(\rho_{L}(T)\). We call any such eigenvector a Perron vector of \(L(T)\).
The level index of a rooted tree \(T\), denoted by \(LI(T)\), is defined as half the sum of all entries of \(L(T)\)[3]. To simplify notation, let us set
\[L_{i}(T):=\sum_{j=1}^{n}l_{ij}\,,\text{ where }l_{ij}=|l_{i}-l_{j}|\text{ and }l_{j}=l(v_{j})\,.\]
**Lemma 2**.: _Let \(T\) be a rooted tree with vertex set \(\{v_{1},v_{2},\ldots,v_{n}\}\) such that \(n>1\). Assume that \(l(v_{1})\geq l(v_{2})\geq\cdots\geq l(v_{n})\). Then_
\[L_{i}-L_{k}=(n-2i)l_{i}-2\sum_{j=i+1}^{k-1}l_{j}-(n-2k+2)l_{k}\]
_holds for all \(1\leq i<k\leq n\). In particular,_
\[L_{i}-L_{k}\geq(n-2k+2)(l_{i}-l_{k})\]
_for all \(1\leq i<k\leq n\), with equality if and only if \(l_{i}=l_{i+1}=\cdots=l_{k-1}\)._
Proof.: By definition,
\[L_{i}=\sum_{j=1}^{i-1}(l_{j}-l_{i})+\sum_{j=i+1}^{n}(l_{i}-l_{j})=(n-2i+1)l_{i}+ \sum_{j=1}^{i-1}l_{j}-\sum_{j=i+1}^{n}l_{j}\,,\]
which implies that
\[L_{i} =(n-2k+1+2(k-i))l_{i}+\sum_{j=1}^{i-1}l_{j}-\sum_{j=i+1}^{k}l_{j}- \sum_{j=k+1}^{n}l_{j}\,,\] \[L_{k} =(n-2k+1)l_{k}+\sum_{j=1}^{i-1}l_{j}+\sum_{j=i}^{k-1}l_{j}-\sum_{ j=k+1}^{n}l_{j}\,,\] \[L_{i}-L_{k} =(n-2k+1)(l_{i}-l_{k})+2(k-i)l_{i}-(l_{i}+l_{k})-2\sum_{j=i+1}^{k- 1}l_{j}\]
for all \(1\leq i<k\leq n\), where an empty sum is treated as \(0\). Thus, the identity
\[L_{i}-L_{k}=(n-2i)l_{i}-(n-2k+2)l_{k}-2\sum_{j=i+1}^{k-1}l_{j}\]
follows. Furthermore, we have \(\sum_{j=i+1}^{k-1}l_{j}\leq(k-i-1)l_{i}\) with equality if and only if \(l_{i}=l_{i+1}=\cdots=l_{k-1}\). This implies that \(L_{i}-L_{k}\geq(n-2k+2)(l_{i}-l_{k})\).
By \(\mathbf{1}\) we mean a column vector whose entries are all equal to \(1\). The lower bound in the following result relates the level index of a rooted tree to its level spectrum.
**Theorem 2**.: _Let \(T\) be an \(n\)-vertex rooted tree. We have_
\[\frac{2}{n}LI(T)\leq\rho_{L}(T)\leq\max_{1\leq i\leq n}L_{i}(T)\,.\]
_Equality holds in the lower bound if and only if \(n\leq 2\)._
Proof.: The statement of the theorem is trivial for \(n\leq 2\). So we assume \(n\geq 3\). By definition, the quantity \(\mathbf{1}^{T}\,L(T)\,\mathbf{1}\) is the sum of all entries of \(L(T)\). Using the Rayleigh quotient, we obtain
\[\rho_{L}(T)\geq R(L(T),\mathbf{1}\,/\sqrt{n})=2LI(T)/n\,, \tag{1}\]
proving the lower bound. If equality holds, then by the Perron-Frobenius theorem, \(\mathbf{1}\,/\sqrt{n}\) is the unique positive unit eigenvector corresponding to \(\rho_{L}(T)\). Hence \(L(T)\,\mathbf{1}=\rho_{L}(T)\,\mathbf{1}\), i.e. the row sums of \(L(T)\) are all equal to \(\rho_{L}(T)\), which implies that \(L_{i}=L_{k}\) for all \(1\leq i<k\leq n\). Using Lemma 2, we obtain
\[L_{k-1}-L_{k}=(n-2k+2)(l_{k-1}-l_{k})\]
for all \(2\leq k\leq n\). We choose (always possible) \(k\) such that \(l_{k-1}\neq l_{k}\). Also, recall that \(l_{k-1}\geq l_{k}\). We consider two scenarios:
* \(k-1\neq n/2\). In this case, \(L_{k-1}-L_{k}\neq 0\) which is a contradiction.
* \(k-1=n/2\). In this case, \(n\) must be even and \(k\geq 3\) is the sole index for which \(l_{k-1}\neq l_{k}\). Thus, we have \(l_{1}=\cdots=l_{k-1}>l_{k}=\cdots=l_{n}=0\), which is absurd.
We conclude that equality never holds in (1) for \(n>2\).
For the upper bound, let \(\boldsymbol{y}=(y_{1}\ y_{2}\ \ldots\ y_{n})^{T}\) be a Perron vector of \(L(T)\). We have
\[\rho_{L}(T)y_{i} =(l_{i1}\ l_{i2}\ \ldots\ l_{in})\,\boldsymbol{y}=l_{i1}y_{1}+l_{i2 }y_{2}+\cdots+l_{in}y_{n}\] \[\leq(l_{i1}+l_{i2}+\cdots+l_{in})\max_{1\leq j\leq n}y_{j}\]
for all \(1\leq i\leq n\). In particular, we get \(\rho_{L}(T)\leq l_{k1}+l_{k2}+\cdots+l_{kn}\) with \(k\) being an index of the maximum value of the entries of \(\boldsymbol{y}\). Therefore,
\[\rho_{L}(T)\leq\max_{1\leq i\leq n}\sum_{j=1}^{n}l_{ij}=\max_{1\leq i\leq n}L_ {i}(T)\]
follows immediately.
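Both bounds of Theorem 2 are easy to observe numerically (our sketch; the random-tree construction is the same as in the check of Proposition 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15
levels = [0]
for _ in range(n - 1):
    levels.append(levels[rng.integers(0, len(levels))] + 1)
levels = np.array(levels)

L = np.abs(levels[:, None] - levels[None, :])
rho = max(np.linalg.eigvalsh(L))   # the level spectral radius (Perron-Frobenius)
Li = L.sum(axis=1)                 # the row sums L_i(T)
LI = Li.sum() / 2                  # the level index
print(2 * LI / n <= rho <= Li.max())  # True
```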
Since \((\sum_{i=1}^{n}L_{i}/n)^{2}\leq n\sum_{i=1}^{n}(L_{i}/n)^{2}\) by the Cauchy-Schwarz inequality, we can improve the lower bound stated in Theorem 2.
**Theorem 3**.: _Let \(T\) be an \(n\)-vertex rooted tree. Then it holds that_
\[\rho_{L}(T)\geq\sqrt{\frac{1}{n}\sum_{j=1}^{n}L_{j}(T)^{2}}\,.\]
Proof.: Recall that \(\rho_{L}(T)^{2}\) is the spectral radius of \(L(T)^{2}\). Let \(\boldsymbol{x}\) be the unit Perron vector of \(L(T)\). Then \(\boldsymbol{x}\) is an eigenvector for the eigenvalue \(\rho_{L}(T)^{2}\) and using the Rayleigh quotient, we obtain
\[\rho_{L}(T)^{2}=\boldsymbol{x}^{T}\,L(T)^{2}\,\boldsymbol{x}\geq\boldsymbol{ 1}^{T}\,L(T)^{2}\,\boldsymbol{1}\,/n=(\boldsymbol{1}^{T}\,L(T))(L(T)\, \boldsymbol{1})/n\,.\]
Moreover, \(\boldsymbol{1}^{T}\,L(T)\) equals the vector \((L_{1}\ L_{2}\ \ldots\ L_{n})\) and \(L(T)\,\boldsymbol{1}=(L_{1}\ L_{2}\ \ldots\ L_{n})^{T}\). Thus, we have \((\boldsymbol{1}^{T}\,L(T))(L(T)\,\boldsymbol{1})=(L_{1}^{2}+L_{2}^{2}+\cdots+ L_{n}^{2})\), which implies the inequality
\[\rho_{L}(T)^{2}\geq\frac{1}{n}(L_{1}^{2}+L_{2}^{2}+\cdots+L_{n}^{2})\]
as stated.
Let us mention that it is still possible to improve the lower bound stated in the previous theorem. Taking the unit vector
\[\frac{1}{\sqrt{L_{1}^{2}+L_{2}^{2}+\cdots+L_{n}^{2}}}(L_{1}\ L_{2}\ \ldots\ L_{n})\]
instead of \(\boldsymbol{1}\,/\sqrt{n}\) in the proof of the previous result, we can establish the following.
**Theorem 4**.: _Let \(T\) be an \(n\)-vertex rooted tree. Then_
\[\rho_{L}(T)\geq\sqrt{\frac{\sum_{1\leq i\leq n}Q_{i}^{2}}{\sum_{1\leq j\leq n}L_{ j}^{2}}}=\sqrt{\frac{\sum_{1\leq i\leq n}Q_{i}^{2}}{\sum_{1\leq i\leq n}Q_{i}}}\]
_holds, where \(Q_{i}=\sum_{1\leq j\leq n}l_{ij}L_{j}\)._
The details are omitted.
We define a rooted star to be a star whose root is the central vertex, and by rooted path we mean a path rooted at one of its endvertices.
**Theorem 5**.: _Among all rooted trees with \(n\) vertices, the rooted star \(S_{n}\) uniquely minimises the level spectral radius. Moreover, \(\rho_{L}(S_{n})=\sqrt{n-1}.\)_
Proof.: Let \(T\) be an \(n\)-vertex rooted tree and \(\boldsymbol{y}=(y_{1}\ y_{2}\ \dots\ y_{n})^{T}\) the unit Perron vector of \(L(S_{n})\). We have
\[R(L(T),\boldsymbol{y})-R(L(S_{n}),\boldsymbol{y}) =\boldsymbol{y}^{T}\,L(T)\,\boldsymbol{y}-\boldsymbol{y}^{T}\,L(S _{n})\,\boldsymbol{y}\] \[=\sum_{1\leq j\leq n}\sum_{1\leq i\leq n}\big{(}l_{ij}(T)-l_{ij}( S_{n})\big{)}y_{i}y_{j}\,.\]
Note that \(l_{ij}(T)\geq 0\) for all non-root vertices of \(T\) and that \(l_{ij}(T)\geq 1\) if one of these vertices (not both) is the root of \(T\). Thus, we have \(l_{ij}(T)\geq l_{ij}(S_{n})\) for a suitable ordering of the vertices, for all \(1\leq i,j\leq n\). Equality holds if and only if all non-root vertices of \(T\) have level \(1\), i.e. if \(T\) is the rooted star \(S_{n}\). Therefore, \(R(L(T),\boldsymbol{y})-R(L(S_{n}),\boldsymbol{y})\geq 0\) and
\[\rho_{L}(T)\geq R(L(T),\boldsymbol{y})>R(L(S_{n}),\boldsymbol{y})=\rho_{L}(S _{n})\]
for \(T\neq S_{n}\). It can be noted that \(L(S_{n})\) is the same as the adjacency matrix of \(S_{n}\). Hence, we have \(\rho_{L}(S_{n})=\sqrt{n-1}\).
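The closed form \(\rho_{L}(S_{n})=\sqrt{n-1}\) can be confirmed directly (our sketch):

```python
import numpy as np

for n in (4, 7, 10):
    levels = np.array([0] + [1] * (n - 1))            # the rooted star S_n
    L = np.abs(levels[:, None] - levels[None, :])
    rho = max(np.linalg.eigvalsh(L))
    print(np.isclose(rho, np.sqrt(n - 1)))            # True
```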
The matrix comparison argument used in the previous proof leads us to the following observation.
**Lemma 3**.: _For a rooted tree \(T\), we have \(l_{ij}\leq d_{ij}\), where \(d_{ij}\) is the \((i,j)\)-th entry of the distance matrix of \(T\)._
Proof.: Consider two vertices \(v_{i}\) and \(v_{j}\) of \(T\). Let \(u\) be the last vertex on the common subpath from the root (possibly, \(u\) can coincide with the root of \(T\)) to \(v_{i}\) and \(v_{j}\). We have
\[l(v_{i})-l(v_{j})=d(v_{i},u)-d(v_{j},u)\,.\]
If \(v_{i}\) and \(v_{j}\) lie on the same path to the root of \(T\), then one of these vertices, say \(v_{j}\) coincides with \(u\), implying that \(d_{ij}=d(v_{i},v_{j})=d(v_{i},u)\). Thus, \(l(v_{i})-l(v_{j})=d_{ij}\) holds.
If \(v_{i}\) and \(v_{j}\) do not lie on the same path to the root of \(T\), then none of these vertices coincides with \(u\), implying that \(d_{ij}=d(v_{i},v_{j})=d(v_{i},u)+d(u,v_{j})\). Thus, we obtain
\[l(v_{i})-l(v_{j})=d(v_{i},u)-d(v_{j},u)<d(v_{i},u)+d(u,v_{j})=d_{ij}\,.\]
For a tree \(T\), we denote by \(D(T)\) its distance matrix.
**Theorem 6**.: _Among all rooted trees with \(n\) vertices, the rooted path \(P_{n}\) uniquely maximises the level spectral radius. Moreover, \(\rho_{L}(P_{n})=1/(\cosh(t)-1)\), where \(t>0\) satisfies_
\[\tanh(t/2)\tanh(n\cdot t/2)=1/n\,.\]
Proof.: Let \(T\) be an \(n\)-vertex rooted tree and \(\boldsymbol{y}=(y_{1}\ y_{2}\ \dots\ y_{n})^{T}\) the unit Perron vector of the level matrix \(L(T)\). We have
\[R(D(T),\boldsymbol{y})-R(L(T),\boldsymbol{y})=\sum_{1\leq i\leq n}\sum_{1\leq j \leq n}(d_{ij}-l_{ij})y_{i}y_{j}\geq 0\,,\]
where the inequality follows from Lemma 3. Consequently, we get
\[R(D(T),\boldsymbol{y})\geq R(L(T),\boldsymbol{y})=\rho_{L}(T)\]
with equality if and only if \(d_{ij}=l_{ij}\) for all \(i,j\), in which case \(T\) must coincide with the rooted path \(P_{n}\) (see the proof of Lemma 3). In [16] Ruzieh and Powers proved that the \(n\)-vertex path has the maximum distance spectral radius among all \(n\)-vertex trees. Using this result, we obtain
\[\rho_{L}(T)\leq R(D(T),\boldsymbol{y})\leq\rho_{D}(T)\leq\rho_{D}(P_{n})\,.\]
The formula for \(\rho_{L}(P_{n})=\rho_{D}(P_{n})\) can also be found in [16], thus completing the proof.
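The transcendental description of \(\rho_{L}(P_{n})\) is easy to confirm numerically; in the sketch below (ours) the defining equation is solved by bisection and compared with a direct eigenvalue computation:

```python
import numpy as np

def rho_path(n, lo=1e-9, hi=10.0, iters=200):
    """Solve tanh(t/2) * tanh(n*t/2) = 1/n for t > 0 by bisection,
    then return 1/(cosh(t) - 1)."""
    f = lambda t: np.tanh(t / 2) * np.tanh(n * t / 2) - 1 / n
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    t = (lo + hi) / 2
    return 1 / (np.cosh(t) - 1)

n = 8
levels = np.arange(n)                                  # the rooted path P_n
L = np.abs(levels[:, None] - levels[None, :])
print(rho_path(n), max(np.linalg.eigvalsh(L)))         # the two values agree
```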
Another bound can be obtained for the level spectral radius using a quotient matrix of \(L(T)\). This is shown in the next theorem.
**Theorem 7**.: _Let \(T\) be a rooted tree with \(n>1\) vertices. We have_
\[\rho_{L}(T)\geq\frac{1}{n-1}\max_{1\leq i\leq n}\left(LI-L_{i}+\sqrt{(LI-L_{i}) ^{2}+(n-1)L_{i}^{2}}\right).\]
Proof.: Let \(T\) be a rooted tree with vertex set \(\{v_{1},v_{2},\dots,v_{n}\}\). Fix \(1\leq i\leq n\) and reorder the vertices of \(T\) starting from \(v_{i}\) to construct \(L(T)\). Then \(L(T)\) can be partitioned into blocks/submatrices \(A_{1,1},A_{1,2},A_{2,1},A_{2,2}\) as follows:
\begin{tabular}{c|c|c c c c c c c} & \(v_{i}\) & \(v_{1}\) & \(\cdots\) & \(v_{i-1}\) & \(v_{i+1}\) & \(v_{i+2}\) & \(\cdots\) & \(v_{n}\) \\ \hline \(v_{i}\) & \(A_{1,1}=0\) & \multicolumn{6}{c}{\(A_{1,2}\)} \\ \hline \(v_{1}\) & & & & & & & \\ \(\vdots\) & & & & & & & \\ \(v_{i-1}\) & & & & & & & \\ \(v_{i+1}\) & \(A_{2,1}\) & & & \(A_{2,2}\) & & & \\ \(v_{i+2}\) & & & & & & & \\ \(\vdots\) & & & & & & & \\ \(v_{n}\) & & & & & & & \\ \hline \end{tabular}

The average row sums of the blocks \(A_{1,1},A_{1,2},A_{2,1},A_{2,2}\) are
\[0,\quad L_{i},\quad L_{i}/(n-1),\quad 2(LI-L_{i})/(n-1)\,,\]
respectively. Then the quotient matrix corresponding to this partition of \(L(T)\) is given by
\[Q(T):=\begin{pmatrix}0&L_{i}\\ L_{i}/(n-1)&2(LI-L_{i})/(n-1)\end{pmatrix}\,.\]
The two eigenvalues of \(Q(T)\) are solutions of the quadratic equation in variable \(X\):
\[X^{2}-2X(LI-L_{i})/(n-1)-L_{i}^{2}/(n-1)=0\,,\]
of which the largest one is given by
\[\frac{1}{n-1}\Big{(}LI-L_{i}+\sqrt{(LI-L_{i})^{2}+(n-1)L_{i}^{2}}\Big{)}\,.\]
It is known (see [8]) that the eigenvalues of \(Q(T)\) interlace those of \(L(T)\), and the result follows.
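The quotient-matrix bound of Theorem 7 can likewise be verified on random rooted trees (our sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
levels = [0]
for _ in range(n - 1):
    levels.append(levels[rng.integers(0, len(levels))] + 1)
levels = np.array(levels)

L = np.abs(levels[:, None] - levels[None, :])
rho = max(np.linalg.eigvalsh(L))
Li = L.sum(axis=1)
LI = Li.sum() / 2
bound = ((LI - Li) + np.sqrt((LI - Li) ** 2 + (n - 1) * Li**2)).max() / (n - 1)
print(bound <= rho)  # True
```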
To simplify notation, let us set
\[H(T):=\sum_{i=1}^{n}(l_{i1}^{2}+l_{i2}^{2}+\cdots+l_{in}^{2})=2\sum_{1\leq i<j \leq n}l_{ij}^{2}\]
for a rooted tree \(T\) with level matrix \(L(T)=(l_{ij})_{1\leq i,j\leq n}\). We can obtain further bounds for the eigenvalues of the level matrix as follows.
**Theorem 8**.: _Let \(T\) be an \(n\)-vertex rooted tree and \(\lambda\) an eigenvalue of \(L(T)\). Then it holds that_
\[\lambda^{2}\leq\frac{n-1}{n}H(T)\,.\]
Proof.: Let \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) be all the eigenvalues of \(L(T)\). Since \(\sum_{i=1}^{n}\lambda_{i}=0\), we have
\[|\lambda_{j}|=\Big{|}\sum_{i=1,i\neq j}^{n}\lambda_{i}\Big{|}\leq\sum_{i=1,i \neq j}^{n}|\lambda_{i}|\,.\]
The Cauchy-Schwarz inequality yields
\[\Big{(}\sum_{i=1,i\neq j}^{n}|\lambda_{i}|\Big{)}^{2}\leq(n-1)\sum_{i=1,i\neq j }^{n}\lambda_{i}^{2}\,,\]
which implies
\[\lambda_{j}^{2}\leq\Big{(}\sum_{i=1,i\neq j}^{n}|\lambda_{i}|\Big{)}^{2}\leq( n-1)\sum_{i=1,i\neq j}^{n}\lambda_{i}^{2}\,.\]
Furthermore, using the identity \(H(T)=\sum_{i=1}^{n}\lambda_{i}^{2}\) established in Proposition 2, we obtain
\[\lambda_{j}^{2}\leq(n-1)\sum_{i=1,i\neq j}^{n}\lambda_{i}^{2}=(n-1)(H(T)- \lambda_{j}^{2})\,,\]
or equivalently, \(n\cdot\lambda_{j}^{2}\leq(n-1)H(T)\). This completes the proof.
The previous bound on \(\lambda\) can be improved further as shown in the next theorem.
**Theorem 9**.: _Let \(T\) be a rooted tree with \(n>2\) vertices and \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) all the eigenvalues of \(L(T)\). Then the following inequalities hold:_
\[\sqrt{\frac{H(T)}{n(n-1)}}\leq\ \lambda_{1}\leq\sqrt{\frac{n-1}{n}H(T)}\,,\quad- \sqrt{\frac{n-1}{n}H(T)}\leq\ \lambda_{n}\leq-\sqrt{\frac{H(T)}{n(n-1)}}\,,\]
_and_
\[-\sqrt{\frac{(j-1)H(T)}{n(n-j+1)}}\leq\ \lambda_{j}\leq\sqrt{\frac{(n-j)H(T)}{j \cdot n}}\]
_for all \(j\in\{2,\ldots,n-1\}\)._
To prove Theorem 9, we will employ certain inequalities provided by A. Lupas [12].
**Lemma 4** (Theorem 2, [12]).: _Let \(n>2\) be an integer and \(P_{n}(x)\in\mathbb{R}[x]\) a monic polynomial of degree \(n\) with only real roots. If \(x_{1}\geq x_{2}\geq\cdots\geq x_{n}\) are all the roots of \(P_{n}(x)\), then the following relations hold:_
\[x_{1}\in\left[y+\frac{1}{n}\sqrt{\frac{z}{n-1}},\ y+\frac{1}{n} \sqrt{(n-1)z}\right],\quad x_{n}\in\left[y-\frac{1}{n}\sqrt{(n-1)z},\ y-\frac{1}{n} \sqrt{\frac{z}{n-1}}\right],\]
_and_
\[x_{j}\in\left[y-\frac{1}{n}\sqrt{\frac{(j-1)z}{n-j+1}},\ y+\frac {1}{n}\sqrt{\frac{(n-j)z}{j}}\right]\]
_for all \(j\in\{2,\ldots,n-1\}\), where_
\[y=(x_{1}+x_{2}+\cdots+x_{n})/n\quad\text{and}\quad z=n(x_{1}^{2} +x_{2}^{2}+\cdots+x_{n}^{2})-(x_{1}+x_{2}+\cdots+x_{n})^{2}\,.\]
Proof of Theorem 9.: We apply Lemma 4 to the polynomial
\[P_{n}(x)=(x-\lambda_{1})(x-\lambda_{2})\ldots(x-\lambda_{n})\,.\]
Since \(\lambda_{1}+\lambda_{2}+\cdots+\lambda_{n}=0\) and \(\lambda_{1}^{2}+\lambda_{2}^{2}+\cdots+\lambda_{n}^{2}=H(T)\) by Proposition 2, we obtain
\[\lambda_{1}\in\left[\frac{1}{n}\sqrt{\frac{nH(T)}{n-1}},\ \frac{1}{n} \sqrt{(n-1)nH(T)}\right],\quad\lambda_{n}\in\Big{[}-\frac{1}{n}\sqrt{(n-1)nH( T)},\ -\frac{1}{n}\sqrt{\frac{nH(T)}{n-1}}\Big{]}\,,\]
and
\[\lambda_{j}\in\Big{[}-\frac{1}{n}\sqrt{\frac{(j-1)nH(T)}{n-j+1}},\ \frac{1}{n}\sqrt{\frac{(n-j)nH(T)}{j}}\Big{]}\]
for all \(j\in\{2,\ldots,n-1\}\). This completes the proof.
In particular, Theorem 9 shows that the interval
\[\Big{[}-\sqrt{\frac{n-1}{n}H(T)},\ \sqrt{\frac{n-1}{n}H(T)}\Big{]} \tag{2}\]
contains the spectrum of \(L(T)\) for any rooted tree \(T\) with \(n\) vertices.
### Level energy and eigenvalue's multiplicity
The energy of a graph, defined by Ivan Gutman [7] in 1978, is a much studied quantity in the mathematical literature. In a manner fully analogous to other matrices associated with a graph, the level energy \(E_{L}(T)\) of a rooted tree \(T\) is the sum of the absolute values of the eigenvalues of \(L(T)\)[6]. We can apply Lemma 4 to the polynomial
\[(x-|\lambda_{1}|)(x-|\lambda_{2}|)\ldots(x-|\lambda_{n}|)\]
to establish bounds for the level energy \(E_{L}(T)\) similar to those in Theorem 9.
Since the trace of the level matrix of \(T\) equals \(0\), the level energy \(E_{L}(T)\) is precisely twice the sum of the positive eigenvalues. In particular, we have \(E_{L}(T)\geq 2\rho_{L}(T)\) and any lower bound for the level spectral radius implies a lower bound for the level energy. Moreover, \(E_{L}(T)=2\rho_{L}(T)\) if and only if \(L(T)\) has precisely one positive eigenvalue. Thus, a natural question is to characterise rooted trees with only one positive level eigenvalue. In the sequel, we first address some particular cases of this problem.
The Cauchy interlacing theorem is quite often a key result in the approach of studying the spectrum of matrices associated with graphs. It captures the relationship between the spectrum of a symmetric matrix and that of its principal submatrices, see [4, 8, 15].
**Theorem 10**.: _Let \(T\) be a rooted tree with \(n>1\) vertices and maximum level \(l_{\max}\). Then the multiplicity of \(0\) as an eigenvalue for \(L(T)\) is at most \(n-1-l_{\max}\). If equality holds, then \(L(T)\) must have only one positive eigenvalue. In particular, equality holds for all rooted paths and all rooted versions of stars._
Proof.: Let \(T\) be a tree with \(n>1\) vertices and with root \(r\). Denote by \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\) all the eigenvalues of \(L(T)\). Consider a subtree of \(T\) rooted at \(r\) which is a path \(P_{m}\). We choose \(m=1+l_{\max}\leq n\), which is always possible. Let \(M\) be the submatrix of \(L(T)\) obtained by deleting all rows and columns not indexed by the vertices of \(P_{m}\). Note that the deletion of these vertices does not affect the levels of the other vertices in \(T\). This means that \(M\) is the level matrix of \(P_{m}\). Let the eigenvalues of \(M\) be \(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{m}\). Applying the Cauchy interlacing theorem [4, 8], we get
\[\lambda_{n-m+2}\leq\mu_{2}\,,\ \lambda_{n-m+3}\leq\mu_{3}\,,\ \ldots,\ \lambda_{n-1}\leq\mu_{m-1},\ \lambda_{n}\leq\mu_{m}\,.\]
Since \(L(P_{m})=D(P_{m})\) has precisely \(m-1\) negative eigenvalues [4, 16], it follows that
\[\lambda_{n-m+2},\lambda_{n-m+3},\ldots,\lambda_{n-1},\lambda_{n}<0\,.\]
Thus, the number of nonnegative eigenvalues of \(L(T)\) is at most \(n-m+1\). In particular, the multiplicity of \(0\) as an eigenvalue for \(L(T)\) is at most \(n-m=n-1-l_{\max}\). If equality holds, then \(L(T)\) must have only one positive eigenvalue.
The level matrix of the rooted star \(S_{n}\) is the same as the adjacency matrix of \(S_{n}\). Then it has only two nonzero eigenvalues given by \(\pm\sqrt{n-1}\). Since the maximum level of \(S_{n}\) is \(1\), we see that the multiplicity of \(0\) as an eigenvalue for \(L(S_{n})\) is precisely \(n-2=n-1-l_{\max}\).
Let \(R_{n}\) stand for the \(n\)-vertex star rooted at one of its non-central vertices. It was shown in [6] that \(L(R_{n})\) has precisely three nonzero eigenvalues, namely the zeros of \(x^{3}+(-5n+9)x-4n+8\). Since the maximum level of \(R_{n}\) is \(2\), we see that the multiplicity of \(0\) as an eigenvalue for \(L(R_{n})\) is precisely \(n-3=n-1-l_{\max}\).
Recall that rooted paths are the only trees for which \(0\) is not a level eigenvalue (see also [6]). This fact is further confirmed by this theorem since \(l_{\max}=n-1\) for the rooted path \(P_{n}\).
Theorem 11 below shows that besides rooted paths, the level spectrum can also specify the topological structure of other trees.
**Theorem 11**.: _Let \(n>2\) and \(B\) be an \(n\times n\) level matrix. Then \(0\) is an eigenvalue of multiplicity \(n-2\) for \(B\) if and only if \(B\) is associated with the rooted star \(S_{n}\)._
Proof.: We already know that \(L(S_{n})\) has only two nonzero eigenvalues.
Conversely, let \(T\) be a tree with \(n>2\) vertices and with root \(r\) such that \(0\) is an eigenvalue of multiplicity \(n-2\) for \(L(T)\). Suppose (to the contrary) that \(T\) is not the rooted star \(S_{n}\). Denote by \(\lambda_{1}>\lambda_{2}=\cdots=\lambda_{n-1}>\lambda_{n}\) all the eigenvalues of \(L(T)\). Consider a subtree of \(T\) rooted at \(r\) which is a path \(P_{m}\). We choose \(m\geq 3\), which is always possible since \(T\neq S_{n}\). Let \(M\) be the submatrix of \(L(T)\) obtained by deleting all rows and columns not indexed by the vertices of \(P_{m}\). Then \(M\) is the level matrix of \(P_{m}\) and its eigenvalues can be denoted by \(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{m}\). By the Cauchy interlacing theorem [4, 8], we have
\[\lambda_{m-1}\geq\mu_{m-1}\geq\lambda_{n-1}\,.\]
Since \(\lambda_{n-1}=0\) by assumption, we obtain \(\mu_{m-1}\geq 0\). This is a contradiction: since \(m\geq 3\), the matrix \(L(P_{m})=D(P_{m})\) has \(m-1\geq 2\) negative eigenvalues, so \(\mu_{m-1}<0\). Hence \(T\) must be the rooted star \(S_{n}\).
We finish this paper by providing a result on the multiplicity of an arbitrary (not only \(0\)) eigenvalue in the spectrum of the level matrix of a tree. For a rooted tree \(T\) and a leaf (pendant vertex) \(v\) of \(T\), let us denote by \(\text{mul}_{T}(\lambda)\) the multiplicity of \(\lambda\) as an eigenvalue of \(L(T)\) and by \(T-v\) the subtree obtained from \(T\) by deleting \(v\).
**Theorem 12**.: _Let \(T\) be a rooted tree and \(\lambda\) any eigenvalue of \(L(T)\). Then_
\[|\text{mul}_{T}(\lambda)-\text{mul}_{T-v}(\lambda)|\leq 1\]
_holds for all leaves \(v\) of \(T\)._
Proof.: The statement is trivial for \(n=2\). So we assume \(n>2\). Let \(T\) be a rooted tree with \(n\) vertices and \(v\) a leaf of \(T\). Denote by \(\lambda_{1}>\lambda_{2}\geq\cdots\geq\lambda_{n}\) and \(\mu_{1}>\mu_{2}\geq\cdots\geq\mu_{n-1}\) all the level eigenvalues of \(T\) and \(T-v\), respectively. Assume that \(\lambda=\lambda_{k+1}\) has multiplicity \(m\), i.e.
\[\lambda_{1}\geq\cdots\geq\lambda_{k}>\lambda_{k+1}=\cdots=\lambda_{k+m}> \lambda_{k+m+1}\geq\cdots\geq\lambda_{n}\,.\]
By the Cauchy interlacing theorem [4, 8], we have
\[\lambda_{k}\geq\mu_{k}\geq\lambda_{k+1}=\mu_{k+1}=\cdots=\mu_{k+m-1}=\lambda_{k+m }\geq\mu_{k+m}\geq\lambda_{k+m+1}\,.\]
We observe four possible scenarios:
* If \(\mu_{k}=\lambda_{k+1}\) and \(\lambda_{k+m}=\mu_{k+m}\), then \(\lambda_{k}>\mu_{k}\) and \(\mu_{k+m}>\lambda_{k+m+1}\). In this case, we get \[\mu_{k-1}\geq\lambda_{k}>\mu_{k}=\cdots=\mu_{k+m}>\lambda_{k+m+1}\geq\mu_{k+m+1 }\,.\] Thus \(\lambda=\lambda_{k+1}=\mu_{k}\) is an eigenvalue of multiplicity \(m+1\) for \(L(T-v)\). Hence \(\operatorname{mul}_{T}(\lambda)-\operatorname{mul}_{T-v}(\lambda)=-1\).
* If \(\mu_{k}>\lambda_{k+1}\) and \(\lambda_{k+m}>\mu_{k+m}\), then \[\mu_{k}>\lambda_{k+1}=\mu_{k+1}=\cdots=\mu_{k+m-1}=\lambda_{k+m}>\mu_{k+m}\,,\] which implies that \(\lambda=\lambda_{k+1}=\mu_{k+1}\) is an eigenvalue of multiplicity \(m-1\) for \(L(T-v)\). Hence \(\operatorname{mul}_{T}(\lambda)-\operatorname{mul}_{T-v}(\lambda)=1\).
* If \(\mu_{k}=\lambda_{k+1}\) and \(\lambda_{k+m}>\mu_{k+m}\), then \(\lambda_{k}>\mu_{k}\) and we obtain \[\mu_{k-1}\geq\lambda_{k}>\mu_{k}=\cdots=\mu_{k+m-1}>\mu_{k+m}\,.\] Hence \(\lambda=\lambda_{k+1}=\mu_{k}\) is an eigenvalue of multiplicity \(m\) for \(L(T-v)\), i.e. \(\operatorname{mul}_{T}(\lambda)-\operatorname{mul}_{T-v}(\lambda)=0\).
* If \(\mu_{k}>\lambda_{k+1}\) and \(\lambda_{k+m}=\mu_{k+m}\), then \(\mu_{k+m}>\lambda_{k+m+1}\) and we get \[\mu_{k}>\lambda_{k+1}=\mu_{k+1}=\cdots=\mu_{k+m}>\lambda_{k+m+1}\geq\mu_{k+m+1 }\,.\] Thus \(\lambda=\lambda_{k+1}=\mu_{k+1}\) is an eigenvalue of multiplicity \(m\) for \(L(T-v)\), i.e. \(\operatorname{mul}_{T}(\lambda)-\operatorname{mul}_{T-v}(\lambda)=0\).
This completes the proof.
For the special case of level eigenvalue \(0\), Theorem 12 can be strengthened as follows.
**Theorem 13**.: _Let \(T\) be a rooted tree with \(n>2\) vertices and maximum level \(l_{\max}\). Then the multiplicity of \(0\) as an eigenvalue for \(L(T)\) is precisely \(n-1-l_{\max}\). Moreover, \(L(T)\) always admits only one positive eigenvalue. Furthermore,_
\[\text{mul}_{T}(0)-\text{mul}_{T-v}(0)\in\{0,1\}\]
_holds for all leaves \(v\) of \(T\)._
Proof.: Let \(\mathcal{S}_{j}\) represent the set of those vertices of \(T\) having level \(j\). For every \(0\leq j\leq l_{\max}\), the rows of \(L(T)\) indexed by the elements of \(\mathcal{S}_{j}\) are all identical. Hence the rank of \(L(T)\) is at most
\[n-\sum_{j=0}^{l_{\max}}(|\mathcal{S}_{j}|-1)=1+l_{\max}\,.\]
With this, \(L(T)\) has \(0\) as an eigenvalue with multiplicity at least \(n-1-l_{\max}\). Invoking Theorem 10 finishes the first part of the proof.
Let \(v\) be a leaf of \(T\). If \(v\) is not on the maximum level in \(T\), or if there are at least two vertices (including \(v\)) on the maximum level in \(T\), then the deletion of \(v\) does not affect
the maximum level, i.e. \(l_{\max}(T-v)=l_{\max}(T)\). If \(v\) is the only vertex lying on the maximum level in \(T\), then the deletion of \(v\) decreases the maximum level by exactly \(1\), i.e. \(l_{\max}(T-v)=l_{\max}(T)-1\). Therefore, \(l_{\max}(T)-l_{\max}(T-v)\in\{0,1\}\) holds. Now using the first part of the theorem, we obtain
\[\operatorname{mul}_{T}(0)-\operatorname{mul}_{T-v}(0) =(n-1-l_{\max}(T))-(n-2-l_{\max}(T-v))\] \[=1-(l_{\max}(T)-l_{\max}(T-v))\,,\]
which implies that \(\operatorname{mul}_{T}(0)-\operatorname{mul}_{T-v}(0)\in\{0,1\}\).
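Theorem 13 lends itself to direct numerical verification (our sketch; the multiplicity of \(0\) and the number of positive eigenvalues are read off from the computed spectrum of random rooted trees):

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(5):
    n = int(rng.integers(3, 20))
    levels = [0]
    for _ in range(n - 1):
        levels.append(levels[rng.integers(0, len(levels))] + 1)
    levels = np.array(levels)
    L = np.abs(levels[:, None] - levels[None, :])
    lam = np.linalg.eigvalsh(L)
    print(np.sum(np.isclose(lam, 0, atol=1e-8)) == n - 1 - levels.max(),
          np.sum(lam > 1e-8) == 1)          # True True on every draw
```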
As a consequence of Theorem 13 and the discussion at the beginning of Subsection 2.2, we obtain an identity between the level energy and the level spectral radius.
**Corollary 2**.: _For a rooted tree \(T\), it holds that \(E_{L}(T)=2\rho_{L}(T)\)._
In particular, we have proved a very recent conjecture from [6].
**Theorem 14**.: _Among all rooted trees with \(n\) vertices, the rooted path \(P_{n}\) uniquely maximises the level energy._
Proof.: Simply combine Theorem 6 with Corollary 2.
The following result appears in [6]:
\[E_{L}(T)\leq\sqrt{2n\sum_{1\leq i<j\leq n}l_{ij}^{2}} \tag{3}\]
holds for any rooted tree \(T\) with \(n\) vertices. We provide a new upper bound that is better than (3), as shown in Theorem 15 below.
**Theorem 15**.: _For an \(n\)-vertex rooted tree \(T\) different from a rooted path, it holds that_
\[E_{L}(T)\leq\sqrt{2(n-1)\sum_{1\leq i<j\leq n}l_{ij}^{2}}\,.\]
Proof.: Let \(x_{1},x_{2},\ldots,x_{n}\) be non-negative numbers and
\[X=\frac{1}{n}\sum_{i=1}^{n}x_{i}-\Big{(}\prod_{i=1}^{n}x_{i}\Big{)}^{1/n}\,.\]
It is shown in [10, 19] that
\[nX\leq n\sum_{i=1}^{n}x_{i}-\Big{(}\sum_{i=1}^{n}\sqrt{x_{i}}\Big{)}^{2}\leq n (n-1)X\,.\]
Using the first inequality with \(x_{i}=\lambda_{i}^{2}\), where \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\) denote all the eigenvalues of \(L(T)\), we obtain
\[nX\leq n\sum_{i=1}^{n}\lambda_{i}^{2}-\Big{(}\sum_{i=1}^{n}|\lambda_{i}|\Big{)} ^{2}\,.\]
Equivalently,
\[nX\leq 2n\sum_{1\leq i<j\leq n}l_{ij}^{2}-E_{L}(T)^{2}\,,\] and \[X=\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{2}-\Big{(}\prod_{i=1}^{n}\lambda_{i}^{2}\Big{)}^{1/n}=\frac{2}{n}\sum_{1\leq i<j\leq n}l_{ij}^{2}-|\det(L(T))|^{2/n}\]
by virtue of Proposition 2. Since \(T\) is different from a rooted path, we have \(\det(L(T))=0\). Therefore,
\[E_{L}(T)^{2}\leq 2n\sum_{1\leq i<j\leq n}l_{ij}^{2}-nX=2n\sum_{1\leq i<j\leq n }l_{ij}^{2}-2\sum_{1\leq i<j\leq n}l_{ij}^{2}\,.\]
This completes the proof of the theorem.
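Both the identity \(E_{L}(T)=2\rho_{L}(T)\) of Corollary 2 and the improvement of Theorem 15 over (3) can be observed numerically (our sketch; the root is given two children so that the tree is certainly not a rooted path):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
levels = [0, 1, 1]               # two children at the root: not a rooted path
for _ in range(n - 3):
    levels.append(levels[rng.integers(0, len(levels))] + 1)
levels = np.array(levels)

L = np.abs(levels[:, None] - levels[None, :])
lam = np.linalg.eigvalsh(L)
E = np.abs(lam).sum()
S = (np.triu(L, 1) ** 2).sum()
print(np.isclose(E, 2 * lam.max()))                          # Corollary 2
print(E <= np.sqrt(2 * (n - 1) * S) <= np.sqrt(2 * n * S))   # Theorem 15 vs (3)
```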
## 3. Concluding comments
Recall that for the distance matrix, the spectrum of any tree contains exactly one positive eigenvalue [4, p. 104], and this is also the case for the level matrix of a rooted tree as shown in Theorem 13. The level matrix and the distance matrix coincide only for rooted paths [6] (see also the proof of Lemma 3). On the other hand, \(0\) is always an eigenvalue of \(L(T)\) provided \(T\) is not a rooted path, while this is not the case for the distance matrix. Since the main aim of spectral graph theory is to reveal the properties of graphs that are characterised by the spectrum of a matrix associated with it, we therefore propose some questions and problems, which we think may be interesting.
The problem of characterising graphs having a given number of distinct eigenvalues has received much attention for some matrices associated with graphs. In our current context, the level spectrum of any rooted tree which is not a rooted path contains at least two distinct nonzero eigenvalues. In Theorem 11 we have shown that the rooted star \(S_{n}\) is the only tree having precisely two distinct nonzero level eigenvalues.
_Question 1_.: Can we establish explicit conditions on the number of distinct nonzero level eigenvalues for a rooted tree? Can the rooted trees having exactly \(k\) distinct level eigenvalues be characterised for any \(k>3\)?
As a corollary to [11, Theorem 2.1], we can state the following result with regard to Question 1.
**Corollary 3**.: _Let \(T\) be a rooted tree with \(n>1\) vertices and \(\boldsymbol{x}\) the unit Perron vector of \(L(T)\). For any integer \(2\leq k\leq n\), \(L(T)\) has \(k\) distinct eigenvalues if and only if there exists \(k-1\) pairwise distinct real numbers \(\alpha_{2},\alpha_{3},\ldots,\alpha_{k}\) such that_
\[\rho_{L}(T)>\max_{2\leq j\leq k}\alpha_{j}\quad\text{and}\quad\prod_{2\leq j \leq k}(L(T)-\alpha_{j}I_{n})=\prod_{2\leq j\leq k}(\rho_{L}(T)-\alpha_{j}) \,\boldsymbol{x}\,\boldsymbol{x}^{T}\.\]
_Moreover, \(\rho_{L}(T),\alpha_{2},\alpha_{3},\ldots,\alpha_{k}\) are precisely the \(k\) distinct eigenvalues of \(L(T)\)._
The inverse problem of which real numbers can be realisable as eigenvalues of the level matrix of a rooted tree can also be studied. Since \(L(S_{n})\) has only two nonzero eigenvalues given by \(\pm\sqrt{n-1}\), we see that the square root of any positive integer is a realisable level eigenvalue. In particular, since \(E_{L}(T)=2\rho_{L}(T)\), we have that every even positive integer can be realised as the level energy of some rooted tree. Moreover, we know that the level eigenvalues (thus, the level energy) are all algebraic integers. It is well-known that if an algebraic integer is a rational number, then it must be an ordinary integer. The square root of any nonnegative integer \(m\) is an algebraic integer, but an irrational number unless \(m\) is a perfect square.
_Problem 1_.: Explore the inverse problem of which real numbers can be realisable as eigenvalues of the level matrix of a rooted tree different from a rooted star.
_Question 2_.: Given any rooted tree \(T_{1}\), can we always construct from \(T_{1}\) a new rooted tree \(T_{2}\) whose level spectrum contains that of \(T_{1}\)?
Let \(d\) be a positive integer. By a \(d\)-ary tree, we mean a rooted tree in which every vertex has precisely \(d\) children.
_Question 3_.: Are complete (full) \(d\)-ary trees characterised by their level spectra?
The computation of the characteristic polynomial of a tree is quite an old mathematical problem, and there are some known recursive formulae (e.g., for the adjacency and Laplacian matrices) that deal with it [9, 14, 17].
_Question 4_.: Can we establish a recursive formula that computes the level characteristic polynomial of any rooted tree?
Further research may include finding extremal trees for the level spectral radius among all rooted trees with a prescribed: (1) outdegree sequence, (2) number of vertices and number of leaves, (3) number of vertices and maximum outdegree.
We hope to see more work related to the level matrix in the future.
## Acknowledgments
The author thanks Dr. Bunyamin Sahin (Selcuk University, Turkey) for introducing him to the level matrix. This work is dedicated to Professor Stephan Wagner (Uppsala University, Sweden) on the occasion of his birthday.
|
2305.00364 | $φ$-$(k,n)$-absorbing and $φ$-$(k,n)$-absorbing primary
hyperideals in a krasner $(m,n)$-hyperring | Various expansions of prime hyperideals have been studied in a Krasner
$(m,n)$-hyperring $R$. For instance, a proper hyperideal $Q$ of $R$ is called
weakly $(k,n)$-absorbing primary provided that for $r_1^{kn-k+1} \in R$,
$g(r_1^{kn-k+1}) \in Q-\{0\}$ implies that there are $(k-1)n-k+2$ of the
$r_i^,$s whose $g$-product is in $Q$ $g(r_1^{(k-1)n-k+2}) \in Q$ or a
$g$-product of $(k-1)n-k+2$ of $r_i^,$s ,except $g(r_1^{(k-1)n-k+2})$, is in
${\bf r^{(m,n)}}(Q)$. In this paper, we aim to extend the notions to the
concepts of $\phi$-$(k,n)$-absorbing and $\phi$-$(k,n)$-absorbing primary
hyperideals. Assume that $\phi$ is a function from $ \mathcal{HI}(R)$ to
$\mathcal{HI}(R) \cup \{\varnothing\}$ such that $\mathcal{HI}(R)$ is the set
of hyperideals of $R$ and $k$ is a positive integer. We call a proper
hyperideal $Q$ of $R$ a $\phi$-$(k,n)$-absorbing primary hyperideal if for
$r_1^{kn-k+1} \in R$, $g(r_1^{kn-k+1}) \in Q-\phi(Q)$ implies that there are
$(k-1)n-k+2$ of the $r_i^,$s whose $g$-product is in $Q$ $g(r_1^{(k-1)n-k+2})
\in Q$ or a $g$-product of $(k-1)n-k+2$ of $r_i^,$s ,except
$g(r_1^{(k-1)n-k+2})$, is in ${\bf r^{(m,n)}}(Q)$. Several properties and
characterizations of them are presented. | Mahdi Anbarloei | 2023-04-30T01:44:48Z | http://arxiv.org/abs/2305.00364v1 | (\phi\)-\((k,n)\)-absorbing and \(\phi\)-\((k,n)\)-absorbing primary hyperideals in a Krasner \((m,n)\)-hyperring
###### Abstract.
Various expansions of prime hyperideals have been studied in a Krasner \((m,n)\)-hyperring \(R\). For instance, a proper hyperideal \(Q\) of \(R\) is called weakly \((k,n)\)-absorbing **(primary)** provided that for \(r_{1}^{kn-k+1}\in R\), \(g(r_{1}^{kn-k+1})\in Q-\{0\}\) implies that there are \((k-1)n-k+2\) of the \(r_{i}\)s whose \(g\)-product is in \(Q\)\(\big{(}g(r_{1}^{(k-1)n-k+2})\in Q\) or a \(g\)-product of \((k-1)n-k+2\) of \(r_{i}\)'s,except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(m,n)}(Q)\big{)}\). In this paper, we aim to extend the notions to the concepts of \(\phi\)-\((k,n)\)-absorbing and \(\phi\)-\((k,n)\)-absorbing primary hyperideals. Assume that \(\phi\) is a function from \(\mathcal{H}\mathcal{I}(R)\) to \(\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) such that \(\mathcal{H}\mathcal{I}(R)\) is the set of hyperideals of \(R\) and \(k\) is a positive integer. We call a proper hyperideal \(Q\) of \(R\) a \(\phi\)-\((k,n)\)-absorbing **(primary )** hyperideal if for \(r_{1}^{kn-k+1}\in R\), \(g(r_{1}^{kn-k+1})\in Q-\phi(Q)\) implies that there are \((k-1)n-k+2\) of the \(r_{i}\)s whose \(g\)-product is in \(Q\)\(\big{(}g(r_{1}^{(k-1)n-k+2})\in Q\) or a \(g\)-product of \((k-1)n-k+2\) of \(r_{i}\)'s,except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(m,n)}(Q)\) ). Several properties and characterizations of them are presented.
Key words and phrases: \(\phi\)-\((k,n)\)-absorbing hyperideal, \(\phi\)-\((k,n)\)-absorbing primary hyperideal 2010 Mathematics Subject Classification: 20N20, 16Y99, 20N15, 06E20
## 1. Introduction
Extensions of prime and primary ideals to the context of \(\phi\)-prime and \(\phi\)-primary ideals are studied in [7, 12]. Afterwards, Khaksari in [20] and Badawi et al. in [9] introduced \(\phi\)-2-absorbing and \(\phi\)-2-absorbing primary ideals, respectively. Let \(R\) be a commutative ring and suppose that \(\phi\) is a function from \(\mathcal{I}(R)\) to \(\mathcal{I}(R)\cup\{\varnothing\}\), where \(\mathcal{I}(R)\) is the set of ideals of \(R\). A proper ideal \(I\) of \(R\) is said to be a \(\phi\)-2-absorbing ideal if whenever \(x,y,z\in R\) with \(xyz\in I-\phi(I)\), we have \(xy\in I\) or \(xz\in I\) or \(yz\in I\). Also, a proper ideal \(I\) of \(R\) is called a \(\phi\)-2-absorbing primary ideal if for every \(x,y,z\in R\), \(xyz\in I-\phi(I)\) implies that \(xy\in I\) or \(xz\in\mathbf{r}(I)\) or \(yz\in\mathbf{r}(I)\).
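Since the condition is finite and fully explicit in \(\mathbb{Z}_{n}\), it can be tested by brute force. The sketch below is ours and treats only the \(\phi\)-2-absorbing case with \(\phi(I)=\{0\}\) (the weakly 2-absorbing situation); the function name is our own.

```python
from itertools import product

def is_phi_2_absorbing(n, I, phi_I):
    """Brute-force test of the phi-2-absorbing condition in Z_n."""
    I, phi_I = set(I), set(phi_I)
    for x, y, z in product(range(n), repeat=3):
        if (x * y * z) % n in I - phi_I:
            if not ((x * y) % n in I or (x * z) % n in I or (y * z) % n in I):
                return False
    return True

print(is_phi_2_absorbing(12, {0, 6}, {0}))  # True: (6) is 2-absorbing in Z_12
print(is_phi_2_absorbing(16, {0, 8}, {0}))  # False: 2*2*2 = 8 but 2*2 = 4 is not in (8)
```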
Hyperstructures are algebraic structures equipped with at least one multi-valued operation, called a hyperoperation. A hyperoperation on a nonempty set \(H\) is a mapping from \(H\times H\) to the set of nonempty subsets of \(H\). Hundreds of papers and several books have been written on this topic (for more details see [2, 10, 11, 13, 17, 21, 26, 30, 32, 33, 34]). An \(n\)-ary extension of algebraic structures is the most natural method for a deeper understanding of their fundamental properties. Mirvakili and Davvaz in [28] introduced \((m,n)\)-hyperrings and gave several results in this respect. They defined and described a generalization of the notion of a hypergroup and a generalization of an \(n\)-ary group, which is called an \(n\)-ary hypergroup [14]. Some reviews of \(n\)-ary structures can be found in [22, 23, 24, 25, 31]. One important class of hyperrings, where the addition is a hyperoperation while the multiplication is an ordinary binary operation, is the Krasner hyperring. An extension of the Krasner
hyperrings, which is a subclass of \((m,n)\)-hyperrings, was presented by Mirvakili and Davvaz [27], which is called the Krasner \((m,n)\)-hyperring. Some important hyperideals, namely the Jacobson radical, the nilradical, and \(n\)-ary prime and primary hyperideals, as well as \(n\)-ary multiplicative subsets, of Krasner \((m,n)\)-hyperrings were defined by Ameri and Norouzi in [1]. Afterward, the concept of \((k,n)\)-absorbing (primary) hyperideals was studied by Hila et al. [18]. Norouzi et al. gave a new definition for normal hyperideals in Krasner \((m,n)\)-hyperrings, with respect to the one given in [27], and they showed that these hyperideals correspond to strongly regular relations [29]. The direct limit of a direct system was defined and analysed by Asadi and Ameri in the category of Krasner \((m,n)\)-hyperrings [8]. The notion of \(\delta\)-primary hyperideals in Krasner \((m,n)\)-hyperrings, which unifies the prime and primary hyperideals under one frame, was presented in [4]. Recently, Davvaz et al. introduced new expansion classes, namely weakly \((k,n)\)-absorbing (primary) hyperideals in a Krasner \((m,n)\)-hyperring [16].
In this paper, we introduce and study the notions of \(\phi\)-\((k,n)\)-absorbing and \(\phi\)-\((k,n)\)-absorbing primary hyperideals in a commutative Krasner \((m,n)\)-hyperring. A number of main results are given to explain the general framework of these structures. Among many results in this paper, it is shown (Theorem 3.4) that if \(Q\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\), then \(Q\) is a \(\phi\)-\((s,n)\)-absorbing hyperideal for all \(s\geq k\). Although every \(\phi\)-\((k,n)\)-absorbing hyperideal of a Krasner \((m,n)\)-hyperring is \(\phi\)-\((k,n)\)-absorbing primary, Example 4.3 shows that the converse may not always be true. It is shown (Theorem 4.12) that \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) if and only if \(Q/\phi(Q)\) is a weakly \((k,n)\)-absorbing primary hyperideal of \(R/\phi(Q)\). In Theorem 4.15, we show that if \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) but not a \((k,n)\)-absorbing primary hyperideal, then \(g(Q^{k(n-1)+1})\subseteq\phi(Q)\). As a result of the theorem we conclude that if \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) that is not a \((k,n)\)-absorbing primary hyperideal of \(R\), then \(\mathbf{r^{(m,n)}}(Q)=\mathbf{r^{(m,n)}}(\phi(Q))\).
## 2. Krasner \((m,n)\)-hyperrings
In this section, we summarize the preliminary definitions which are related to Krasner \((m,n)\)-hyperrings.
Let \(A\) be a non-empty set and \(P^{*}(A)\) the set of all the non-empty subsets of \(A\). An \(n\)-ary hyperoperation on \(A\) is a map \(f:A^{n}\longrightarrow P^{*}(A)\) and the couple \((A,f)\) is called an \(n\)-ary hypergroupoid. The notation \(a_{i}^{j}\) will denote the sequence \(a_{i},a_{i+1},...,a_{j}\) for \(j\geq i\) and it is the empty symbol for \(j<i\). If \(G_{1},...,G_{n}\) are non-empty subsets of \(A\), then we define \(f(G_{1}^{n})=f(G_{1},...,G_{n})=\bigcup\{f(a_{1}^{n})\mid a_{i}\in G_{i},1\leq i\leq n\}.\) If \(b_{i+1}=...=b_{j}=b\), we write \(f(a_{1}^{i},b_{i+1}^{j},c_{j+1}^{n})=f(a_{1}^{i},b^{(j-i)},c_{j+1}^{n})\). If \(f\) is an \(n\)-ary hyperoperation, then the \(t\)-ary hyperoperation \(f_{(l)}\) is given by
\[f_{(l)}(a_{1}^{l(n-1)+1})=f\bigg{(}f(...,f(f(a_{1}^{n}),a_{n+1}^{2n-1}),...),a_{(l-1)(n-1)+2}^{l(n-1)+1}\bigg{)},\]
where \(t=l(n-1)+1\).
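For instance, with \(n=3\) and \(l=2\) (so \(t=5\)), the definition unfolds as
\[f_{(2)}(a_{1}^{5})=f\big{(}f(a_{1},a_{2},a_{3}),a_{4},a_{5}\big{)};\]
the innermost \(f\) consumes the first \(n\) entries and each further application appends \(n-1\) new ones.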
**Definition 2.1**.: [27]\((R,f,g)\), or simply \(R\), is defined as a Krasner \((m,n)\)-hyperring if the following statements hold:
(1) \((R,f)\) is a canonical \(m\)-ary hypergroup;
(2) \((R,g)\) is an \(n\)-ary semigroup;
(3) The \(n\)-ary operation \(g\) is distributive with respect to the \(m\)-ary hyperoperation
\(f\), i.e., for every \(a_{1}^{i-1},a_{i+1}^{n},x_{1}^{m}\in R\), and \(1\leq i\leq n\),
\[g\bigg{(}a_{1}^{i-1},f(x_{1}^{m}),a_{i+1}^{n}\bigg{)}=f\bigg{(}g(a_{1}^{i-1},x_{ 1},a_{i+1}^{n}),...,g(a_{1}^{i-1},x_{m},a_{i+1}^{n})\bigg{)};\]
(4) \(0\) is a zero element of the \(n\)-ary operation \(g\), i.e., for each \(a_{1}^{n}\in R\) and \(1\leq i\leq n\), \(g(a_{1}^{i-1},0,a_{i+1}^{n})=0\).
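For example, in the binary case \(m=n=2\), condition (3) reduces to the familiar distributive law
\[g\big{(}a,f(x_{1},x_{2})\big{)}=f\big{(}g(a,x_{1}),g(a,x_{2})\big{)},\]
that is, \(a\cdot(x_{1}+x_{2})=a\cdot x_{1}+a\cdot x_{2}\) with a (possibly multivalued) addition.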
Throughout this paper, \(R\) denotes a commutative Krasner \((m,n)\)-hyperring with the scalar identity \(1\).
A non-empty subset \(T\) of \(R\) is called a subhyperring of \(R\) if \((T,f,g)\) is a Krasner \((m,n)\)-hyperring. The non-empty subset \(I\) of \(R\) is a hyperideal if \((I,f)\) is an \(m\)-ary subhypergroup of \((R,f)\) and \(g(x_{1}^{i-1},I,x_{i+1}^{n})\subseteq I\), for each \(x_{1}^{n}\in R\) and \(1\leq i\leq n\).
**Definition 2.2**.: [1] Let \(I\) be a proper hyperideal of \(R\). \(I\) refers to a prime hyperideal if for hyperideals \(I_{1}^{n}\) of \(R\), \(g(I_{1}^{n})\subseteq I\) implies \(I_{i}\subseteq I\) for some \(1\leq i\leq n\).
Lemma 4.5 in [1] shows that the proper hyperideal \(I\) of \(R\) is prime if and only if for all \(a_{1}^{n}\in R\), \(g(a_{1}^{n})\in I\) implies \(a_{i}\in I\) for some \(1\leq i\leq n\).
**Definition 2.3**.: [1] The radical of the proper hyperideal \(I\) of \(R\), denoted by \(\mathbf{r}^{(m,n)}(I)\), is the intersection of all prime hyperideals of \(R\) containing \(I\). If the set of all prime hyperideals which contain \(I\) is empty, then \(\mathbf{r}^{(m,n)}(I)=R\).
It was shown (Theorem 4.23 in [1]) that if \(a\in\mathbf{r}^{(m,n)}(I)\), then there exists \(s\in\mathbb{N}\) with \(g(a^{(s)},1_{R}^{(n-s)})\in I\) for \(s\leq n\), or \(g_{(l)}(a^{(s)})\in I\) for \(s>n\) with \(s=l(n-1)+1\).
**Definition 2.4**.: [1] A proper hyperideal \(I\) of \(R\) is primary if \(g(a_{1}^{n})\in I\) and \(a_{i}\notin I\) implies \(g(a_{1}^{i-1},1_{R},a_{i+1}^{n})\in\mathbf{r}^{(m,n)}(I)\) for some \(1\leq i\leq n\).
Theorem 4.28 in [1] shows that the radical of a primary hyperideal of \(R\) is prime.
**Definition 2.5**.: [18] Let \(I\) be a proper hyperideal of \(R\). \(I\) refers to an
(1) \((k,n)\)-absorbing hyperideal if for \(r_{1}^{kn-k+1}\in R\), \(g(r_{1}^{kn-k+1})\in I\) implies that there exist \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(I\). In this case, if \(k=1\), then \(I\) is an \(n\)-ary prime hyperideal of \(R\). If \(n=2\) and \(k=1\), then \(I\) is a classic prime hyperideal of \(R\).
(2) \((k,n)\)-absorbing primary hyperideal if for \(r_{1}^{kn-k+1}\in R\), \(g(r_{1}^{kn-k+1})\in I\) implies that \(g(r_{1}^{(k-1)n-k+2})\in I\) or a \(g\)-product of \((k-1)n-k+2\) of the \(r_{i}\)'s, except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(m,n)}(I)\).
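To unpack the notation in the smallest case \(k=n=2\) (so \(kn-k+1=3\) and \((k-1)n-k+2=2\)): condition (1) states that \(g(r_{1},r_{2},r_{3})\in I\) implies
\[g(r_{1},r_{2})\in I\quad\text{or}\quad g(r_{1},r_{3})\in I\quad\text{or}\quad g(r_{2},r_{3})\in I,\]
which is exactly the classical \(2\)-absorbing condition recalled in the introduction, while condition (2) relaxes the latter two conclusions to membership in \(\mathbf{r}^{(m,n)}(I)\).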
## 3. \(\phi\)-\((k,n)\)-absorbing hyperideals
In their paper [16], Davvaz et al. introduced a generalization of \(n\)-ary prime hyperideals in a Krasner \((m,n)\)-hyperring, which they defined as weakly \((k,n)\)-absorbing hyperideals. In this section, we generalize this notion to the context of \(\phi\)-\((k,n)\)-absorbing hyperideals.
**Definition 3.1**.: Assume that \(\mathcal{HI}(R)\) is the set of hyperideals of \(R\) and \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) is a function. Let \(k\) be a positive integer. A proper hyperideal \(Q\) of \(R\) is said to be \(\phi\)-\((k,n)\)-absorbing provided that for \(r_{1}^{kn-k+1}\in R\), \(g(r_{1}^{kn-k+1})\in Q-\phi(Q)\) implies that there are \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(Q\).
**Example 3.2**.: Consider the Krasner \((2,2)\)-hyperring \(R=\{0,1,x\}\) with the hyperaddition and multiplication defined by
\[\begin{array}{|c||c|c|c|}\hline+&0&1&x\\ \hline\hline 0&0&1&x\\ \hline 1&1&R&1\\ \hline x&x&1&\{0,x\}\\ \hline\end{array}\qquad\begin{array}{|c||c|c|c|}\hline\cdot&0&1&x\\ \hline\hline 0&0&0&0\\ \hline 1&0&1&x\\ \hline x&0&x&0\\ \hline\end{array}\]
Assume that \(\phi\) is a function from \(\mathcal{HI}(R)\) to \(\mathcal{HI}(R)\cup\{\varnothing\}\) defined by \(\phi(I)=g(I^{(2)})\) for \(I\in\mathcal{HI}(R)\). Then the hyperideal \(Q=\{0,x\}\) is a \(\phi\)-\((2,2)\)-absorbing hyperideal of \(R\).
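Since \(R\) is finite, the defining condition can be checked exhaustively. The following Python sketch (our own illustration, not part of the source; the multiplication is read off the table above) verifies that \(Q=\{0,x\}\) with \(\phi(Q)=g(Q^{(2)})=\{0\}\) satisfies Definition 3.1 for \(k=n=2\), where \(kn-k+1=3\) and \((k-1)n-k+2=2\):

```python
from itertools import product

# Elements of the Krasner (2,2)-hyperring R of Example 3.2 and its binary
# multiplication g, read off the table above (the hyperaddition is not
# needed for this check).
R = ['0', '1', 'x']
mul = {('0', '0'): '0', ('0', '1'): '0', ('0', 'x'): '0',
       ('1', '0'): '0', ('1', '1'): '1', ('1', 'x'): 'x',
       ('x', '0'): '0', ('x', '1'): 'x', ('x', 'x'): '0'}

Q = {'0', 'x'}
phi_Q = {mul[a, b] for a in Q for b in Q}   # phi(Q) = g(Q^(2)) = {'0'}

# Q is phi-(2,2)-absorbing: whenever g(r1,r2,r3) lies in Q - phi(Q),
# some pair among r1, r2, r3 already has its product in Q.
ok = all(
    mul[mul[r1, r2], r3] not in Q - phi_Q
    or any(mul[a, b] in Q for a, b in ((r1, r2), (r1, r3), (r2, r3)))
    for r1, r2, r3 in product(R, repeat=3)
)
print(ok)  # prints: True
```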
**Theorem 3.3**.: _Let \(\phi_{1},\phi_{2}:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be two functions such that for all \(I\in\mathcal{HI}(R)\), \(\phi_{1}(I)\subseteq\phi_{2}(I)\). If \(Q\) is a \(\phi_{1}\)-\((k,n)\)-absorbing hyperideal of \(R\), then \(Q\) is a \(\phi_{2}\)-\((k,n)\)-absorbing hyperideal._
Proof.: Suppose that \(g(r_{1}^{kn-k+1})\in Q-\phi_{2}(Q)\) for \(r_{1}^{kn-k+1}\in R\). From \(\phi_{1}(Q)\subseteq\phi_{2}(Q)\), it follows that \(g(r_{1}^{kn-k+1})\in Q-\phi_{1}(Q)\). Since \(Q\) is a \(\phi_{1}\)-\((k,n)\)-absorbing hyperideal of \(R\), we conclude that there are \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(Q\), as needed.
**Theorem 3.4**.: _Let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function. If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\), then \(Q\) is a \(\phi\)-\((s,n)\)-absorbing hyperideal for all \(s\geq k\)._
Proof.: We use induction on \(k\) to show that if \(Q\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\), then \(Q\) is \(\phi\)-\((k+1,n)\)-absorbing. Assume that \(Q\) is \(\phi\)-\((2,n)\)-absorbing and \(g(r_{1}^{2n-2},g(r_{2n-1}^{3n-2}))\in Q-\phi(Q)\) for some \(r_{1}^{3n-2}\in R\). Since \(Q\) is \(\phi\)-\((2,n)\)-absorbing, there are \(n\) of the \(r_{i}\)'s, except \(g(r_{2n-1}^{3n-2})\), whose \(g\)-product is in \(Q\), and so there are \(2n-1\) of the \(r_{i}\)'s whose \(g\)-product is in \(Q\). This shows that \(Q\) is \(\phi\)-\((3,n)\)-absorbing. Now assume that \(Q\) is \(\phi\)-\((k,n)\)-absorbing and \(g(g(r_{1}^{2n-2}),r_{2n-1}^{(k+1)n-(k+1)+1})\in Q-\phi(Q)\) for some \(r_{1}^{(k+1)n-(k+1)+1}\in R\). Since \(Q\) is \(\phi\)-\((k,n)\)-absorbing, we conclude that \(g(g(r_{1}^{2(n-1)}),r_{2n-1},\cdots,\widehat{r_{i}},\cdots,r_{(k+1)n-(k+1)+1})\in Q\) for some \(2(n-1)\leq i\leq(k+1)n-(k+1)+1\) or \(g(r_{2n-1}^{(k+1)n-(k+1)+1})\in Q\). The former case shows that \(Q\) is \(\phi\)-\((k+1,n)\)-absorbing. In the latter case, since \(Q\) is a hyperideal, we obtain \(g(r_{1}^{n-1},r_{2n-1}^{(k+1)n-(k+1)+1})\in Q\). Thus \(Q\) is \(\phi\)-\((k+1,n)\)-absorbing.
Recall from [15] that if \((R_{1},f_{1},g_{1})\) and \((R_{2},f_{2},g_{2})\) are two Krasner \((m,n)\)-hyperrings such that \(1_{R_{1}}\) and \(1_{R_{2}}\) are scalar identities of \(R_{1}\) and \(R_{2}\), respectively, then \((R_{1}\times R_{2},f_{1}\times f_{2},g_{1}\times g_{2})\) is a Krasner \((m,n)\)-hyperring where
\(f=(f_{1}\times f_{2})((a_{1},b_{1}),\cdots,(a_{m},b_{m}))=\{(a,b)\ |\ a\in f_{1}(a_{1}^{m}),b\in f_{2}(b_{1}^{m})\}\)
\(g=(g_{1}\times g_{2})((x_{1},y_{1}),\cdots,(x_{n},y_{n}))=(g_{1}(x_{1}^{n}),g_{2}(y_{1}^{n}))\),
for all \(a_{1}^{m},x_{1}^{n}\in R_{1}\) and \(b_{1}^{m},y_{1}^{n}\in R_{2}\).
**Theorem 3.5**.: _Let \((R_{i},f_{i},g_{i})\) be a commutative Krasner \((m,n)\)-hyperring for each \(1\leq i\leq kn-k+1\) and \(\phi_{i}:\mathcal{HI}(R_{i})\longrightarrow\mathcal{HI}(R_{i})\cup\{\varnothing\}\) be a function. Let \(Q_{i}\) be a hyperideal of \(R_{i}\) for each \(1\leq i\leq kn-k+1\) and \(\phi=\phi_{1}\times\cdots\times\phi_{kn-k+1}\). If \(Q=Q_{1}\times\cdots\times Q_{kn-k+1}\) is a \(\phi\)-\((k+1,n)\)-absorbing hyperideal of \(R=R_{1}\times\cdots\times R_{kn-k+1}\), then \(Q_{i}\) is a \(\phi_{i}\)-\((k,n)\)-absorbing hyperideal of \(R_{i}\) and \(Q_{i}\neq R_{i}\) for all \(1\leq i\leq kn-k+1\)._
Proof.: Let \(r_{1}^{kn-k+1}\in R_{i}\) be such that \(g(r_{1}^{kn-k+1})\in Q_{i}-\phi_{i}(Q_{i})\). Suppose by contradiction that \(Q_{i}\) is not a \(\phi_{i}\)-\((k,n)\)-absorbing hyperideal of \(R_{i}\). Define
\(a_{1}=(1_{R_{1}},\cdots,1_{R_{i-1}},r_{1},1_{R_{i+1}},\cdots,1_{R_{kn-k+1}})\),
\(a_{2}=(1_{R_{1}},\cdots,1_{R_{i-1}},r_{2},1_{R_{i+1}},\cdots,1_{R_{kn-k+1}})\),
\(\vdots\)
\(a_{kn-k+1}=(1_{R_{1}},\cdots,1_{R_{i-1}},r_{kn-k+1},1_{R_{i+1}},\cdots,1_{R_{kn-k +1}})\),
\(a_{kn-k+2}=\cdots=a_{(k+1)n-(k+1)}=(1_{R_{1}},\cdots,1_{R_{i-1}},1_{R_{i}},1_{R_{i+1}},\cdots,1_{R_{kn-k+1}})\),
\(a_{(k+1)n-(k+1)+1}=(0,\cdots,0,1_{R_{i}},0,\cdots,0)\).
Hence \(g(a_{1}^{(k+1)n-(k+1)+1})\in Q-\phi(Q)\) but \(g(a_{1}^{kn-k+1})\notin Q\). Since \(Q\) is a \(\phi\)-\((k+1,n)\)-absorbing hyperideal of \(R\), we conclude that one of the \(g\)-products of \(kn-k+1\) of the \(a_{i}\)'s, except \(g(a_{1}^{kn-k+1})\), is in \(Q\). This implies that there exist \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(Q_{i}\), which is a contradiction. Consequently, \(Q_{i}\) is a \(\phi_{i}\)-\((k,n)\)-absorbing hyperideal of \(R_{i}\).
Assume that \((R_{1},f_{1},g_{1})\) and \((R_{2},f_{2},g_{2})\) are two Krasner \((m,n)\)-hyperrings. Recall from [27] that a mapping \(h:R_{1}\longrightarrow R_{2}\) is called a homomorphism if for all \(a_{1}^{m}\in R_{1}\) and \(b_{1}^{n}\in R_{1}\) we have (1) \(h(f_{1}(a_{1},...,a_{m}))=f_{2}(h(a_{1}),...,h(a_{m}))\), (2) \(h(g_{1}(b_{1},...,b_{n}))=g_{2}(h(b_{1}),...,h(b_{n}))\). Moreover, recall from [19] that a function \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) is called a reduction function of \(\mathcal{H}\mathcal{I}(R)\) if \(\phi(P)\subseteq P\) and \(P\subseteq Q\) implies that \(\phi(P)\subseteq\phi(Q)\) for all \(P,Q\in\mathcal{H}\mathcal{I}(R)\). Now, assume that \(R_{1}\) and \(R_{2}\) are two Krasner \((m,n)\)-hyperrings such that \(h:R_{1}\longrightarrow R_{2}\) is a homomorphism. Suppose that \(\phi_{1}\) and \(\phi_{2}\) are two reduction functions of \(\mathcal{H}\mathcal{I}(R_{1})\) and \(\mathcal{H}\mathcal{I}(R_{2})\), respectively. If \(\phi_{1}(h^{-1}(I_{2}))=h^{-1}(\phi_{2}(I_{2}))\) for all \(I_{2}\in\mathcal{H}\mathcal{I}(R_{2})\), then we say \(h\) is a \(\phi_{1}\)-\(\phi_{2}\)-homomorphism. Let \(h\) be a \(\phi_{1}\)-\(\phi_{2}\)-epimorphism from \(R_{1}\) to \(R_{2}\) and let \(I_{1}\) be a hyperideal of \(R_{1}\) with \(Ker(h)\subseteq I_{1}\). It is easy to see that \(\phi_{2}(h(I_{1}))=h(\phi_{1}(I_{1}))\).
**Example 3.6**.: Let \(R_{1}\) and \(R_{2}\) be two Krasner \((m,n)\)-hyperrings and \(\phi_{1}\) and \(\phi_{2}\) be two empty reduction functions of \(\mathcal{H}\mathcal{I}(R_{1})\) and \(\mathcal{H}\mathcal{I}(R_{2})\), respectively. Then every homomorphism \(h\) from \(R_{1}\) to \(R_{2}\) is a \(\phi_{1}\)-\(\phi_{2}\)-homomorphism.
**Theorem 3.7**.: _Let \(h:R_{1}\longrightarrow R_{2}\) be a \(\phi_{1}\)-\(\phi_{2}\)-homomorphism, where \(\phi_{1}\) and \(\phi_{2}\) are two reduction functions of \(\mathcal{H}\mathcal{I}(R_{1})\) and \(\mathcal{H}\mathcal{I}(R_{2})\), respectively. Then:_
1. _If_ \(Q_{2}\) _is a_ \(\phi_{2}\)_-_\((k,n)\)_-absorbing hyperideal of_ \(R_{2}\)_, then_ \(h^{-1}(Q_{2})\) _is a_ \(\phi_{1}\)_-_\((k,n)\)_-absorbing of_ \(R_{1}\)_._
2. _If_ \(h\) _is surjective and_ \(Q_{1}\) _is a_ \(\phi_{1}\)_-_\((k,n)\)_-absorbing hyperideal of_ \(R_{1}\) _with_ \(Ker(h)\subseteq Q_{1}\)_, then_ \(h(Q_{1})\) _is a_ \(\phi_{2}\)_-_\((k,n)\)_-absorbing hyperideal of_ \(R_{2}\)_._
Proof.: (1) Let \(Q_{2}\) be a \(\phi_{2}\)-\((k,n)\)-absorbing hyperideal of \(R_{2}\) and \(g(r_{1}^{kn-k+1})\in h^{-1}(Q_{2})-\phi_{1}(h^{-1}(Q_{2}))\) for some \(r_{1}^{kn-k+1}\in R_{1}\). Then we get \(h(g(r_{1}^{kn-k+1}))=g(h(r_{1}),\cdots,h(r_{kn-k+1}))\in Q_{2}-\phi_{2}(Q_{2})\). Since \(Q_{2}\) is a \(\phi_{2}\)-\((k,n)\)-absorbing hyperideal of \(R_{2}\), we conclude that there are \((k-1)n-k+2\) of the \(h(r_{i})\)'s whose \(g\)-product is in \(Q_{2}\). Then there exist \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(h^{-1}(Q_{2})\). Thus \(h^{-1}(Q_{2})\) is a \(\phi_{1}\)-\((k,n)\)-absorbing hyperideal of \(R_{1}\).
(2) Suppose that \(Q_{1}\) is a \(\phi_{1}\)-\((k,n)\)-absorbing hyperideal of \(R_{1}\) with \(Ker(h)\subseteq Q_{1}\) and \(h\) is surjective. Let \(g(s_{1}^{kn-k+1})\in h(Q_{1})-\phi_{2}(h(Q_{1}))\) for some \(s_{1}^{kn-k+1}\in R_{2}\). Then there exist \(r_{i}\in R_{1}\) for every \(1\leq i\leq kn-k+1\) such that \(h(r_{i})=s_{i}\). Hence we get \(h(g(r_{1}^{kn-k+1}))=g(h(r_{1}),\cdots,h(r_{kn-k+1}))=g(s_{1}^{kn-k+1})\in h(Q_{1})\). Since \(Ker(h)\subseteq Q_{1}\) and \(h\) is a \(\phi_{1}\)-\(\phi_{2}\)-epimorphism, we have \(g(r_{1}^{kn-k+1})\in Q_{1}-\phi_{1}(Q_{1})\). Since \(Q_{1}\) is a \(\phi_{1}\)-\((k,n)\)-absorbing hyperideal of \(R_{1}\), there are \((k-1)n-k+2\) of the \(r_{i}\)'s whose \(g\)-product is in \(Q_{1}\). Now, since \(h\) is a homomorphism, we are done.
Let \(P\) be a hyperideal of \(R\). Then the set \(R/P=\{f(a_{1}^{i-1},P,a_{i+1}^{m})\mid a_{1}^{i-1},a_{i+1}^{m}\in R\}\) with the \(m\)-ary hyperoperation \(f\) and the \(n\)-ary operation \(g\) is the quotient Krasner \((m,n)\)-hyperring of \(R\) by \(P\). Theorem 3.2 in [1] shows that the projection map \(\pi\) from \(R\) to \(R/P\), defined by \(\pi(r)=f(r,P,0^{(m-2)})\), is a homomorphism. Let \(P\) be a hyperideal of \(R\) and \(\phi\) be a reduction function of \(\mathcal{HI}(R)\). Then the function \(\phi_{q}\) from \(\mathcal{HI}(R/P)\) to \(\mathcal{HI}(R/P)\cup\{\varnothing\}\) defined by \(\phi_{q}(I/P)=\phi(I)/P\) is a reduction function. Now, we have the following theorem as a result of Theorem 3.7; it is easily verified, and hence we omit the proof.
**Theorem 3.8**.: _Let \(Q\) and \(P\) be two hyperideals of \(R\) and \(\phi\) be a reduction function of \(\mathcal{HI}(R)\) such that \(P\subseteq\phi(Q)\subseteq Q\). If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\), then \(Q/P\) is a \(\phi_{q}\)-\((k,n)\)-absorbing hyperideal of \(R/P\)._
## 4. \(\phi\)-\((k,n)\)-absorbing primary hyperideals
**Definition 4.1**.: Suppose that \(\mathcal{HI}(R)\) is the set of hyperideals of \(R\) and \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) is a function. Let \(k\) be a positive integer. A proper hyperideal \(Q\) of \(R\) is called \(\phi\)-\((k,n)\)-absorbing primary if \(g(r_{1}^{kn-k+1})\in Q-\phi(Q)\) for \(r_{1}^{kn-k+1}\in R\) implies that \(g(r_{1}^{(k-1)n-k+2})\in Q\) or a \(g\)-product of \((k-1)n-k+2\) of the \(r_{i}\)'s, except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\).
**Example 4.2**.: Every \(\phi\)-\((k,n)\)-absorbing hyperideal of a Krasner \((m,n)\)-hyperring is \(\phi\)-\((k,n)\)-absorbing primary.
The converse may not always be true, as shown in the following example.
**Example 4.3**.: Consider the Krasner \((2,2)\)-hyperring \(R=[0,1]\) with the \(2\)-ary hyperoperation defined as \(a\oplus b=\{max\{a,b\}\}\) if \(a\neq b\) and \(a\oplus b=[0,a]\) if \(a=b\), and whose multiplication is the usual multiplication of real numbers. Suppose that \(\phi\) is a function from \(\mathcal{HI}(R)\) to \(\mathcal{HI}(R)\cup\{\varnothing\}\) defined by \(\phi(I)=\cap_{i=1}^{\infty}g(I^{(i)})\) for \(I\in\mathcal{HI}(R)\). Then the hyperideal \(Q=[0,0.5]\) is a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of \(R\) but it is not \(\phi\)-\((2,2)\)-absorbing.
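The two claims can also be probed numerically. In the sketch below (our own illustration, not part of the source; the sets \(\phi(Q)=\{0\}\) and \(\mathbf{r^{(m,n)}}(Q)=[0,1)\) are computed from the definitions above), the triple \(r_{1}=r_{2}=r_{3}=0.79\) witnesses that \(Q\) is not \(\phi\)-\((2,2)\)-absorbing, while random sampling is consistent with the absorbing primary condition:

```python
import random

def in_Q(a):      return 0.0 <= a <= 0.5   # Q = [0, 0.5]
def in_phi_Q(a):  return a == 0.0          # phi(Q) = intersection of [0, 0.5^i] = {0}
def in_rad_Q(a):  return 0.0 <= a < 1.0    # a^s eventually lands in Q iff a < 1

# Witness: the triple product lies in Q - phi(Q), yet no pair product
# lies in Q, so Q is not phi-(2,2)-absorbing.
r = 0.79
assert in_Q(r * r * r) and not in_phi_Q(r * r * r) and not in_Q(r * r)

# Sampling check of the phi-(2,2)-absorbing primary condition.
for _ in range(100_000):
    r1, r2, r3 = (random.random() for _ in range(3))
    if in_Q(r1 * r2 * r3) and not in_phi_Q(r1 * r2 * r3):
        assert in_Q(r1 * r2) or in_rad_Q(r2 * r3) or in_rad_Q(r1 * r3)
print("checks passed")
```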
The next theorem shows when a \(\phi\)-\((k,n)\)-absorbing primary hyperideal is in fact \((k,n)\)-absorbing primary.
**Theorem 4.4**.: _Assume that \(Q\) is a hyperideal of \(R\) and \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) is a reduction function such that \(\phi(Q)\) is a \((k,n)\)-absorbing primary hyperideal of \(R\). If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(Q\) is a \((k,n)\)-absorbing primary hyperideal of \(R\)._
Proof.: Let \(r_{1}^{kn-k+1}\in R\) be such that \(g(r_{1}^{kn-k+1})\in Q\) and \(g(r_{1}^{(k-1)n-k+2})\notin Q\). Assume that \(g(r_{1}^{kn-k+1})\in\phi(Q)\). Since \(\phi(Q)\) is a \((k,n)\)-absorbing primary hyperideal of \(R\) and \(g(r_{1}^{(k-1)n-k+2})\notin\phi(Q)\), we conclude that a \(g\)-product of \((k-1)n-k+2\) of the \(r_{i}\)'s, except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))\subseteq\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\), as needed. Suppose that \(g(r_{1}^{kn-k+1})\notin\phi(Q)\). Since \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), we are done.
In the following, the relationship between a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) and its radical is considered.
**Theorem 4.5**.: _Let \(Q\) be a hyperideal of \(R\) and \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function such that \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))=\phi(\mathbf{r}^{(\mathbf{m}, \mathbf{n})}(Q))\). If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\)._
Proof.: Let \(r_{1}^{kn-k+1}\in R\) be such that \(g(r_{1}^{kn-k+1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)-\phi(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q))\). Assume that all products of \((k-1)n-k+2\) of the \(r_{i}\)'s except \(g(r_{1}^{(k-1)n-k+2})\) are not in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\). Since \(g(r_{1}^{kn-k+1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\), there exists \(s\in\mathbb{Z}^{+}\) with \(g(g(r_{1}^{kn-k+1})^{(s)},1^{(n-s)})\in Q\), for \(s\leq n\), or \(g_{(l)}(g(r_{1}^{kn-k+1})^{(s)})\in Q\), for \(s>n\), \(s=l(n-1)+1\). In the former case, we get \(g(g(r_{1})^{(s)},g(r_{2})^{(s)},\cdots,g(r_{kn-k+1})^{(s)},1^{(n-s)})\in Q\). If \(g(g(r_{1})^{(s)},g(r_{2})^{(s)},\cdots,g(r_{kn-k+1})^{(s)},1^{(n-s)})\in\phi(Q)\), we obtain \(g(r_{1}^{kn-k+1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))=\phi(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q))\), a contradiction. Since \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(g(g(r_{1})^{(s)},g(r_{2})^{(s)},\cdots,g(r_{(k-1)n-k+2})^{(s)},1^{(n-s)})=g(g(r_{1}^{(k-1)n-k+2})^{(s)},1^{(n-s)})\in Q\), which means \(g(r_{1}^{(k-1)n-k+2})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\). For the other case, we have a similar argument. Consequently, \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) is a \(\phi\)-\((k,n)\)-absorbing hyperideal of \(R\).
**Theorem 4.6**.: _Assume that \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup \{\varnothing\}\) is a function. If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(Q\) is a \(\phi\)-\((s,n)\)-absorbing primary hyperideal for all \(s\geq k\)._
Proof.: Let \(Q\) be a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\). Suppose that \(g(g(r_{1}^{n}),r_{n+1}^{(k+1)n-(k+1)+1})\in Q-\phi(Q)\) for some \(r_{1}^{(k+1)n-(k+1)+1}\in R\). Put \(g(r_{1}^{n})=a_{1}\). Then we conclude that \(g(a_{1},r_{n+1},\cdots,r_{kn-k+1})\in Q\) or a \(g\)-product of \(kn-k+1\) of the \(r_{i}\)'s, except \(g(a_{1},r_{n+1},\cdots,r_{kn-k+1})\), is in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\), as \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\). Since \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) is a hyperideal of \(R\) and \(r_{1}^{n}\in R\), we conclude that \(g(r_{1},r_{n+1},\cdots,r_{(k+1)n-(k+1)+1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) or \(\cdots\) or \(g(r_{n},r_{n+1},\cdots,r_{(k+1)n-(k+1)+1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\), and so \(Q\) is \(\phi\)-\((k+1,n)\)-absorbing primary.
**Theorem 4.7**.: _Let \(\phi_{1},\phi_{2}:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{ I}(R)\cup\{\varnothing\}\) be two functions such that for all \(I\in\mathcal{H}\mathcal{I}(R)\), \(\phi_{1}(I)\subseteq\phi_{2}(I)\). If \(Q\) is a \(\phi_{1}\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(Q\) is a \(\phi_{2}\)-\((k,n)\)-absorbing primary hyperideal._
Proof.: It is proved in a similar way to Theorem 3.3.
**Theorem 4.8**.: _Let \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup \{\varnothing\}\) be a function. If \(Q\) is a \(\phi\)-\((1,n)\)-absorbing primary hyperideal of \(R\), then \(Q\) is a \(\phi\)-\((2,n)\)-absorbing primary hyperideal._
Proof.: Let \(Q\) be a \(\phi\)-\((1,n)\)-absorbing primary hyperideal and \(g(g(r_{1}^{n}),r_{n+1},\cdots,r_{2n-1})\in Q-\phi(Q)\) for some \(r_{1}^{2n-1}\in R\). Then we get \(g(r_{1}^{n})\in Q\) or \(r_{i}\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) for some \(n+1\leq i\leq 2n-1\). In the latter case, by the definition of a hyperideal, we conclude that \(g(r_{1},r_{n+1},\cdots,r_{2n-1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) or \(\cdots\) or \(g(r_{n},r_{n+1},\cdots,r_{2n-1})\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) since \(r_{1}^{n}\in R\). Consequently, \(Q\) is a \(\phi\)-\((2,n)\)-absorbing primary hyperideal of \(R\).
Let \(Q\) be a proper hyperideal of \(R\) and \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) be a function. \(Q\) refers to a strongly \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) if \(g(Q_{1}^{kn-k+1})\subseteq Q-\phi(Q)\) for some hyperideals \(Q_{1}^{kn-k+1}\) of \(R\) implies that \(g(Q_{1}^{(k-1)n-k+2})\subseteq Q\) or a \(g\)-product of \((k-1)n-k+2\) of the \(Q_{i}\)'s, except \(g(Q_{1}^{(k-1)n-k+2})\), is a subset of \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\). In the sequel, we assume that all \(\phi\)-\((k,n)\)-absorbing primary hyperideals of \(R\) are strongly \(\phi\)-\((k,n)\)-absorbing primary hyperideals. Recall from [16] that a proper hyperideal \(Q\) of \(R\) is called weakly \((k,n)\)-absorbing primary if \(0\neq g(r_{1}^{kn-k+1})\in Q\) for \(r_{1}^{kn-k+1}\in R\) implies that \(g(r_{1}^{(k-1)n-k+2})\in Q\) or a \(g\)-product of \((k-1)n-k+2\) of the \(r_{i}\)'s, except \(g(r_{1}^{(k-1)n-k+2})\), is in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\).
**Theorem 4.9**.: _Suppose that \(Q\) is a proper hyperideal of a commutative Krasner \((m,2)\)-hyperring \(R\) and \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) is a function. Then the following are equivalent:_
1. \(Q\) _is a_ \(\phi\)_-_\((2,2)\)_-absorbing primary hyperideal of_ \(R\)_._
2. \(Q/\phi(Q)\) _is a weakly_ \((2,2)\)_-absorbing primary hyperideal of_ \(R/\phi(Q)\)_._
Proof.: \((1)\Longrightarrow(2)\) Let \(Q\) be \(\phi\)-\((2,2)\)-absorbing primary and for \(a_{11}^{1m},a_{21}^{2m},a_{31}^{3m}\in R\), \(\phi(Q)\neq g(f(a_{11}^{1(i-1)},\phi(Q),a_{1(i+1)}^{1m}),f(a_{21}^{2(i-1)},\phi(Q),a_{2(i+1)}^{2m}),\) \(f(a_{31}^{3(i-1)},\phi(Q),a_{3(i+1)}^{3m}))\)
\(=f(g(a_{11}^{31}),\cdots,g(a_{1(i-1)}^{3(i-1)}),\phi(Q),g(a_{1(i+1)}^{3(i+1)}), \cdots,g(a_{1m}^{3m}))\)
\(\in Q/\phi(Q)\).
Then
\[f(g(a_{11}^{31}),\cdots,g(a_{1(i-1)}^{3(i-1)}),0,g(a_{1(i+1)}^{3(i+1)}), \cdots,g(a_{1m}^{3m}))\]
\(=g(f(a_{11}^{1(i-1)},0,a_{1(i+1)}^{1m}),f(a_{21}^{2(i-1)},0,a_{2(i+1)}^{2m}),f( a_{31}^{3(i-1)},0,a_{3(i+1)}^{3m}))\)
\(\in Q-\phi(Q)\).
Since \(Q\) is a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of \(R\), we get
\[g(f(a_{11}^{1(i-1)},0,a_{1(i+1)}^{1m}),f(a_{21}^{2(i-1)},0,a_{2(i+1)}^{2m}))\]
\(=f(g(a_{11}^{21}),\cdots,g(a_{1(i-1)}^{2(i-1)}),0,g(a_{1(i+1)}^{2(i+1)}),\cdots,g(a_{1m}^{2m}))\subseteq Q\)
or
\[g(f(a_{21}^{2(i-1)},0,a_{2(i+1)}^{2m}),f(a_{31}^{3(i-1)},0,a_{3(i+1)}^{3m}))\]
\(=f(g(a_{21}^{31}),\cdots,g(a_{2(i-1)}^{3(i-1)}),0,g(a_{2(i+1)}^{3(i+1)}), \cdots,g(a_{2m}^{3m}))\subseteq\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\)
or
\[g(f(a_{11}^{1(i-1)},0,a_{1(i+1)}^{1m}),f(a_{31}^{3(i-1)},0,a_{3(i+1)}^{3m}))\]
\(=f(g(a_{11}^{31}),\cdots,g(a_{1(i-1)}^{3(i-1)}),0,g(a_{1(i+1)}^{3(i+1)}), \cdots,g(a_{1m}^{3m}))\subseteq\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\).
It implies that
\[f(g(a_{11}^{21}),\cdots,g(a_{1(i-1)}^{2(i-1)}),\phi(Q),g(a_{1(i+1)}^{2(i+1)}), \cdots,g(a_{1m}^{2m}))\]
\(=g(f(a_{11}^{1(i-1)},\phi(Q),a_{1(i+1)}^{1m}),f(a_{21}^{2(i-1)},\phi(Q),a_{2( i+1)}^{2m}))\in Q/\phi(Q)\)
or
\[f(g(a_{21}^{31}),\cdots,g(a_{2(i-1)}^{3(i-1)}),\phi(Q),g(a_{2(i+1)}^{3(i+1)}), \cdots,g(a_{2m}^{3m}))\]
\(=g(f(a_{21}^{2(i-1)},\phi(Q),a_{2(i+1)}^{2m}),f(a_{31}^{3(i-1)},\phi(Q),a_{3( i+1)}^{3m}))\)
\(\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)/\phi(Q)=\mathbf{r}^{(\mathbf{m}, \mathbf{n})}(Q/\phi(Q))\)
or
\[f(g(a_{11}^{31}),\cdots,g(a_{1(i-1)}^{3(i-1)}),\phi(Q),g(a_{1(i+1)}^{3(i+1)}), \cdots,g(a_{1m}^{3m}))\]
\(=g(f(a_{11}^{1(i-1)},\phi(Q),a_{1(i+1)}^{1m}),f(a_{31}^{3(i-1)},\phi(Q),a_{3( i+1)}^{3m}))\)
\(\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)/\phi(Q)=\mathbf{r}^{(\mathbf{m}, \mathbf{n})}(Q/\phi(Q))\).
\((2)\Longrightarrow(1)\) Let \(g(r_{1}^{3})\in Q-\phi(Q)\) for some \(r_{1}^{3}\in R\). Therefore we obtain \(f(g(r_{1}^{3}),\phi(Q),0^{(m-2)})\neq\phi(Q)\). It follows that
\(\phi(Q)\neq g(f(r_{1},\phi(Q),0^{(m-2)}),f(r_{2},\phi(Q),0^{(m-2)}),f(r_{3}, \phi(Q),0^{(m-2)}))\in Q/\phi(Q)\).
By the hypothesis, we get
\(g(f(r_{1},\phi(Q),0^{(m-2)}),f(r_{2},\phi(Q),0^{(m-2)}))=f(g(r_{1}^{2}),\phi(Q),0^{(m-2)})\in Q/\phi(Q)\)
or
\[g(f(r_{2},\phi(Q),0^{(m-2)}),f(r_{3},\phi(Q),0^{(m-2)}))=f(g(r_{2}^{3}),\phi(Q),0^{(m-2)})\in\mathbf{r^{(m,n)}}(Q)/\phi(Q)\]
or
\[g(f(r_{1},\phi(Q),0^{(m-2)}),f(r_{3},\phi(Q),0^{(m-2)}))=f(g(r_{1},r_{3}),\phi(Q),0^{(m-2)})\in\mathbf{r^{(m,n)}}(Q)/\phi(Q).\]
This shows that \(g(r_{1}^{2})\in Q\) or \(g(r_{2}^{3})\in\mathbf{r^{(m,n)}}(Q)\) or \(g(r_{1},r_{3})\in\mathbf{r^{(m,n)}}(Q)\). Consequently, \(Q\) is a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of \(R\).
Suppose that \(I\) is a weakly \((2,2)\)-absorbing primary hyperideal of a commutative Krasner \((m,2)\)-hyperring \(R\). Recall from [16] that \((x,y,z)\) is said to be \((2,2)\)-zero primary of \(I\) for \(x,y,z\in R\), if \(g(x,y,z)=0\), \(g(x,y)\notin I\), \(g(y,z)\notin\mathbf{r^{(m,n)}}(I)\) and \(g(x,z)\notin\mathbf{r^{(m,n)}}(I)\). Now, assume that \(Q\) is a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of a commutative Krasner \((m,2)\)-hyperring \(R\). Then we say \((x,y,z)\) is a \(\phi\)-\((2,2)\) primary of \(Q\) for some \(x,y,z\in R\) if \(g(x,y,z)\in\phi(Q)\), \(g(x,y)\notin Q\), \(g(y,z)\notin\mathbf{r^{(m,n)}}(Q)\) and \(g(x,z)\notin\mathbf{r^{(m,n)}}(Q)\). It is easy to see that a proper hyperideal \(Q\) of \(R\) is \(\phi\)-\((2,2)\)-absorbing primary that is not \((2,2)\)-absorbing primary if and only if \(Q\) has a \(\phi\)-\((2,2)\) primary \((x,y,z)\) for some \(x,y,z\in R\).
**Theorem 4.10**.: _Let \(R\) be a commutative Krasner \((m,2)\)-hyperring and let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function. Let \(Q\) be a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of \(R\) and \(x,y,z\in R\). Then the following are equivalent:_
\((1)\)__\((x,y,z)\) _is a_ \(\phi\)_-_\((2,2)\) _primary of_ \(Q\)_._
\((2)\)__\((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)})\) _is a_ \((2,2)\)_-zero primary of_ \(Q/\phi(Q)\)_._
Proof.: \((1)\Longrightarrow(2)\) Let \((x,y,z)\) be a \(\phi\)-\((2,2)\) primary of \(Q\). This means that \(g(x,y,z)\in\phi(Q)\), \(g(x,y)\notin Q\), \(g(y,z)\notin\mathbf{r^{(m,n)}}(Q)\) and \(g(x,z)\notin\mathbf{r^{(m,n)}}(Q)\). This implies that \(f(g(x,y),\phi(Q),0^{(m-2)})\notin Q/\phi(Q)\), \(f(g(y,z),\phi(Q),0^{(m-2)})\notin\mathbf{r^{(m,n)}}(Q)/\phi(Q)\) and \(f(g(x,z),\phi(Q),0^{(m-2)})\notin\mathbf{r^{(m,n)}}(Q)/\phi(Q)\). By Theorem 4.9, we conclude that \((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)}))\) is a \((2,2)\)-zero primary of \(Q/\phi(Q)\).
\((2)\Longrightarrow(1)\) Assume that \((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)}))\) is a \((2,2)\)-zero primary of \(Q/\phi(Q)\). Thus \(g(x,y,z)\in\phi(Q)\) but \(f(g(x,y),\phi(Q),0^{(m-2)})\notin Q/\phi(Q)\), \(f(g(y,z),\phi(Q),0^{(m-2)})\notin\mathbf{r^{(m,n)}}(Q)/\phi(Q)\) and \(f(g(x,z),\phi(Q),0^{(m-2)})\notin\mathbf{r^{(m,n)}}(Q)/\phi(Q)\). Hence \(g(x,y,z)\in\phi(Q)\), \(g(x,y)\notin Q\), \(g(y,z)\notin\mathbf{r^{(m,n)}}(Q)\) and \(g(x,z)\notin\mathbf{r^{(m,n)}}(Q)\). It implies that \((x,y,z)\) is a \(\phi\)-\((2,2)\) primary of \(Q\).
**Theorem 4.11**.: _Let \(R\) be a commutative Krasner \((m,2)\)-hyperring and let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function. Let \(Q\) be a \(\phi\)-\((2,2)\)-absorbing primary hyperideal of \(R\). If \((x,y,z)\) is a \(\phi\)-\((2,2)\) primary of \(Q\) for some \(x,y,z\in R\), then_
\((1)\)__\(g(x,y,Q),g(y,z,Q),g(x,z,Q)\subseteq\phi(Q)\)_._
\((2)\)__\(g(x,Q^{(2)}),g(y,Q^{(2)}),g(z,Q^{(2)})\subseteq\phi(Q)\)_._
\((3)\)__\(g(Q^{(3)})\subseteq\phi(Q)\)_._
Proof.: \((1)\) Let \((x,y,z)\) be a \(\phi\)-\((2,2)\) primary of a \(\phi\)-\((2,2)\)-absorbing primary hyperideal \(Q\). By Theorem 4.10, \((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)})\) is a \((2,2)\)-zero primary of \(Q/\phi(Q)\) since \((x,y,z)\) is a \(\phi\)-\((2,2)\) primary of \(Q\). Thus
\[f(g(x,y,Q),\phi(Q),0^{(m-2)}) =f(g(y,z,Q),\phi(Q),0^{(m-2)})\] \[=f(g(x,z,Q),\phi(Q),0^{(m-2)})\] \[=\phi(Q)\]
by Theorem 4.9 in [16], which implies \(g(x,y,Q)\), \(g(y,z,Q)\) and \(g(x,z,Q)\) are subsets of \(\phi(Q)\).
\((2)\) Theorem 4.10 shows that \((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)})\) is a \((2,2)\)-zero primary of \(Q/\phi(Q)\). Moreover, Theorem 4.9 shows that \(Q/\phi(Q)\) is
a weakly \((2,2)\)-absorbing primary of \(R/\phi(Q)\). Then \(f(g(x,Q^{(2)}),\phi(Q),0^{(m-2)})=f(g(y,Q^{(2)}),\phi(Q),0^{(m-2)})=f(g(z,Q^{(2) }),\phi(Q),0^{(m-2)})=\phi(Q)\), by Theorem 4.9 of [16]. Consequently, \(g(x,Q^{(2)}),g(y,Q^{(2)}),g(z,Q^{(2)})\) are subsets of \(\phi(Q)\).
(3) Again, \((f(x,\phi(Q),0^{(m-2)}),f(y,\phi(Q),0^{(m-2)}),f(z,\phi(Q),0^{(m-2)})\) is a \((2,2)\)-zero primary of \(Q/\phi(Q)\) and \(Q/\phi(Q)\) is a weakly \((2,2)\)-absorbing primary of \(R/\phi(Q)\) by Theorem 4.10 and Theorem 4.9, respectively, then \(f(g(Q^{(3)}),\phi(Q),0^{(m-2)})=\phi(Q)\) by Theorem 4.10 in [16]. Thus \(g(Q^{(3)})\) is a subset of \(\phi(Q)\).
**Theorem 4.12**.: _Suppose that \(Q\) is a proper hyperideal of a commutative Krasner \((m,n)\)-hyperring \(R\) and \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) is a function. Then the following are equivalent:_
1. \(Q\) _is a_ \(\phi\)_-_\((k,n)\)_-absorbing primary hyperideal of_ \(R\)_._
2. \(Q/\phi(Q)\) _is a weakly_ \((k,n)\)_-absorbing primary hyperideal of_ \(R/\phi(Q)\)_._
Proof.: It can be easily proved in a similar manner to the proof of Theorem 4.9.
Suppose that \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\). Then we say \((r_{1}^{k(n-1)+1})\) is a \(\phi\)-\((k,n)\) primary of \(Q\) for some \(r_{1}^{k(n-1)+1}\in R\) if \(g(r_{1}^{k(n-1)+1})\in\phi(Q)\), \(g(r_{1}^{(k-1)n-k+2})\notin Q\) and a \(g\)-product of \((k-1)n-k+2\) of \(r_{i}\)s, except \(g(r_{1}^{(k-1)n-k+2})\), is not in \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\).
**Theorem 4.13**.: _Let \(R\) be a commutative Krasner \((m,n)\)-hyperring and let \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) be a function. Let \(Q\) be a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) and \(r_{1}^{k(n-1)+1}\in R\). Then the following are equivalent:_
1. \((r_{1}^{k(n-1)+1})\) _is a_ \(\phi\)_-_\((k,n)\) _primary of_ \(Q\)_._
2. \((f(r_{1},\phi(Q),0^{(m-2)}),\cdots,f(r_{k(n-1)+1},\phi(Q),0^{(m-2)})\) _is a_ \((k,n)\)_-zero primary of_ \(Q/\phi(Q)\)_._
Proof.: It is seen to be true in a similar manner to Theorem 4.10.
**Theorem 4.14**.: _Let \(R\) be a commutative Krasner \((m,n)\)-hyperring and let \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) be a function. Let \(Q\) be a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\). If \((r_{1}^{k(n-1)+1})\) is a \(\phi\)-\((k,n)\) primary of \(Q\) for some \(r_{1}^{k(n-1)+1}\in R\), then \(g(r_{1},\cdots,\widehat{r_{i_{1}}},\cdots,\widehat{r_{i_{2}}},\cdots,\widehat{r_{i_{s}}},\cdots,r_{k(n-1)+1},Q^{(s)})\subseteq\phi(Q)\) for every \(i_{1},\cdots,i_{s}\in\{1,\cdots,k(n-1)+1\}\) and \(1\leq s\leq(k-1)n-k+2\)._
Proof.: \((f(r_{1},\phi(Q),0^{(m-2)}),\cdots,f(r_{k(n-1)+1},\phi(Q),0^{(m-2)})\) is a \((k,n)\)-zero primary of \(Q/\phi(Q)\) by Theorem 4.13 and \(Q/\phi(Q)\) is a weakly \((k,n)\)-absorbing primary of \(R/\phi(Q)\) by Theorem 4.12. Then we conclude that
\(f(g(f(r_{1},\phi(Q),0^{(m-2)}),\cdots,f(\widehat{r_{i_{1}}},\phi(Q),0^{(m-2)} ),\cdots,f(\widehat{r_{i_{2}}},\phi(Q),0^{(m-2)}),\cdots,\)
\(f(\widehat{r_{i_{s}}},\phi(Q),0^{(m-2)}),\cdots,f(r_{k(n-1)+1},\phi(Q),0^{(m- 2)}),Q^{(s)}),\phi(Q),0^{(m-2)})=\phi(Q)\) for every \(i_{1},\cdots,i_{s}\in\{1,\cdots,k(n-1)+1\}\) and \(1\leq s\leq(k-1)n-k+2\), by Theorem 4.9 of [16]. Thus, \(g(r_{1},\cdots,\widehat{r_{i_{1}}},\cdots,\widehat{r_{i_{2}}},\cdots,\widehat{ r_{i_{s}}},\cdots,r_{k(n-1)+1},Q^{(s)})\subseteq\phi(Q)\).
**Theorem 4.15**.: _Let \(R\) be a commutative Krasner \((m,n)\)-hyperring and let \(\phi:\mathcal{H}\mathcal{I}(R)\longrightarrow\mathcal{H}\mathcal{I}(R)\cup\{\varnothing\}\) be a function. Let \(Q\) be a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) that is not \((k,n)\)-absorbing primary. Then \(g(Q^{k(n-1)+1})\subseteq\phi(Q)\)._
Proof.: This can be proved, by using Theorem 4.14, in a manner very similar to the proof of Theorem 4.11.
Now, let us give some related corollaries.
**Corollary 4.16**.: Let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function. If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) such that \(g(Q^{k(n-1)+1})\nsubseteq\phi(Q)\), then \(Q\) is a \((k,n)\)-absorbing primary hyperideal of \(R\).
**Corollary 4.17**.: Let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function and let \(Q\) be a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\) that is not a \((k,n)\)-absorbing primary hyperideal of \(R\). Then \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)=\mathbf{r}^{(\mathbf{m},\mathbf{n})}( \phi(Q))\).
Proof.: By Theorem 4.15, we have \(g(Q^{k(n-1)+1})\subseteq\phi(Q)\) as \(Q\) is not a \((k,n)\)-absorbing primary. This means \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\subseteq\mathbf{r}^{(\mathbf{m}, \mathbf{n})}(\phi(Q))\). On the other hand, from \(\phi(Q)\subseteq Q\), it follows that \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))\subseteq\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\). Hence \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)=\mathbf{r}^{(\mathbf{m},\mathbf{n})}( \phi(Q))\).
**Corollary 4.18**.: Let \(\phi:\mathcal{HI}(R)\longrightarrow\mathcal{HI}(R)\cup\{\varnothing\}\) be a function and let \(Q\) be a proper hyperideal of \(R\) such that \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))\) is a \((k,n)\)-absorbing hyperideal of \(R\). Then \(Q\) is a \(\phi\)-\((k+1,n)\)-absorbing primary hyperideal of \(R\) if and only if \(Q\) is a \((k+1,n)\)-absorbing primary hyperideal of \(R\).
Proof.: (\(\Longrightarrow\)) Let \(Q\) be a \(\phi\)-\((k+1,n)\)-absorbing primary hyperideal of \(R\), and suppose that \(Q\) is not a \((k+1,n)\)-absorbing primary hyperideal of \(R\). Then \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)=\mathbf{r}^{(\mathbf{m},\mathbf{n})}(\phi(Q))\) by Corollary 4.17, so \(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q)\) is a \((k,n)\)-absorbing hyperideal of \(R\), which implies that \(Q\) is a \((k+1,n)\)-absorbing primary hyperideal of \(R\) by Theorem 4.9 in [18], a contradiction.
(\(\Longleftarrow\)) It is clear.
**Theorem 4.19**.: _Let \(h:R_{1}\longrightarrow R_{2}\) be a \(\phi_{1}\)-\(\phi_{2}\)-homomorphism, where \(\phi_{1}\) and \(\phi_{2}\) are two reduction functions of \(\mathcal{HI}(R_{1})\) and \(\mathcal{HI}(R_{2})\), respectively. Then:_
1. _If_ \(Q_{2}\) _is a_ \(\phi_{2}\)_-_\((k,n)\)_-absorbing primary hyperideal of_ \(R_{2}\)_, then_ \(h^{-1}(Q_{2})\) _is a_ \(\phi_{1}\)_-_\((k,n)\)_-absorbing primary hyperideal of_ \(R_{1}\)_._
2. _If_ \(h\) _is surjective and_ \(Q_{1}\) _is a_ \(\phi_{1}\)_-_\((k,n)\)_-absorbing primary hyperideal of_ \(R_{1}\) _with_ \(Ker(h)\subseteq Q_{1}\)_, then_ \(h(Q_{1})\) _is a_ \(\phi_{2}\)_-_\((k,n)\)_-absorbing primary hyperideal of_ \(R_{2}\)_._
Proof.: (1) Let \(Q_{2}\) be a \(\phi_{2}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{2}\). Assume that \(r_{1}^{kn-k+1}\in R_{1}\) such that \(g(r_{1}^{kn-k+1})\in h^{-1}(Q_{2})-\phi_{1}(h^{-1}(Q_{2}))\). Then we get \(h(g(r_{1}^{kn-k+1}))=g(h(r_{1}),\cdots,h(r_{kn-k+1}))\in Q_{2}-\phi_{2}(Q_{2})\). Since \(Q_{2}\) is a \(\phi_{2}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{2}\), we obtain either \(g(h(r_{1}),\cdots,h(r_{(k-1)n-k+2}))=h(g(r_{1}^{(k-1)n-k+2}))\in Q_{2}\) which means \(g(r_{1}^{(k-1)n-k+2})\in h^{-1}(Q_{2})\), or
\(g(h(r_{1}),\cdots,\widehat{h(r_{i})},\cdots,h(r_{kn-k+1}))=h(g(r_{1},\cdots, \widehat{r_{i}},\cdots,r_{kn-k+1})))\in\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q_ {2})\) which means \(g(r_{1},\cdots,\widehat{r_{i}},\cdots,r_{kn-k+1}))\in h^{-1}(\mathbf{r}^{( \mathbf{m},\mathbf{n})}(Q_{2}))=\mathbf{r}^{(\mathbf{m},\mathbf{n})}(h^{-1}(Q_ {2}))\) for some \(1\leq i\leq n\). Hence \(h^{-1}(Q_{2})\) is a \(\phi_{1}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{1}\).
(2) Assume that \(h\) is surjective and \(Q_{1}\) is a \(\phi_{1}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{1}\) with \(Ker(h)\subseteq Q_{1}\). Let \(s_{1}^{kn-k+1}\in R_{2}\) be such that \(g(s_{1}^{kn-k+1})\in h(Q_{1})-\phi_{2}(h(Q_{1}))\). Therefore there exist \(r_{1}^{kn-k+1}\in R_{1}\) with \(h(r_{1})=s_{1},\cdots,h(r_{kn-k+1})=s_{kn-k+1}\). Hence we get \(h(g(r_{1}^{kn-k+1}))=g(h(r_{1}),\cdots,h(r_{kn-k+1}))=g(s_{1}^{kn-k+1})\in h(Q_{1})\). Since \(h\) is a \(\phi_{1}\)-\(\phi_{2}\)-epimorphism and \(Ker(h)\subseteq Q_{1}\), we have \(g(r_{1}^{kn-k+1})\in Q_{1}-\phi_{1}(Q_{1})\). Since \(Q_{1}\) is a \(\phi_{1}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{1}\), we conclude that \(g(r_{1}^{(k-1)n-k+2})\in Q_{1}\), which implies
\(h(g(r_{1}^{(k-1)n-k+2}))=g(h(r_{1}),\cdots,h(r_{(k-1)n-k+2}))\)
\(=g(s_{1}^{(k-1)n-k+2}))\in h(Q_{1})\),
or \(g(r_{1},\cdots,\widehat{r_{i}},\cdots,r_{kn-k+1})\in\mathbf{r}^{(\mathbf{m}, \mathbf{n})}(Q_{1})\) implies
\(g(h(r_{1}),\cdots,\widehat{h(r_{i})},\cdots,h(r_{kn-k+1}))=g(s_{1},\cdots,\widehat{ s_{i}},\cdots,s_{kn-k+1})\in h(\mathbf{r}^{(\mathbf{m},\mathbf{n})}(Q_{1}))\subseteq \mathbf{r}^{(\mathbf{m},\mathbf{n})}(h(Q_{1}))\) for some \(1\leq i\leq(k-1)n-k+2\). Consequently, \(h(Q_{1})\) is a \(\phi_{2}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{2}\).
As an immediate consequence of the previous theorem, we get the following result.
**Theorem 4.20**.: _Let \(Q\) and \(P\) be two hyperideals of \(R\) and \(\phi\) be a reduction function of \(\mathcal{H}\mathcal{I}(R)\) such that \(P\subseteq\phi(Q)\subseteq Q\). If \(Q\) is a \(\phi\)-\((k,n)\)-absorbing primary hyperideal of \(R\), then \(Q/P\) is a \(\phi_{q}\)-\((k,n)\)-absorbing primary hyperideal of \(R/P\)._
**Theorem 4.21**.: _Let \((R_{i},f_{i},g_{i})\) be a commutative Krasner \((m,n)\)-hyperring for each \(1\leq i\leq kn-k+1\) and \(\phi_{i}:\mathcal{H}\mathcal{I}(R_{i})\longrightarrow\mathcal{H}\mathcal{I} (R_{i})\cup\{\varnothing\}\) be a function. Let \(Q_{i}\) be a hyperideal of \(R_{i}\) for each \(1\leq i\leq kn-k+1\) and \(\phi=\phi_{1}\times\cdots\times\phi_{kn-k+1}\). If \(Q=Q_{1}\times\cdots\times Q_{kn-k+1}\) is a \(\phi\)-\((k+1,n)\)-absorbing primary hyperideal of \(R=R_{1}\times\cdots\times R_{kn-k+1}\), then \(Q_{i}\) is a \(\phi_{i}\)-\((k,n)\)-absorbing primary hyperideal of \(R_{i}\) and \(Q_{i}\neq R_{i}\) for all \(1\leq i\leq kn-k+1\)._
Proof.: By using an argument similar to that in the proof of Theorem 3.5, one can easily complete the proof.
## 5. Conclusion
In this paper, motivated by the research works on \(\phi\)-\(2\)-absorbing (primary) ideals of commutative rings, we proposed and investigated the notions of \(\phi\)-\((k,n)\)-absorbing and \(\phi\)-\((k,n)\)-absorbing primary hyperideals in a Krasner \((m,n)\)-hyperring. Some of their essential characteristics were analysed. Moreover, the stability of these notions was examined under some hyperring-theoretic constructions. As a new research subject, we suggest the concept of \(\phi\)-\((k,n)\)-absorbing \(\delta\)-primary hyperideals, where \(\delta\) is an expansion function of \(\mathcal{H}\mathcal{I}(R)\).
|
2306.17348 | Visualizing Geophylogenies -- Internal and External Labeling with
Phylogenetic Tree Constraints | A geophylogeny is a phylogenetic tree where each leaf (biological taxon) has
an associated geographic location (site). To clearly visualize a geophylogeny,
the tree is typically represented as a crossing-free drawing next to a map. The
correspondence between the taxa and the sites is either shown with matching
labels on the map (internal labeling) or with leaders that connect each site to
the corresponding leaf of the tree (external labeling). In both cases, a good
order of the leaves is paramount for understanding the association between
sites and taxa. We define several quality measures for internal labeling and
give an efficient algorithm for optimizing them. In contrast, minimizing the
number of leader crossings in an external labeling is NP-hard. We show
nonetheless that optimal solutions can be found in a matter of seconds on
realistic instances using integer linear programming. Finally, we provide
several efficient heuristic algorithms and experimentally show them to be near
optimal on real-world and synthetic instances. | Jonathan Klawitter, Felix Klesen, Joris Y. Scholl, Thomas C. van Dijk, Alexander Zaft | 2023-06-30T00:32:08Z | http://arxiv.org/abs/2306.17348v1 | # Visualizing Geophylogenies - Internal and External Labeling with Phylogenetic Tree Constraints
###### Abstract
A _geophylogeny_ is a phylogenetic tree where each leaf (biological taxon) has an associated geographic location (site). To clearly visualize a geophylogeny, the tree is typically represented as a crossing-free drawing next to a map. The correspondence between the taxa and the sites is either shown with matching labels on the map (internal labeling) or with _leaders_ that connect each site to the corresponding leaf of the tree (external labeling). In both cases, a good order of the leaves is paramount for understanding the association between sites and taxa. We define several quality measures for internal labeling and give an efficient algorithm for optimizing them. In contrast, minimizing the number of leader crossings in an external labeling is NP-hard. On the positive side, we show that crossing-free instances can be solved in polynomial time and give a fixed-parameter tractable (FPT) algorithm. Furthermore, optimal solutions can be found in a matter of seconds on realistic instances using integer linear programming. Finally, we provide several efficient heuristic algorithms and experimentally show them to be near optimal on real-world and synthetic instances.
Footnote †: Jonathan Klawitter was supported by the Beyond Prediction Data Science Research Programme (MBIE grant UOAX1932). Thomas C. van Dijk was partially supported by the DFG grant Di 2161/2-1. A preliminary version of this paper appeared in the proceedings of the 12th International Conference on Geographic Information Science (GIScience 2023) [31]. Implementations of the algorithms and the experiments are available online at github.com/joklawitter/geophylo.
## 1 Introduction
A _phylogeny_ describes the evolutionary history and relationships of a set of taxa such as species, populations, or individual organisms [42]. It is one of the main tasks in phylogenetics to infer a phylogeny for some given data and a particular model. Most often, a phylogeny is modelled and visualized with a _rooted binary phylogenetic tree_\(T\), that is, a rooted binary tree \(T\) where the leaves are bijectively labeled with a set of \(n\) taxa. For example, the phylogenetic tree in Fig. 1(a) shows the evolutionary species tree of the five present-day kiwi (_Apteryx_) species. The tree is conventionally drawn with all edges directed downwards to the leaves and without crossings (_downward planar_). There exist several other models for phylogenies such as the more general phylogenetic networks which can additionally model reticulation events such as horizontal gene transfer and hybridization [25], and unrooted phylogenetic trees, which only model the relatedness of the taxa [42]. Here we only consider rooted binary phylogenetic trees and refer to them simply as phylogenetic trees.
In the field of phylogeography, geographic data is used in addition to the genetic data to improve the inference of the phylogeny. We may thus have spatial data associated with each taxon of a phylogenetic tree such as the distribution range of each species or the sampling site of each voucher specimen used in a phylogenetic analysis. For example, Fig. 1(b) shows the distributions of the kiwi species from Fig. 1(a). We speak of a _geophylogeny_ (or _phylogeographic tree_) if we have a phylogenetic tree \(T\), a map \(R\), and a set \(P\) of features on \(R\) that contains one feature per taxon of \(T\); see Fig. 1(c) for a geophylogeny of the kiwi species. In this paper, we focus on the case where each element \(x\) of \(P\) is a point, called a _site_, on \(R\), and only briefly discuss the cases where \(x\) is a region, or a set of points or regions.
Visualizing Geophylogenies. When visualizing a geophylogeny, we may want to display its tree and its map together in order to show the connections (or the non-connections) between the leaves and the sites. For example, we may want to show that the taxa of a certain subtree are confined to a particular region of the map or that they are widely scattered. In the literature, we mainly find three types of drawings of geophylogenies that fall into two composition categories [22, 27]. In a _side-by-side (juxtaposition)_ drawing, the tree is drawn planar directly next to the map. To show the correspondences between the taxa and their sites, the sites on the maps are either labeled or color coded (as in Fig. 2(a) and Fig. 1(c), respectively), or the sites are connected with _leaders_ to the leaves of the tree (as in Fig. 2(b)). We call this _internal labeling_ and _external labeling_, respectively. There also exist _overlay (superimposition)_ illustrations where the phylogenetic tree is drawn onto the map in 2D or 3D with the leaves positioned at the sites [29, 41, 47]; see Fig. 3. While the
Figure 1: To visualize this geophylogeny of the five present-day kiwi species (Tokoeka/South Island Brown Kiwi – _Apteryx australis_, Rowi/Okarito Brown Kiwi – _A. rowi_, North Island Brown Kiwi – _A. mantelli_, Great Spotted Kiwi – _A. haastii_, Little Spotted Kiwi – _A. owenii_), we combine the phylogenetic tree (a) together with the distribution map (b) into a single figure (c). To this end, we may pick a rotation of the map and a placement of the tree as well as a leaf order that facilitates easy association based on the colors between the leaves and the features on the map. (Phylogeny and map inspired by Weir et al. [45].)
association between the leaves and the sites is obvious in overlay illustrations, Page [36] points out that the tree and, in particular, the tree heights might be hard to interpret.
Drawing a geophyogeny involves various subtasks, such as choosing an orientation for the map, a position for the tree, and the placement of the labels. Several existing tools support drawing geophylogenies [38, 39, 14, 41, 36], but we suspect that in practice many drawings are made "by hand". The tools GenGIS by Parks et al. [38, 39], a tool by Page [36], and the R-package phytols by Revell [41] can generate side-by-side drawings with external labeling. The former two try to minimize leader crossings by testing random leaf orders and by rotating the phylogenetic tree around the map; Revell uses a greedy algorithm to minimize leader crossings. The R package phylogeo by Charlop-Powers and Brady [14] uses internal labeling via colors. Unfortunately, none of the articles describing these tools formally define a quality measure being optimized or studies the underlying combinatorial optimization problem from an algorithmic perspective. In this paper, we introduce a simple combinatorial definition for side-by-side drawings of geophylogenies and propose several quality measures (Section 2).
Labeling Geophylogenies. The problem of finding optimal drawings of geophylogenies can be considered a special case of map labeling. In this area, the term _labeling_ refers to the process of annotating _features_ such as points (sites), lines, or regions in maps, diagrams, and technical drawings with labels [7]. This helps users understand what they see. As with geophylogenies, _internal labeling_ places the labels inside or in the direct vicinity of a feature; _external labeling_ places the labels in the margin next to the map and a label is then connected to the corresponding feature with a _leader_. An _s-leader_ is drawn using a single (straight) line segment as in Figs. 2(b) and 4(b). Alternatively, a _po-leader_ (for: parallel, orthogonal) consists of a horizontal segment at the site and a vertical segment at the leaf, assuming that the labels are above the drawing; see Fig. 4(c). In the literature, we have only encountered s-leaders in geophylogeny drawings, but argue below that po-leaders should be considered as well. In a user study on external labeling, Barth, Gemsa, Niedermann, and Nollenburg [3] showed that s-leaders perform well when users are asked to associate sites with their labels and vice versa, but that po-leaders (and "diagonal, orthogonal" do-leaders)
Figure 2: Side-by-side drawings of geophylogenies from the literature.
are among the aesthetic preferences. We thus consider drawings of geophylogenies that use external labeling with s- and po-leaders.
For internal labeling, a common optimization approach is to place the most labels possible such that none overlap; see Neyer [34] for a survey on this topic. Existing algorithms can be applied to label the sites in a geophylogeny drawing and it is geometrically straight-forward to place the labels for the leaves of \(T\). However, a map reader must also be aided in associating the sites on the map with the leaves at the border based on these labels (and potentially colors). Consider the drawing in Fig. 1(c), which uses color-based internal labeling: the three kiwi species _A. australis_, _A. rowi_, and _A. mantelli_ occur in this order from South to North. When using internal labeling, we would thus prefer, if possible, to have the three species in this order in the tree as well - as opposed to their order in Fig. 1(a).
External labeling styles conventionally forbid crossings between leaders as such crossings could be visually confusing (cf. Fig. 2(b)). Often the total length of leaders is minimized given this constraint. See Bekos, Niedermann, and Nollenburg [7] for an extensive survey on external labeling techniques. External labeling for geophylogenies is closely related to many-to-one external labeling, where a label can be connected to multiple features. In that case one typically seeks a placement that minimizes the number of crossings between leaders, which is an NP-hard problem [33]. The problem remains NP-hard even when leaders can share segments, so-called hyper-leaders [4]. Even though our drawings of geophylogenies have only a one-to-one correspondence, the planarity constraint on the drawing of the tree restricts which leaf orders are possible and it is not always possible to have crossing-free leaders in a geophylogeny. In order to obtain a drawing with low visual complexity, our task is thus to find a leaf order that minimizes the number of leader crossings.
Further Related Work. Since there exists a huge variety of different phylogenetic trees and networks, it is no surprise that a panoply of software to draw phylogenies has been developed [1, 24, 40]. Here we want to mention DensiTree by Bouckaert [10]. It draws multiple phylogenetic trees on top of each other for easy comparison in so-called cloudograms and, relevantly to us, has a feature to extend its drawing with a map for geophylogenies. Furthermore, the theoretical study of drawings of phylogenies is an active research area [2, 9, 12, 16, 17, 23, 30, 32, 44]. In many of these
Figure 3: Overlay drawings of geophylogenies from the literature.
graph drawing problems, the goal is to find a leaf order such that the drawing becomes optimal in a certain sense. This is also the case for _tanglegrams_, where two phylogenetic trees on the same taxa are drawn opposite each other (say, one upward and one downward planar). Pairs of leaves with the same taxon are then connected with straight-line segments and the goal is to minimize the number of crossings [11]. This problem is NP-hard if the leaf orders of both trees are variable, but can be solved efficiently when one side is fixed [19]. The latter problem is called the One-Sided Tanglegram problem and we make use of the efficient algorithm by Fernau et al. [19] later on.
Results and Contribution. We formalize several graph visualization problems in the context of drawing geophylogenies. We propose quality measures for drawings with internal labeling and show that optimal solutions can be computed in quadratic time (Section 3). For external labeling (Section 4), we prove that although crossing minimization of s- and po-leaders is NP-hard in general, it is possible to check in polynomial time if a crossing-free drawing exists. Moreover, we give a fixed-parameter tractable (FPT) algorithm and show that there exist instances with practical relevance that can be solved efficiently by the FPT algorithm. Furthermore, we introduce an integer linear program (ILP) and several heuristics for crossing minimization. We evaluate these solutions on synthetic and real-world examples, and find that the ILP can solve realistic instances optimally in a matter of seconds and that the heuristics, which run in a fraction of a second, are often (near-)optimal as well (Section 5). We close the paper with a discussion and open problems.
## 2 Definitions and Notation
For a phylogenetic tree \(T\), let \(V(T)\) be its vertex set, \(E(T)\) its edge set, \(L(T)\) its leaves, and \(I(T)\) its internal vertices. As the size of an instance, we let \(n=|L(T)|\) be the number of leaves. For an internal vertex \(v\) of \(T\), let \(T(v)\) be the subtree rooted at \(v\) and \(n(v)=|L(T(v))|\). The _clade_ of \(v\) is \(L(T(v))\), i.e. the set of leaves in the subtree rooted at \(v\). A _cherry_ of \(T\) is a subtree of \(T\) on exactly two leaves that have the same parent.
A _map_ \(R\) is an axis-aligned rectangle and a _site_ is a point on \(R\). A _geophylogeny_ \(G\) consists of a phylogenetic tree \(T(G)\), a map \(R(G)\), a set of points \(P(G)\) on \(R(G)\) as well as a 1-to-1 mapping between \(L(T(G))\) and \(P(G)\). Call the elements of \(L(T(G))=\{\ell_{1},\ldots,\ell_{n}\}\) and \(P(G)=\{p_{1},\ldots,p_{n}\}\) so that without loss of generality the mapping is given by the indices, that is, \(\ell_{i}\leftrightarrow p_{i}\), for \(i\in\{1,\ldots,n\}\). For further ease of notation, we only write \(T\), \(R\), and \(P\) instead of \(T(G)\), \(R(G)\), and \(P(G)\), respectively, if \(G\) is clear from context.
We define a _drawing_ \(\Gamma\) of \(G\) as consisting of drawings of \(R\) with \(P\) and \(T\) in the plane with the following properties; see Fig. 4. We assume that \(T\) is always drawn at a fixed position above \(R\) such that the leaves of \(T\) lie at evenly spaced _positions_ on the upper boundary of \(R\). Furthermore, we require that \(T\) is drawn _downward planar_, that is, all edges of \(T\) point downwards from the root towards the leaves, and no two edges of \(T\) cross. (In our examples we draw \(T\) as a "rectangular cladogram", but the exact drawing style is irrelevant given downward planarity.) The points of \(P\) are marked on \(R\) and the drawing uses either internal labeling as in Fig. 4(a) or external labeling with s- or po-leaders as in Figs. 4(b) and 4(c). For drawings with external labeling, we let \(s_{i}\) denote the leader that connects \(\ell_{i}\) and \(p_{i}\). (We ignore the leaf labels as they do not affect the combinatorics: they can simply be added in a post-processing step where \(T\) can be moved upwards to create the necessary space.)

Figure 4: In a drawing of a geophylogeny \(G\), we place \(T\) above \(R\) and use either internal or external labeling to show the mapping between \(P\) and \(L(T)\). Figures (b) and (c) minimize the number of crossings for their leader type. Note the difference in embedding of \(T\) and that not all permutations of leaves are possible.
Since the tree is drawn without crossings and the sites have fixed locations, the only combinatorial freedom in the drawing \(\Gamma\) is the embedding of \(T\), i.e. which child is to the left and which is to the right. Furthermore, since we fixed the relative positions of the map and the leaves, note that there is also no "non-combinatorial" freedom. Hence, an embedding of \(T\) corresponds one-to-one with a left-to-right order of \(L(T)\) and we call this the _leaf order_ \(\pi\) of \(\Gamma\). For example, if a leaf \(\ell_{i}\) is at position \(4\) in \(\Gamma\), then \(\pi(\ell_{i})=4\). Further, let \(\mathrm{x}(v)\) and \(\mathrm{y}(v)\) denote the \(\mathrm{x}\)- and \(\mathrm{y}\)-coordinate, respectively, of a site or leaf \(v\) in \(\Gamma\). In a slight abuse of notation, we also call \(\Gamma\) a drawing of the geophylogeny even when the leaf order has not been fixed yet.
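To ground the algorithmic sketches in later sections, the following is a minimal Python encoding of these definitions. The nested-tuple tree representation, the class and field names, and the coordinate convention (upper map boundary at \(y=0\), sites below it) are our own illustrative choices, not taken from any existing tool.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# A phylogenetic tree is encoded recursively: a leaf is an int i (tying
# leaf ell_i to site p_i), an internal vertex is a pair of subtrees.
# The embedding, and hence the leaf order pi, is the left-to-right
# order of this encoding.
Tree = Union[int, Tuple["Tree", "Tree"]]

@dataclass
class Geophylogeny:
    tree: Tree
    sites: List[Tuple[float, float]]  # sites[i] = p_i; the upper boundary
                                      # of R is at y = 0, so sites have y <= 0
    width: float                      # width of the map R

    def leaf_order(self) -> List[int]:
        """Left-to-right leaf order pi induced by the current embedding."""
        def rec(t):
            return [t] if isinstance(t, int) else rec(t[0]) + rec(t[1])
        return rec(self.tree)

    def leaf_coords(self) -> dict:
        """Evenly spaced leaf positions on the upper boundary of R."""
        order = self.leaf_order()
        step = self.width / len(order)
        return {i: ((k + 0.5) * step, 0.0) for k, i in enumerate(order)}
```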
## 3 Geophylogenies with Internal Labeling
In this section, we consider drawings of geophylogenies with internal labeling. While these drawings trivially have zero crossings (there are no leaders), a good order of the leaves is still crucial, since it can help the reader match the leaves \(L(T)\) with the sites \(P\). It is in general not obvious how to determine which leaf order is best for this purpose; we propose three quality measures and a general class of measures that subsumes them. Any measure in this class can be efficiently optimized by the algorithm described below. In practice one can easily try several quality measures and pick whichever suits the particular drawing; a user study of practical readability could also be fruitful.
Quality Measures. When visually searching for the site \(p_{i}\) corresponding to a leaf \(\ell_{i}\) (or in the opposite direction), it seems beneficial if \(\ell_{i}\) and \(p_{i}\) are close together. Our first quality measure, _Distance_, sums the Euclidean distances of all pairs \((p_{i},\ell_{i})\); see Fig. 5(a).
Since the tree organizes the leaves from left to right along the top of the map, and especially if the distance of pairs \(\ell_{i}\) and \(p_{i}\) is dominated by the vertical component as in Fig. 3(b), it might be better to consider only the horizontal distances, i.e. \(\sum_{i=1}^{n}\lvert\mathrm{x}(p_{i})-\mathrm{x}(\ell_{i})\rvert\), which we call _XOffset_; see Fig. 5(b).
Finally, instead of the geometric offset, _IndexOffset_ considers how much the leaf order permutes the geographic left-to-right order of the sites. Assuming without loss of generality that the sites are in general position and indexed from left to right, we sum how many places each leaf \(\ell_{i}\) is away from leaf position \(i\), i.e. \(\sum_{i=1}^{n}\lvert\pi(\ell_{i})-i\rvert\); see Fig. 5(c).

Figure 5: Orange arrows indicate what the three quality measures for internal labeling consider.
These measures have in common that they sum over some "quality" of the leaves, where the quality of a leaf depends only on its own position and that of the sites (but not the other leaves). We call such quality measures _leaf additive_. Unfortunately, not all sensible quality measures are leaf additive (such as, for example, the number of inversions in \(\pi\)).
Algorithm for Leaf-Additive Quality Measures. Let \(f\colon L(T)\times\{1,\ldots,n\}\to\mathbb{R}\) be a quality measure for placing one particular leaf at a particular position; the location of the sites is constant for a given instance, so we do not consider it an argument of \(f\). This uniquely defines a leaf-additive objective function on drawings by summing over the leaves; assume without loss of generality that we want to minimize this sum.
Now we naturally lift \(f\) to inner vertices of \(T\) by taking the sum over the leaves in the subtree rooted at that vertex, in the best embedding of that subtree. More concretely, note that any drawing places the leaves of any subtree at consecutive positions and they take up a fixed width regardless of the embedding. Let \(F(v,i)\) be the minimum, taken over all embeddings of \(T(v)\) and assuming the leftmost leaf is placed at position \(i\), of the sum of the quality of the leaves of \(T(v)\). Then by definition the optimal objective value for the entire instance is \(F(\rho,1)\), where \(\rho\) is the root of \(T\).
**Theorem 1**: _Let \(G\) be a geophylogeny with \(n\) taxa and let \(f\) be a leaf-additive objective function. A drawing \(\Gamma\) with internal labeling of \(G\) that minimizes (or maximizes) \(f\) can be computed in \(\mathcal{O}(n^{2})\) time._
**Proof:** For an inner vertex \(v\) with children \(x\) and \(y\), we observe the following equality, since the embedding has only two ways of ordering the children and those subtrees are then independent; see also Fig. 6:
\[F(v,i)=\min\{\quad F(x,i)+F(y,i+n(x)),\quad F(y,i)+F(x,i+n(y))\quad\} \tag{1}\]
Using dynamic programming on \(F\), for example in postorder over \(T\), allows us to calculate \(F(\rho,1)\) in \(\mathcal{O}(n^{2})\) time and space, since there are \(2n\) vertices, \(n\) possible leaf positions, and Eq. (1) can be evaluated in constant time by precomputing all \(n(v)\). As is typical, the optimal embedding of \(T\) can be traced back through the dynamic programming table in the same running time. \(\Box\)

Figure 6: Computing IndexOffset for the subtree \(T(v)\) at positions \(1\) and \(2\).
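The dynamic program is short enough to sketch in full. The following Python fragment (our own sketch, building on the nested-tuple encoding above, with 0-based positions) optimizes the XOffset measure; memoization on the hashable subtree stands in for an explicit table, and, as discussed under Interactivity below, individual placements can be forbidden by returning infinity from the per-leaf quality function.

```python
from functools import lru_cache

def optimal_embedding(tree, site_x, leaf_x):
    """Minimize a leaf-additive measure (here XOffset) over all embeddings.

    tree:   nested tuples with integer leaves (hashable, hence memoizable)
    site_x: site_x[i] is the x-coordinate of site p_i
    leaf_x: leaf_x[j] is the x-coordinate of leaf position j (0-based)
    Returns (optimal value, optimally embedded tree).
    """
    @lru_cache(maxsize=None)
    def size(t):
        return 1 if isinstance(t, int) else size(t[0]) + size(t[1])

    def f(leaf, pos):
        # Quality of one leaf at one position; return float('inf') here
        # to forbid a placement (cf. the Interactivity paragraph).
        return abs(site_x[leaf] - leaf_x[pos])

    @lru_cache(maxsize=None)
    def F(t, i):  # Eq. (1): best embedding of T(t) with leftmost leaf at i
        if isinstance(t, int):
            return f(t, i), t
        x, y = t
        vx, ex = F(x, i)
        vy, ey = F(y, i + size(x))     # x left of y
        wy, fy = F(y, i)
        wx, fx = F(x, i + size(y))     # y left of x
        if vx + vy <= wy + wx:
            return vx + vy, (ex, ey)
        return wy + wx, (fy, fx)

    return F(tree, 0)
```

For example, `optimal_embedding(((0, 1), 2), [0.0, 5.0, 2.0], [1.0, 3.0, 5.0])` returns `(4.0, (2, (0, 1)))`, i.e. the embedding whose leaf order \(2,0,1\) best matches the sites' x-coordinates.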
Adaptability. Note that we can still define leaf-additive quality measures when \(P\) contains regions (rather than just points) as in Fig. 1. For example, instead of considering the distance between \(\ell_{i}\) and \(p_{i}\), we could consider the smallest distance between \(\ell_{i}\) and any point in the region \(p_{i}\). Similarly, if each element of \(P\) is a set of sites, we could use the average or median distance to the sites corresponding to \(\ell_{i}\). For such a leaf-additive quality measure \(f\), our algorithm finds an optimal leaf order in \(\mathcal{O}(n^{2}x)\) time, where \(x\) is a bound on the time needed to compute \(f(\ell_{i},j)\) over all \(i,j\in\{1,\ldots,n\}\).
Interactivity. With the above algorithm, we can restrict leaves and subtrees to be in a certain position or a range of positions, simply by marking all other positions as prohibitively expensive in \(F\); the rotation of an inner vertex can also be fixed by considering only the corresponding term of Eq. 1. This can be used if there is a conventional order for some taxa or to ensure that an outgroup taxon (i.e. a taxon included only to root and calibrate the phylogenetic tree) is placed at the leftmost or rightmost position. Furthermore, this enables an interactive editing experience where a designer can inspect the initial optimized drawing and receive re-optimized versions based on their feedback - for example "put the leaves for the sea lions only where there is water on the edge of the map". (Such a constraint is leaf additive.)
## 4 Geophylogenies with External Labeling
In this section, we consider drawings of geophylogenies that use external labeling. Recall that for a drawing \(\Gamma\) of a given geophylogeny \(G\), we want to find a leaf order \(\pi\) such that the number of leader crossings in \(\Gamma\) is minimized. We show the following.
1. The problem is NP-hard in general.
2. A crossing-free solution can be found in polynomial time if it exists.
3. Some instances have a geometric structure that allows us to compute optimal solutions in polynomial time.
4. The problem is fixed-parameter tractable (FPT) in a parameter based on this structure.
5. We give an integer linear program (ILP) to solve the problem.
6. We give several heuristic algorithms for the problem.
All results hold analogously for s- and po-leaders; only the parameter value of the FPT algorithm is different depending on the leader type.
### NP-Hardness
In order to prove that the decision variant of our crossing minimization problem is NP-complete, we use a reduction from the classic Max-Cut problem, which is known to be NP-complete [20]. In an instance of Max-Cut, we are given a graph \(G\) and a positive integer \(c\), and have to decide if there exists a bipartition \((A,B)\) of \(V(G)\) such that at least \(c\) edges have one endpoint in \(A\) and one endpoint in \(B\); see Fig. 7. The proof of the following theorem is inspired by the construction Bekos et al. [4] use to show NP-completeness of crossing-minimal labeling with hyperleaders (with a reduction from Fixed Linear Crossing Number).

Figure 7: This partition of the triangle \(v_{1}v_{2}v_{3}\) cuts the two edges \(\{v_{1},v_{2}\}\), \(\{v_{2},v_{3}\}\).
**Theorem 2**: _Let \(G\) be a geophylogeny and \(k\) a positive integer. For both s- and po-leaders, deciding whether a drawing \(\Gamma\) of \(G\) with external labeling admits a leaf order \(\pi\) that induces at most \(k\) leader crossings is NP-complete._
**Proof:** The problem is in NP since, given \(G\), \(k\), and \(\pi\), we can check in polynomial time whether \(\pi\) yields at most \(k\) crossings. To prove NP-hardness, we use a reduction from Max-Cut as follows. The proof works the same for s- and po-leaders; we use po-leaders in the figures.
For an instance \((G,c)\) of Max-Cut, we construct an instance of our leader crossing minimization problem by devising a geophylogeny \(\langle T,R,P\rangle\), with phylogenetic tree \(T\) and sites \(P\) on a map \(R\), and a constant \(k\); see Fig. 8. Without loss of generality, we assume that each vertex in \(G\) has degree at least \(2\). Let \(V(G)=\{v_{1},\ldots,v_{n}\}\) and \(m=|E(G)|\). We consider each edge \(\{v_{i},v_{j}\}\) with \(i<j\) to be directed as \(v_{i}v_{j}\). Let \(E(G)\) be ordered \(e_{1},\ldots,e_{m}\) lexicographically on the indices \(i\) and \(j\). Throughout the following, let \((A,B)\) be some partition of \(V(G)\) and let \(R\) have height \(4m+4+d\), where we set \(d\) appropriately below.
We first describe the broad structure of the reduction and then give details on the specific gadgets. Each vertex is represented by a _vertex gadget_ in \(T\). For each edge \(v_{i}v_{j}\) in \(E(G)\), there is an _edge gadget_ that connects sites on the map to the vertex gadgets with four leaders. Using _fixing gadgets_ to restrict the possible positions for the vertex gadgets' leaves, we enforce that an edge gadget induces \(2\) crossings if \(v_{i}\) and \(v_{j}\) are both in \(A\) or both in \(B\); otherwise it will induce \(1\) crossing. The number of crossings between leaders of different edge gadgets is in total some constant \(k_{\mathrm{fix}}\). We set \(k=k_{\mathrm{fix}}+2m-c\). Consequently, if \(\langle T,R,P\rangle\) admits a drawing with at most \(k\) leader crossings, then \(G\) admits a cut with at least \(c\) edges, and vice versa.
Vertex Gadgets. Each vertex \(v_{i}\in V(G)\) is represented by two subtrees rooted at vertices \(i\) and \(i^{\prime}\) in \(T\) such that from \(i\) going two edges up and one down we reach \(i^{\prime}\). In \(T(i)\) there is a leaf labeled \(ij\) for each edge \(v_{i}v_{j}\) or \(v_{j}v_{i}\) incident to \(v_{i}\) in \(G\). Furthermore, \(T(i)\) has a planar embedding where the leaves can be in increasing (or decreasing) order based on the order of the corresponding edges in \(E(G)\); see again Fig. 8. \(T(i^{\prime})\) is built analogously, though we label the leaves with \(ij^{\prime}\). In \(T\), the vertex gadgets and fixing gadgets alternate; more precisely, the subtree of a central fixing gadget lies inside the subtree of the vertex gadget for \(v_{1}\), which in turn lies in the subtree of a fixing gadget, and so on. The fixing gadgets ensure that either \(T(i)\) is in the left half of the drawing and \(T(i^{\prime})\) in the right half, or vice versa (explained below). Furthermore, we interpret \(T(i)\) being in the left (right) half as \(v_{i}\) being in \(A\) (resp. \(B\)).
Edge Gadgets. For an edge \(e_{h}=v_{i}v_{j}\in E(G)\), we have four sites \(ij\), \(ji\), \(ij^{\prime}\), \(ji^{\prime}\) on the central axis of the drawing, which correspond to the leaves in \(T(i)\), \(T(j)\), \(T(i^{\prime})\), \(T(j^{\prime})\) with the same label. From bottom to top, we place the sites \(ij\) and \(ji\) at heights \(2h-1\) and \(2h\), respectively; we place the sites \(ij^{\prime}\) and \(ji^{\prime}\) at heights \(4m-2(h-1)-1\) and \(4m-2(h-1)\), respectively; see Fig. 9. Hence, in the bottom half the sites are placed in the order of the edges, while in the top half they are (as pairs) in reverse order. Note that while the order of the sites \(ij\), \(ji\), \(ij^{\prime}\), \(ji^{\prime}\) is fixed, the order of the leaves \(ij\), \(ji\), \(ij^{\prime}\), \(ji^{\prime}\) is not. Yet there are only four possible orders corresponding to whether \(v_{i}\) and \(v_{j}\) are in \(A\) or \(B\). Further note that whether the leaders of the edge gadget cross is therefore not based on the geometry or the type of the leaders but solely on the leaf order. In particular, if \(v_{i}v_{j}\) is cut by \((A,B)\) (as in Fig. 9(a)), then we have the leaf order \(ji^{\prime}\), \(ij\), \(ij^{\prime}\), \(ji\) with \(ji^{\prime}\) and \(ij\) left of the center (up to reversal of the order). Therefore the leaders \(s_{ij}\) and \(s_{ji^{\prime}}\) cross while \(s_{ij^{\prime}}\) and \(s_{ji}\) do not. Hence, there is exactly one crossing. On the other hand, if \(v_{i}v_{j}\) is not cut by \((A,B)\) (as in Fig. 9(b)), then we have the leaf order \(ij\), \(ji\), \(ij^{\prime}\), \(ji^{\prime}\) with \(ij\) and \(ji\) left of the center (up to reversal of the order). Hence we have two crossings as both \(s_{ij}\) and \(s_{ji}\) as well as \(s_{ij^{\prime}}\) and \(s_{ji^{\prime}}\) cross.
Edge Pairs. Let \(v_{i}v_{j},v_{k}v_{l}\in E(G)\). We assume an optimal leaf order in each vertex gadget. Then careful examination of the overall possible leaf orders (and partitions) shows that the leaders in the edge gadgets of \(v_{i}v_{j}\) and \(v_{k}v_{l}\) induce exactly three crossings if \(v_{i}v_{j}\) and \(v_{k}v_{l}\) share a vertex; see again Fig. 8. If the two edges are disjoint, then the leaders induce exactly four crossings; see Fig. 10. Note that changing the partition or the order of vertices does not change the number of crossings; it only changes which pairs among the eight leaders cross. We can thus set \(k_{\text{fix}}\) as three times the number of adjacent edge pairs plus four times the number of disjoint edge pairs.
Figure 8: Sketch of the reduction of the graph from Fig. 7 to a geophylogeny drawing with po-leaders. We simplified \(v_{i}\) to \(i\); each edge gadget is drawn in the respective color; fixing gadgets are represented by triangles in the tree and hatched rectangles on the map.
Fixing Gadgets. To ensure that the two subtrees of each vertex gadget are distributed to the left and to the right, we add a fixing gadget in the center and one after each position allocated to a vertex gadget subtree. If both subtrees of a vertex gadget would be placed on the same side of a fixing gadget, then the fixing gadget would have to be translated and induce too many crossings. More precisely, each fixing gadget is composed of a series of _fixing units_. A fixing unit \(F\) consists of a four-leaf tree with cherries \(\{a,a^{\prime}\}\) and \(\{b,b^{\prime}\}\). Assuming \(F\) is to be centered at position \(x\), we place the sites for \(a\) and \(a^{\prime}\) at x-coordinate \(x\) at heights \(4m+d+1\) and \(4m+d+4\), respectively, and the sites for \(b\) and \(b^{\prime}\) at heights \(4m+d+2\) and \(4m+d+3\), respectively. Thus if \(F\) is centered at \(x\), it can be drawn with \(0\) crossings; see Fig. 11(a). However, if \(F\) is translated by two or more positions, then it induces \(2\) crossings; see Fig. 11(b). Since each vertex of \(G\) has degree at least two, the two trees \(T(i)\) and \(T(i^{\prime})\) of a vertex gadget have at least two leaves each. Hence, \(F\) cannot be translated by just one position. By using \(m-c\) fixing units per fixing gadget, it becomes too costly to move even one fixing gadget, as the instance would immediately have too many crossings. Finally, we set \(d\) such that no leader of an edge gadget can cross a leader of a fixing gadget. In particular, \(d=4\) is sufficient for po-leaders, but we might need a larger \(d\) for s-leaders. \(\Box\)

Figure 11: A fixing gadget consists of a series of four-leaf fixing units and is always placed at its designated position since it would otherwise cause too many crossings.
Figure 10: Leaders for two edge gadgets of disjoint edges induce four crossings.
Figure 9: The edge gadget for \(v_{i}v_{j}\) connects the vertex gadgets for \(v_{i}\) and \(v_{j}\).
### Crossing-Free Instances
We now show how to decide whether a geophylogeny admits a drawing without leader crossings in polynomial time for both s- and po-leaders.
**Proposition 3**: _Let \(G\) be a geophylogeny on \(n\) taxa. For both s- and po-leaders, we can decide if a drawing \(\Gamma\) of \(G\) with external labeling admits a leaf order \(\pi\) that induces zero leader crossings in \(\mathcal{O}(n^{6})\) time._
**Proof:** To find a leaf order \(\pi\) for \(\Gamma\) that induces zero leader crossings, if it exists, we use a dynamic program similar to the one we used for internal labeling in Theorem 1. Let \(i\in\{1,\ldots,n\}\) and let \(v\in V(T)\). Then we store in \(F(v,i)\) up to \(n(v)\) embeddings of \(T(v)\) for which \(T(v)\) can be placed with its leftmost leaf at position \(i\) such that the leaders to \(T(v)\) are pairwise crossing-free. Note that \(F(v,i)\) always stores exactly one embedding when \(v\) is a leaf. For an inner vertex \(v\) with children \(x\) and \(y\), we combine pairs of stored embeddings of \(T(x)\) and \(T(y)\) and test whether they result in a crossing-free embedding of \(T(v)\). For \(\rho\) the root of \(T\), we get a suitable \(\pi\) for each embedding stored in \(F(\rho,1)\). However, since combining embeddings of \(T(x)\) and \(T(y)\) can result in \(\mathcal{O}(n(v)^{2})\) many embeddings of \(T(v)\), we have to be more selective. We now describe when we have to keep multiple embeddings of \(T(v)\), how we select them, and show that at most \(n(v)\) embeddings for \(T(v)\) at position \(i\) suffice. We first describe the details for s-leaders and then for po-leaders.
s-Leaders. Suppose that we can combine an embedding of \(T(x)\) and an embedding of \(T(y)\) where \(T(v)\) is placed with its leftmost leaf at position \(i\) such that the leaders of \(T(v)\) pairwise do not cross. Consider the set \(P(v)\) of sites corresponding to \(L(T(v))\). In particular, let \(p_{k}\) have the lowest y-coordinate among the sites in \(P(v)\). Let \(H(v,i)\) be the convex hull of the sites \(P(v)\) and the leaf positions \(i\) and \(i+n(v)-1\); see Fig. 12. We distinguish three cases:
* **there is no site of \(P\setminus P(v)\) inside \(H(v,i)\):** Then no leader of a site \(p_{o}\in P\setminus P(v)\) has to "leave" \(H(v,i)\). A leader that would need to intersect \(H(v,i)\) would cause a crossing with a leader of \(T(v)\) for any embedding of \(T(v)\). Hence it suffices to store only this one embedding of \(T(v)\) and not consider any further embeddings.
* **there is a site \(p_{o}\in P\setminus P(v)\) trapped in \(H(v,i)\):** More precisely, let \(H(v,i,p_{o})\) be the convex hull of the positions \(i\) and \(i+n(v)-1\) and all sites of \(P(v)\) above \(p_{o}\). We consider \(p_{o}\) _trapped_ if the leader of \(p_{o}\) cannot reach any position left of \(i\) or right of \(i+n(v)-1\) without crossing \(H(v,i,p_{o})\); see Fig. 12(a). Hence we would definitely get a crossing for this embedding of \(T(v)\) later on and thus reject it immediately.
* **there is a site \(p_{o}\in P\setminus P(v)\) inside \(H(v,i)\) but not trapped:** Suppose that the leader \(s_{o}\) of \(p_{o}\) can reach positions \(j_{1},\ldots,j_{k_{o}}\) without intersecting \(H(v,i,p_{o})\). Consider the leader \(s_{k}\) of \(p_{k}\) for the current embedding of \(T(v)\). Note that \(s_{k}\) prevents \(s_{o}\) from reaching either any position to the left of \(i\) or any position to the right of \(i+n(v)-1\); see Fig. 12(b). If this means that \(s_{o}\) cannot reach any position at all, then we reject the embedding. Otherwise, we would want to store this embedding of \(T(v)\) and an embedding of \(T(v)\) where \(s_{o}\) can reach a position on the other side (if it exists). However, we have to consider all other sites in \(H(v,i)\), which we do as follows.

There are at most \(n-n(v)\) other sites in \(H(v,i)\). If any of them is trapped, we reject the embedding. Assume otherwise, namely that for the current embedding, all of them can reach a position outside of \(T(v)\). The leader of \(p_{k}\) then partitions these sites into those that can go out to the left and those that can go out to the right. Hence, among all suitable embeddings of \(T(v)\), these sites can be partitioned in at most \(n(v)\) different ways (since the leader of \(p_{k}\) can go to only that many positions); see Fig. 12(c). For each such partition, we need to store only one embedding. Therefore, before storing a suitable embedding of \(T(v)\), we first check whether we already store an embedding where \(\ell_{k}\) is at the same position.

We can handle each of the \(\mathcal{O}(n(v)^{2})\) embeddings of \(T(v)\) in \(\mathcal{O}(n^{2})\) time. With \(n\) positions and \(\mathcal{O}(n)\) vertices, we get a running time in \(\mathcal{O}(n^{6})\).

Figure 12: Trying to find a leaf order that induces zero crossings of s-leaders, we store or reject embeddings of \(T(v)\) based on other sites in \(H(v,i)\).
po-Leaders. As with s-leaders, we want to store at most \(\mathcal{O}(n(v))\) embeddings of \(T(v)\) for po-leaders. Let \(H^{\prime}(v,i)\) be the rectangle that horizontally spans from positions \(i\) to \(i+n(v)-1\) and vertically from \(p_{k}\) to the top of \(R\). For the current embedding of \(T(v)\) and for any site \(p_{o}\in P\setminus P(v)\) that lies inside \(H^{\prime}(v,i)\), we check whether the horizontal segment of the leader \(s_{o}\) of \(p_{o}\) can leave \(H^{\prime}(v,i)\) without intersecting a vertical segment of a leader of \(T(v)\). If this is not the case for one leader, then we reject the embedding; see Fig. 13(a). Otherwise, the leader \(s_{k}\) of \(p_{k}\) determines for each \(s_{o}\) whether it can leave \(H^{\prime}(v,i)\) on the left or on the right side. Therefore, \(s_{k}\) partitions the sites in \(P\setminus P(v)\) that lie inside \(H^{\prime}(v,i)\) and we need to store only one suitable embedding for each partition; see Fig. 13(b). Note that the horizontal segment of the leader \(s\) of any site of \(P(v)\) that lies outside of \(H^{\prime}(v,i)\) always spans at least up to \(H^{\prime}(v,i)\). Therefore whether \(s\) intersects with another leader later on outside of \(H^{\prime}(v,i)\) is independent of the embedding of \(T(v)\). The running time for po-leaders is the same as for s-leaders and thus also in \(\mathcal{O}(n^{6})\). \(\Box\)

Figure 13: Trying to find a leaf order that induces zero crossings of po-leaders, we store or reject embeddings of \(T(v)\) based on other sites in \(H^{\prime}(v,i)\).
### Efficiently Solvable Instances
We now make some observations about the structure of geophylogeny drawings. This leads to an \(\mathcal{O}(n\log n)\)-time algorithm for crossing minimization on a particular class of "geometry-free" instances, and forms the basis for our FPT algorithm and ILP.
Consider a drawing \(\Gamma\) of a geophylogeny \(G\) with s-leaders and leaf order \(\pi\). Let \(B\) be the line segment between leaf position \(1\) (left) and leaf position \(n\) (right); let the _s-area_ of a site \(p_{i}\) be the triangle spanned by \(p_{i}\) and \(B\). Note that the leader \(s_{i}\) lies within this triangle in any drawing. Now consider two sites \(p_{i}\) and \(p_{j}\) that lie outside each other's s-area. Independently of the embedding of the tree, \(s_{i}\) always passes \(p_{j}\) on the same side: see Fig. 14 where, for example, \(s_{2}\) passes left of \(p_{4}\) in any drawing. As a result, if \(p_{i}\) lies left of \(p_{j}\), then \(s_{i}\) and \(s_{j}\) cross if and only if the leaf \(\ell_{i}\) is positioned right of the leaf \(\ell_{j}\), i.e. \(\pi(\ell_{i})>\pi(\ell_{j})\). The case where \(p_{i}\) is right of \(p_{j}\) is flipped. We call such a pair \(\langle p_{i},p_{j}\rangle\)_geometry free_ since purely the _order_ of the corresponding leaves suffices to recognize if their leaders cross: the precise geometry of the leaf positions is irrelevant.
Conversely, consider a site \(p_{k}\) that lies inside the s-area of \(p_{i}\). Whether the leaders \(s_{i}\) and \(s_{k}\) cross depends on the placement of the leaves \(\ell_{i}\) and \(\ell_{k}\) in a more complicated way than just their relative order: \(s_{i}\) might pass left or right of \(p_{k}\), so the order of \(\ell_{i}\) and \(\ell_{k}\) alone does not determine whether \(s_{i}\) and \(s_{k}\) cross. In this case, we call the pair \(\langle p_{i},p_{k}\rangle\) _undecided_. See Fig. 15, where \(p_{1}\) is undecided with respect to \(p_{2}\).
Analogously, for po-leaders, let the _po-area_ of \(p_{i}\) be the rectangle that spans horizontally from position \(1\) to position \(n\) and vertically from \(p_{i}\) to the top of \(R\); see Fig. 14(b). A pair \(\langle p_{i},p_{j}\rangle\) of sites is _geometry free_ if \(p_{i}\) does not lie in the po-area of \(p_{j}\) or vice versa. A pair \(\langle p_{i},p_{k}\rangle\) of sites is called _undecided_ if \(p_{k}\) lies in the po-area of \(p_{i}\).

Figure 14: In a geometry-free instance the leaf order \(\pi\) fully determines if any two leaders cross.

Figure 15: Drawings of the same geophylogeny with four different leaf orders. Note that \(s_{1}\) and \(s_{3}\) cross if and only if \(\ell_{3}\) is left of \(\ell_{1}\). On the other hand, whether \(s_{1}\) and \(s_{2}\) cross or not depends more specifically on the positions of \(\ell_{1}\) and \(\ell_{2}\).
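These definitions are directly computable. Below is a hedged Python sketch that classifies a pair of sites as geometry free or undecided for either leader type; it assumes the coordinate convention from our earlier sketches (upper map boundary at \(y=0\), sites with \(y\leq 0\)) and that `x_left` and `x_right` are the x-coordinates of the outermost leaf positions.

```python
def sign(o, a, b):
    """Twice the signed area of the triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_s_area(p, q, x_left, x_right):
    """Is site q inside the s-area of site p, i.e. the triangle spanned by
    p and the segment B between the outermost leaf positions (at y = 0)?"""
    bl, br = (x_left, 0.0), (x_right, 0.0)
    d1, d2, d3 = sign(p, bl, q), sign(bl, br, q), sign(br, p, q)
    # q is inside (or on the boundary) iff all signed areas agree in sign
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def in_po_area(p, q):
    """Is site q inside the po-area of site p, i.e. the rectangle from p
    up to the top of the map, spanning all leaf positions?"""
    return q[1] > p[1]

def classify(p, q, x_left, x_right, leaders="s"):
    """Classify the site pair {p, q} as 'undecided' or 'geometry free'."""
    if leaders == "s":
        undecided = in_s_area(p, q, x_left, x_right) or \
                    in_s_area(q, p, x_left, x_right)
    else:
        undecided = in_po_area(p, q) or in_po_area(q, p)
    return "undecided" if undecided else "geometry free"
```

For example, with leaf positions spanning \([0,10]\), `classify((5, -10), (5, -5), 0, 10)` reports the pair undecided, a configuration like that of \(p_{1}\) and \(p_{2}\) in Fig. 15.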
We call a geophylogeny _geometry free_ (for s- or po-leaders) if all pairs of sites are geometry free. While it seems unlikely that a geophylogeny is geometry free for po-leaders in practice, it is not entirely implausible for s-leaders: for example, researchers may take their samples along a coastline, a river, or a valley, in which case the sites may lie relatively close to a line. Orienting the map such that this line is horizontal might then result in a geometry-free instance. Furthermore, unless two sites share an x-coordinate, increasing the vertical distance between the map and the tree eventually results in a geometry-free drawing for s-leaders; however, the required distance might be impractically large.
Next we show that the number of leader crossings in a geometry-free drawing of a geophylogeny can be minimized efficiently using Fernau et al.'s [19] algorithm for the One-Sided Tanglegram problem.
**Proposition 4**: _Given a geometry-free geophylogeny \(G\) on \(n\) taxa, a drawing \(\Gamma\) with the minimum number of leader crossings can be found in \(\mathcal{O}(n\log n)\) time, for both s- and po-leaders._
**Proof:** To use Fernau et al.'s [19] algorithm, we transform \(G\) into a so-called _one-sided tanglegram_ \((T_{\mathrm{fix}},T_{\mathrm{vari}})\) that is equivalent to \(\Gamma\) in terms of crossings; see Fig. 16. We take the sites \(P\) as the leaves of the tree \(T_{\mathrm{fix}}\) with fixed embedding and embed it such that the points are ordered from left to right; the topology of \(T_{\mathrm{fix}}\) is arbitrary. As the tree \(T_{\mathrm{vari}}\) with variable embedding, we take the phylogenetic tree \(T\).
If \(\Gamma\) uses s-leaders, then we assume that the sites of \(G\) are indexed from left to right. If \(\Gamma\) uses po-leaders, we define an (index) order on \(P\) as follows. Let \(p_{i}\) be a site and \(p_{j}\) a site to the right of it; consider the leader that connects \(p_{i}\) to leaf position \(1\) and the leader that connects \(p_{j}\) to leaf position \(n\). If these leaders cross, we require that \(i\) comes after \(j\) in the order; otherwise it must come before \(j\). (It is easily shown that this defines an order; see also Fig. 14(b).)
Let \(\pi^{\prime}\) be a leaf order of \(T_{\mathrm{vari}}\). Further let \(s^{\prime}_{i}\) denote the connection of the leaf corresponding to \(p_{i}\) in \(T_{\mathrm{fix}}\) and the leaf \(\ell_{i}\) in \(T_{\mathrm{vari}}\). Note that two connections \(s^{\prime}_{i}\) and \(s^{\prime}_{j}\) with \(i<j\) cross in the tanglegram if and only if \(\pi^{\prime}(\ell_{i})>\pi^{\prime}(\ell_{j})\).
Since \(G\) is geometry free, the crossings in the tanglegram correspond one-to-one with those in the geophylogeny drawing with leaf order \(\pi^{\prime}\); see again Figs. 14 and 16. Hence, the number of crossings of \((T_{\mathrm{fix}},T_{\mathrm{vari}})\) can be minimized in \(\mathcal{O}(n\log n)\) time using an algorithm of Fernau et al. [19]. The resulting leaf order for \(T_{\mathrm{vari}}\) then also minimizes the number of leader crossings in \(\Gamma\). \(\Box\)

Figure 16: A geometry-free geophylogeny and a one-sided tanglegram \((T_{\mathrm{fix}},T_{\mathrm{vari}})\) that have the same combinatorics (in terms of leader crossings) as the two geometry-free instances in Fig. 14.
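As a sanity check for geometry-free instances, note that two leaders cross exactly when the leaf order inverts the site order, so the number of crossings of a _fixed_ leaf order is simply its number of inversions. A minimal merge-sort-based sketch (ours, not Fernau et al.'s minimization algorithm) counts them in \(\mathcal{O}(n\log n)\) time:

```python
def count_crossings(order):
    """Leader crossings of a geometry-free drawing whose leaf order, read
    left to right, visits the sites in `order` (sites indexed left to
    right). Returns (number of inversions, sorted list)."""
    if len(order) <= 1:
        return 0, list(order)
    mid = len(order) // 2
    cl, left = count_crossings(order[:mid])
    cr, right = count_crossings(order[mid:])
    merged, inv, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i   # right[j] is inverted with remaining left
    merged += left[i:] + right[j:]
    return cl + cr + inv, merged
```

For instance, `count_crossings([2, 0, 1])[0]` is \(2\), since leader \(s_{2}\) crosses both \(s_{0}\) and \(s_{1}\).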
### FPT Algorithm
In practice, most geophylogenies are not geometry free, yet some drawings with s-leaders might have only a few sites inside the s-area of other sites. Capturing this with a parameter, we can develop an FPT algorithm using the following idea. Suppose we use s-leaders and there is exactly one undecided pair \(\langle p_{i},p_{j}\rangle\), i.e. \(p_{j}\) lies inside the s-area of \(p_{i}\); see Fig. 17(a). For a particular leaf order, we say the leader \(s_{i}\) _lies left (right)_ of \(p_{j}\) if a horizontal ray that starts at \(p_{j}\) and goes to the left (right) intersects \(s_{i}\); conversely, we say that \(p_{j}\) _lies right (left)_ of \(s_{i}\).
Suppose now that we restrict \(s_{i}\) to lie left of \(p_{j}\) (as \(s_{2}\) lies left of \(p_{3}\) in Fig. 17(b)). This restricts the possible positions for \(\ell_{i}\) and effectively yields a _restricted_ geometry-free geophylogeny. The idea for our FPT algorithm is thus to use the algorithm from Proposition 4 on restricted geometry-free instances obtained by assuming that \(s_{i}\) lies to the left or to the right of \(p_{j}\); see again Fig. 17. In particular, we extend Fernau et al.'s dynamic programming algorithm [19] to handle _restricted_ one-sided tanglegrams at a cost in runtime.

Figure 17: We transform the non-geometry-free geophylogeny \(G\) into a restricted geometry-free geophylogeny by deciding whether \(p_{3}\) lies left or right of \(s_{2}\).
**Lemma 1**: _The number of connection crossings in a restricted one-sided tanglegram \(\mathcal{T}\) on \(n\) leaves can be minimized in \(\mathcal{O}(n^{3})\) time._
**Proof:** Let \(\mathcal{T}=(T_{\mathrm{fix}},T_{\mathrm{vari}})\); we write \(T\) for \(T_{\mathrm{vari}}\). Let \(x\) and \(y\) be the children of a vertex \(v\) of \(T\). Fernau et al.'s algorithm would compute the number of crossings \(\mathrm{cr}(x,y)\) and \(\mathrm{cr}(y,x)\) between the connections of \(T(x)\) and the connections of \(T(y)\) for when \(x\) is the left or right child of \(v\), respectively, in \(\mathcal{O}(n(x)+n(y))\) time. For an unrestricted one-sided tanglegram, this can be done independently of the positions of \(T(x)\) and \(T(y)\). For \(\mathcal{T}\), however, this would not take into account the forbidden positions of leaves. Hence, as in our algorithm from Theorem 1, we add the position of the leftmost leaf of \(T(v)\) as an additional parameter in the recursion. This adds a factor of \(n\) to the running time and thus, forgoing Fernau et al.'s data structures, results in a total running time in \(\mathcal{O}(n^{3})\). \(\Box\)
Before describing an FPT algorithm based on restricted geometry-free geophylogenies, let us consider the example from Fig. 15 again. There the drawing \(\Gamma\) has three sites \(p_{1}\), \(p_{2}\), \(p_{3}\) where \(p_{2}\) lies in the s-area of both \(p_{1}\) and \(p_{3}\). We can get four restricted geometry-free geophylogenies by requesting that \(p_{2}\) lies to the left or to the right of \(s_{1}\) and of \(s_{3}\). Here one of the instances, \(G^{\prime}\), stands out, namely where \(p_{2}\) lies to the left of \(s_{1}\) and to the right of \(s_{3}\); see Fig. 18(a). In the restricted one-sided tanglegram \(\mathcal{T}^{\prime}\) corresponding to \(G^{\prime}\), we would want \(p_{2}\) left of \(p_{1}\) and right of \(p_{3}\). This stands in conflict with \(p_{1}\) being left of \(p_{3}\) based on their indices. We thus say \(p_{1}\), \(p_{2}\), and \(p_{3}\) form a _conflicting triple_ \(\langle p_{1},p_{2},p_{3}\rangle\), which we resolve as follows. Note that \(s_{1}\) and \(s_{3}\) cross for any valid leaf order for \(G^{\prime}\). We thus use the order \(p_{3}\), \(p_{2}\), \(p_{1}\) for \(\mathcal{T}^{\prime}\) (see Fig. 18(b)) and, since \(\mathcal{T}^{\prime}\) does not contain the crossing of \(s^{\prime}_{1}\) and \(s^{\prime}_{3}\), we add one extra crossing to the computed solution. A conflicting triple for drawings with po-leaders is defined analogously.

Figure 18: A conflicting triple in a restricted geometry-free instance requires a different order in the corresponding restricted one-sided tanglegram and storing the “lost” crossing.
**Theorem 5**: _Given a geophylogeny \(G\) on \(n\) taxa with \(k\) undecided pairs of sites, a drawing \(\Gamma\) of \(G\) with the minimum number of crossings can be computed in \(\mathcal{O}(2^{k}\cdot(k^{2}+n^{3}))\) time, for both s- and po-leaders._
**Proof:** Our FPT algorithm converts \(G\) into up to \(2^{k}\) restricted geometry-free instances, solves the corresponding restricted one-sided tanglegrams with Lemma 1, and then picks the leaf order that results in the fewest leader crossings for \(\Gamma\). To this end, for each undecided pair \(\langle p_{i},p_{j}\rangle\), the algorithm tries routing \(s_{i}\) either to the left of or to the right of \(p_{j}\). Since there are \(k\) such pairs, there are \(2^{k}\) different combinations. However, for some combinations a drawing might be over-restricted and no solution exists.
To keep track of all possible combinations, we go through all words over the alphabet \(\{0,1\}\) of length \(k\). Suppose that we consider one such word \(\omega\). Let the \(k\) pairs be in some arbitrary order. For the \(m\)-th pair \(\langle p_{i},p_{j}\rangle\), we set \(s_{i}\) to be left of \(p_{j}\) if \(\omega[m]=0\) and to be right of \(p_{j}\) if \(\omega[m]=1\). Below we show how to construct the restricted geometry-free instance \(G_{\omega}\) and the corresponding restricted one-sided tanglegram \(\mathcal{T}_{\omega}\) in \(\mathcal{O}(k^{2}+n^{2})\) time. Since the number of crossings in the restricted geometry-free drawing can then be minimized in \(\mathcal{O}(n^{3})\) time with Lemma 1, the claim on the running time follows.
In order to construct \(G_{\omega}\) efficiently, we keep track of the positions where a leaf \(\ell_{i}\), for \(i\in\{1,\ldots,n\}\), can be placed with an interval \([a_{i},b_{i}]\); at the start we have \(a_{i}=1\) and \(b_{i}=n\). Suppose that when going through the \(k\) pairs and \(\omega\), we get that \(s_{i}\) becomes restricted by, say, having to be left of a site \(p_{j}\). Then we compute the rightmost position where \(\ell_{i}\) could be placed and update \(b_{i}\) accordingly. Both can be done in constant time. If at any moment \(a_{i}>b_{i}\), then the drawing for \(\omega\) is over-restricted and there is no viable leaf order. We then continue with the next word. Otherwise, after all \(k\) pairs, we have restricted \(G\) to \(G_{\omega}\) in \(\mathcal{O}(k)\) time.
Next, we explain how to find an order of \(P\) for \(\mathcal{T}_{\omega}\) that corresponds to \(G_{\omega}\). In particular, we have to show that resolving all conflicting triples as described above in fact yields an order of \(P\). To this end, let \(K\) be the complete graph with vertex set \(P\). (We assume again the same order on the sites as in Proposition 4.) For any two sites \(p_{i},p_{j}\in P\) with \(i<j\), we orient \(\{p_{i},p_{j}\}\) as \((p_{j},p_{i})\) if \(\langle p_{i},p_{j}\rangle\) is an undecided pair and \(s_{i}\) is right of \(p_{j}\) in \(G_{\omega}\); otherwise we orient it as \((p_{i},p_{j})\). We then check whether any pair of undecided pairs forms a conflicting triple. For any conflicting triple \(\langle p_{i},p_{j},p_{l}\rangle\) that we find, we reorient the edge between \(p_{i}\) and \(p_{l}\) to \((p_{l},p_{i})\). We claim that \(K\) is acyclic (and prove it below). Therefore we can use a topological order of \(K\) as the order for \(\mathcal{T}_{\omega}\). For \(i\in\{1,\ldots,n\}\), we set the dynamic programming values for leaf \(\ell_{i}\) at all positions in \(\mathcal{T}_{\omega}\) outside of \([a_{i},b_{i}]\) to infinity. We can find all conflicting triples in \(\mathcal{O}(k^{2})\) time, construct and orient \(K\) in \(\mathcal{O}(n^{2})\) time, and initialize \(\mathcal{T}_{\omega}\) in \(\mathcal{O}(n^{2})\) time.
Lastly, we show that \(K\) is indeed acyclic after resolving all conflicting triples. Suppose to the contrary that there is a directed cycle \(C^{\prime}\) in \(K\). Since the underlying graph of \(K\) is the complete graph, there is then also a directed triangle \(C\) in \(K\). To arrive at a contradiction, we show that \(C\) cannot have \(0\), \(1\), \(2\), or \(3\) reoriented edges. Let \(C\) be on \(p_{i}\), \(p_{j}\), \(p_{l}\) with edges \((p_{i},p_{j})\), \((p_{j},p_{l})\), and \((p_{l},p_{i})\).
\(C\) **contains 0 reoriented edges:** Since \(p_{i}\), \(p_{j}\), and \(p_{l}\) do not form a conflicting triple, an easy geometric case distinction shows that this is not geometrically realizable. For example, if, say, \(p_{i}\) is lower than \(p_{j}\) and \(p_{j}\) is lower than \(p_{l}\), then \(p_{j}\) is right of \(s_{i}\) and \(p_{l}\) is right of \(s_{j}\). However, then \(p_{l}\) cannot be left of \(s_{i}\) and \(p_{i}\) cannot be right of \(s_{l}\); see Fig. 19(a).
\(C\) **contains 1 reoriented edge:** Suppose that \((p_{i},p_{j})\) has been reoriented as part of a conflicting triple with \(p_{m}\). Note that \(p_{m}\) also lies in the s-area (po-area) of \(p_{l}\), since either \(p_{i}\) or \(p_{j}\) lies in the s-area (po-area) of \(p_{l}\) or \(p_{l}\) lies right of \(p_{j}\), left of \(p_{i}\), and below where \(s_{i}\) and \(s_{j}\) definitely cross. However, then based on the orientation of \((p_{j},p_{l})\) and \((p_{l},p_{i})\), we get that \(p_{l}\) must form a conflicting triple with \(p_{m}\) and either \(p_{i}\) or \(p_{j}\); see Fig. 19(b). This stands in contradiction to \(C\) containing only one reoriented edge.
\(C\) **contains 2 reoriented edges:** Suppose that \((p_{i},p_{j})\) and \((p_{j},p_{l})\) have been reoriented. We then know that \(s_{i}\) and \(s_{j}\) as well as \(s_{j}\) and \(s_{l}\) definitely cross. Therefore, \(p_{i}\) lies right of \(s_{j}\) (or the line through \(s_{j}\)) and \(p_{j}\) lies left of \(s_{i}\) (or the line through \(s_{i}\)). Analogously, \(p_{j}\) lies right of \(s_{l}\) (or the line through \(s_{l}\)), and \(p_{l}\) lies left of \(s_{j}\) (or the line through \(s_{j}\)). Since \((p_{l},p_{i})\) has not been reoriented, we know that \(s_{i}\) and \(s_{l}\) do not necessarily need to cross. For \(s_{l}\) to cross \(s_{j}\) but not \(s_{i}\), we get that \(p_{l}\) can only lie right of \(s_{i}\); see Fig. 19(c). This stands in contradiction to the orientation of \((p_{l},p_{i})\).
\(C\) **contains 3 reoriented edges:** Since all three edges of \(C\) have been reoriented, this is geometrically equivalent to the first case and thus not realizable.
This concludes the proof that there is no directed triangle in \(K\) and hence \(K\) is acyclic. Our FPT algorithm can thus process each of the \(2^{k}\) words in \(\mathcal{O}(k^{2}+n^{2})\) time (plus the \(\mathcal{O}(n^{3})\) time per word to solve the resulting tanglegram with Lemma 1). \(\Box\)

Figure 19: Cases for the proof of Theorem 5 where the directed triangle \(C\) with edges \((p_{i},p_{j})\), \((p_{j},p_{l})\), and \((p_{l},p_{i})\) is supposed to contain no, one, or two reoriented edges. Colored cones represent a hypothetical possible range for the leader of the corresponding site.
Note that a single site can lie in the s-area of every other site; for example, this is likely for a site that lies very close to the top of the map. Furthermore, there can be \(\mathcal{O}(n^{2})\) undecided pairs. In these cases, the running time of the FPT algorithm becomes \(\mathcal{O}(2^{n}n^{2})\) or even \(\mathcal{O}(2^{n^{2}}n^{4})\). However, a brute-force algorithm that tries all \(2^{n-1}\) embeddings of \(T\) and computes for each the number of leader crossings in \(\mathcal{O}(n^{2})\) time only has a running time in \(\mathcal{O}(2^{n}n^{2})\).
### Integer Linear Programming
As we have seen above, the problem of minimizing the number of leader crossings in drawings of geophylogenies is NP-hard and the preceding algorithms can be expected to be impractical on realistic instances; we have not implemented them. We now provide a practical method to exactly solve instances of moderate size using integer linear programming (ILP).
For the following ILP, we consider an arbitrary embedding of the tree as _neutral_ and describe all embeddings in terms of which internal vertices of \(T\) are rotated with respect to this neutral embedding, i.e. for which internal vertices to swap the left-to-right order of their two children. For two sites \(p_{i}\) and \(p_{j}\), we use \(p_{i}\prec p_{j}\) to denote that \(\ell_{i}\) is left of \(\ell_{j}\) in the neutral embedding. Let \(U\) be the set of undecided pairs, that is, all ordered pairs \((p,q)\) where \(q\) lies inside the s-area of \(p\) (analogously, the po-area for po-leaders); note that these are ordered pairs.
Variables and Objective Function. The program has three groups of binary variables that describe the embedding and crossings.
* \(\rho_{i}\in\{0,1\}\ \forall i\in I(T)\): Rotate internal vertex \(i\) if \(\rho_{i}=1\) and keep its neutral embedding if \(\rho_{i}=0\). Note that rotating the lowest common ancestor of \(\ell_{i}\) and \(\ell_{j}\) is the only way to flip their order, so for convenience we write \(\rho_{ij}\) to mean \(\rho_{\text{lca}(i,j)}\). Note, however, that an internal vertex can be the lowest common ancestor of multiple pairs of leaves.
* \(d_{pq}\in\{0,1\}\ \forall(p,q)\in U\): For each undecided pair \((p,q)\), the leader for \(p\) should pass to the left of site \(q\) if \(d_{pq}=0\) and to the right if \(d_{pq}=1\). This is well-defined since the pair is undecided.
* \(\chi_{pq}\in\{0,1\}\ \forall p,q\in P,\,p<q\): For each pair of distinct sites, the leaders of \(p\) and \(q\) are allowed to cross if \(\chi_{pq}=1\) and are not allowed to cross if \(\chi_{pq}=0\).
There is no requirement that non-crossing pairs have \(\chi_{pq}=0\), but that will be the case in an optimal solution: to minimize the number of crossings, we minimize the sum over all \(\chi_{pq}\).

Constraints. We handle geometry-free pairs and undecided pairs separately.
Consider a geometry-free pair of sites: if the leaders cross in the neutral embedding, we must either allow this, or rotate the lowest common ancestor. Conversely, if they do not cross neutrally, yet we rotate the lowest common ancestor, then we must allow their leaders to cross. Call these sets of pairs \(F_{\text{rotate}}\) and \(F_{\text{keep}}\) respectively, for how to prevent the crossing.
\[\chi_{ij}+\rho_{ij}\geq 1\qquad\forall(i,j)\in F_{\text{rotate}} \tag{2}\]
\[\chi_{ij}-\rho_{ij}\geq 0\qquad\forall(i,j)\in F_{\text{keep}} \tag{3}\]
For undecided pairs \((p,q)\), a three-way case distinction on \([p\prec q]\), \(\rho_{pq}\), and \(d_{pq}\) reveals the following geometry:
* pairs with \(p\prec q\) have crossing leaders if and only if \(\rho_{pq}+d_{pq}=1\);
* pairs with \(p\succ q\) have crossing leaders if and only if \(\rho_{pq}+d_{pq}\neq 1\).
Recall that we do not force \(\chi\) to be zero if there is no intersection, only that it is \(1\) if there _is_ an intersection. We implement these conditions in the ILP as follows. Let \(U_{\text{left}}\subseteq U\) be the undecided pairs with \(p\prec q\).
\[\rho_{pq}-d_{pq}\leq\chi_{pq}\qquad\forall(p,q)\in U_{\text{left}} \tag{4}\]
\[d_{pq}-\rho_{pq}\leq\chi_{pq}\qquad\forall(p,q)\in U_{\text{left}} \tag{5}\]
Conversely, let \(U_{\text{right}}\subseteq U\) be the undecided pairs with \(p\succ q\).
\[\rho_{pq}+d_{pq}-1\leq\chi_{pq}\qquad\forall(p,q)\in U_{\text{right}} \tag{6}\]
\[1-\rho_{pq}-d_{pq}\leq\chi_{pq}\qquad\forall(p,q)\in U_{\text{right}} \tag{7}\]
Finally, we must ensure that each leader \(s_{i}\) respects the \(d\) variables: the line segment from \(p_{i}\) to \(\ell_{i}\) must pass by each other site in the \(\mathsf{s}\)-area on the correct side. By their definition, this does not affect geometry-free pairs, but it remains to constrain the leaf placement for undecided pairs.
Observe that the \(\rho\) variables together fix the leaf order, since they fix the embedding of \(T\). Let \(L_{i}(\rho)\) be the function that gives the x-coordinate of \(\ell_{i}\) given the \(\rho\) variables. Note that \(L_{i}\) is linear in each of the \(\rho\) variables: rotating an ancestor of \(\ell_{i}\) shifts its leaf location by a particular constant, and rotating a non-ancestor does not affect it.
For an undecided pair \((p_{i},p_{j})\), consider a leader starting at \(p_{i}\) and extending up through \(p_{j}\): for s-leaders this is the ray from \(p_{i}\) through \(p_{j}\), for po-leaders this is the vertical line through \(p_{j}\). Let \(\mathrm{x}^{*}(i,j)\) be the x-coordinate of where this extended leader intersects the top of the map and note that this is a constant; see Fig. 20. If \(d_{ij}=0\), then \(\ell_{i}\) must be to the left of this intersection; if \(d_{ij}=1\), it must be to the right. We model this in the ILP with two constraints and the _big-M method_, where it suffices to set \(M=n\).
\[L_{i}(\rho)-d_{ij}M\leq\mathrm{x}^{*}(i,j)\qquad\forall(p_{i},p_{j})\in U \tag{8}\]
\[L_{i}(\rho)+(1-d_{ij})M\geq\mathrm{x}^{*}(i,j)\qquad\forall(p_{i},p_{j})\in U \tag{9}\]
This completes the ILP.

Figure 20: Defining \(\mathrm{x}^{*}(i,j)\) by extending a leader from \(p_{i}\) through \(p_{j}\). This partitions the leaf positions into those where the leader passes left of \(p_{j}\) and those where it passes right.
The numbers of variables and constraints are both quadratic in \(n\). Just counting the \(\chi\) variables already gives this bound, and we note that in particular the number of undecided pairs leads to additional variables (and seemingly more complicated constraints).
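To illustrate, here is a compact sketch of this ILP in Python with the PuLP modeling library. All geometric preprocessing is assumed to have been done already and is passed in: the geometry-free pairs that cross neutrally (`F_rotate`) or not (`F_keep`), the undecided pairs split by neutral order (`U_left`, `U_right`), the lowest common ancestors (`lca`), the neutral leaf x-coordinates (`neutral_x`), the signed shift of each leaf per rotated ancestor (`shift`), and the constants \(\mathrm{x}^{*}\) (`x_star`). These names are our own; this is a sketch, not the authors' implementation.

```python
import pulp

def build_ilp(n, internal, lca, F_rotate, F_keep, U_left, U_right,
              neutral_x, shift, x_star):
    prob = pulp.LpProblem("geophylo_crossings", pulp.LpMinimize)
    rho = {a: pulp.LpVariable(f"rho_{a}", cat="Binary") for a in internal}
    U = list(U_left) + list(U_right)
    d = {(p, q): pulp.LpVariable(f"d_{p}_{q}", cat="Binary") for p, q in U}
    chi = {(p, q): pulp.LpVariable(f"chi_{p}_{q}", cat="Binary")
           for p in range(n) for q in range(p + 1, n)}

    def x(i, j):                       # chi is indexed with p < q
        return chi[(min(i, j), max(i, j))]

    prob += pulp.lpSum(chi.values())   # objective: number of crossings
    for i, j in F_rotate:              # Eq. (2)
        prob += x(i, j) + rho[lca[(i, j)]] >= 1
    for i, j in F_keep:                # Eq. (3)
        prob += x(i, j) - rho[lca[(i, j)]] >= 0
    for p, q in U_left:                # Eqs. (4) and (5)
        prob += rho[lca[(p, q)]] - d[(p, q)] <= x(p, q)
        prob += d[(p, q)] - rho[lca[(p, q)]] <= x(p, q)
    for p, q in U_right:               # Eqs. (6) and (7)
        prob += rho[lca[(p, q)]] + d[(p, q)] - 1 <= x(p, q)
        prob += 1 - rho[lca[(p, q)]] - d[(p, q)] <= x(p, q)
    M = n                              # big-M, as in the text
    for p, q in U:                     # Eqs. (8) and (9)
        L_p = neutral_x[p] + pulp.lpSum(s * rho[a] for a, s in shift[p].items())
        prob += L_p - d[(p, q)] * M <= x_star[(p, q)]
        prob += L_p + (1 - d[(p, q)]) * M >= x_star[(p, q)]
    return prob
```

Calling `prob.solve()` with any PuLP-supported solver then yields the rotation variables of an optimal embedding; Gurobi, as used in Section 5, can be plugged in the same way.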
### Heuristics
Since the ILP from the previous section can be slow in the worst case and requires advanced solver software, we now suggest a number of heuristics.
Bottom-Up. First, we use a dynamic program similar to the one in Section 3 and commit to an embedding for each subtree while going up the tree. At this point we note that counting the number of crossings is not a leaf-additive objective function in the sense of Section 3. However, Eq. 1 does enable us to introduce an additional cost based on where an entire subtree is placed and where its sibling subtree is placed - just not minimized over the embedding of these subtrees. More precisely, for an inner vertex \(v\) of \(T\) with children \(x\) and \(y\), let \(C(x,y,i)\) be the number of crossings between \(T(x)\) and \(T(y)\) when placed starting at position \(i\) and \(i+n(x)\), respectively; this can be computed in \(\mathcal{O}(n(v)^{2})\) time. Note that this ignores any crossings with leaders from other parts of the tree. With base case \(H(\ell,i)=0\) for every leaf \(\ell\), we use
\[H(v,i)=\min\{\quad H(x,i)+H(y,i+n(x))+C(x,y,i),\quad H(y,i)+H(x,i+n(y))+C(y,x,i) \quad\}\]
to pick a rotation of \(T(v)\). Since \(H\) can be evaluated in \(\mathcal{O}(n^{2})\) time, the heuristic runs in \(\mathcal{O}(n^{4})\) time total. The example in Fig. 21 demonstrates that this does not minimize the total number of crossings.
Top-Down. The second heuristic traverses \(T\) from top to bottom (i.e. in pre-order) and chooses a rotation for each inner vertex \(v\) based on how many leaders would cross the vertical line between the two subtrees of \(v\); see Fig. 22. More precisely, suppose that \(T(v)\) has its leftmost leaf at position \(i\) based on the rotations of the vertices above \(v\). For \(x\) and \(y\) the children of \(v\), consider the rotation of \(v\) where \(T(x)\) is placed starting at position \(i\) and \(T(y)\) is placed starting at position \(i+n(x)\). Let \(s\) be the x-coordinate in the middle between the last leaf of \(T(x)\) and the first leaf of \(T(y)\). We compute the number of leaders of \(T(v)\) that cross the vertical line at \(s\), and the same for the reverse rotation of \(v\); the smaller result is chosen and the rotation fixed. This procedure considers each site at most \(\mathcal{O}(n)\) times and thus runs in \(\mathcal{O}(n^{2})\) time.
Leaf-Additive Dynamic Programming. Thirdly, we could optimize any of the quality measures for internal labeling (Section 3). These measures produce generally sensible leaf orders in quadratic time, and we may expect the number of leader crossings to be low.
Figure 21: The bottom-up heuristic is not always optimal: combining the locally best leaf orders for \(T(x)\) and \(T(y)\) might not result in the minimum number of leader crossings for \(T(v)\).
Figure 22: The top-down heuristic tries both rotations of \(v\) and here would pick (a).
Greedy (Hill Climbing). Finally, we consider a hill climbing algorithm that, starting from some leaf order, greedily performs rotations that improve the number of crossings. This could start from a random leaf order, a hand-made one, or from any of the other heuristics. Evaluating a rotation can be done in \(\mathcal{O}(n^{2})\) time and thus one round through all vertices runs in \(\mathcal{O}(n^{3})\) time.
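The Greedy heuristic is easy to reproduce. The following self-contained Python sketch implements it for s-leaders on the nested-tuple tree encoding from the Section 2 sketch; the proper-intersection test assumes sites and leaf positions in general position, mirroring the general-position assumption made for IndexOffset.

```python
from itertools import combinations

def leaves(t):
    return [t] if isinstance(t, int) else leaves(t[0]) + leaves(t[1])

def seg_cross(a1, a2, b1, b2):
    """Do the open segments a1a2 and b1b2 properly intersect?"""
    def o(p, q, r):
        v = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
        return (v > 0) - (v < 0)
    return o(a1, a2, b1) != o(a1, a2, b2) and o(b1, b2, a1) != o(b1, b2, a2)

def crossings(tree, sites, leaf_xy):
    """Number of s-leader crossings; leaf_xy[k] is the coordinate of
    leaf position k (left to right)."""
    pos = {leaf: leaf_xy[k] for k, leaf in enumerate(leaves(tree))}
    return sum(seg_cross(pos[i], sites[i], pos[j], sites[j])
               for i, j in combinations(pos, 2))

def rotations(t):
    """All trees obtained from t by rotating exactly one internal vertex."""
    if isinstance(t, int):
        return []
    x, y = t
    return [(y, x)] + [(x2, y) for x2 in rotations(x)] \
                    + [(x, y2) for y2 in rotations(y)]

def greedy(tree, sites, leaf_xy):
    """Hill climbing: apply improving rotations until a local optimum."""
    best, cost = tree, crossings(tree, sites, leaf_xy)
    improved = True
    while improved:
        improved = False
        for cand in rotations(best):
            c = crossings(cand, sites, leaf_xy)
            if c < cost:
                best, cost, improved = cand, c, True
                break
    return best, cost
```

Seeding `greedy` with, say, the Bottom-Up result, as suggested above, starts the search close to a good solution.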
## 5 Experimental Evaluation
This section is based on our implementation of the ILP and the heuristics. The code is available online at github.com/joklawitter/geophylo, and data are available from the corresponding authors upon request.
### Test Data
We use three procedures to generate random instances. For each type and with 10 to 100 taxa (in increments of 5), we generated 10 instances; we call these the _synthetic instances_. We stop at 100 since geophylogeny drawings with more taxa are rarely well-readable. Example instances are shown in Fig. 23.
**Uniform**: Place \(n\) sites on the map uniformly at random. Generate the phylogenetic tree by repeating the following merging procedure. Pick an unmerged site or a merged subtree uniformly at random, then pick a second with probability distributed by inverse distance to the first, and merge them; as the position of a subtree, we take the median coordinate on both axes. (A sketch of this procedure follows after this list.)
**Coastline**: Initially place all sites equidistantly on a horizontal line, then slightly perturb the x-coordinates. Next, starting at the central site and going outwards in both directions, change the y-coordinate of each site randomly (up to 1.5 times the horizontal distance) from the y-coordinate of the previous site. Construct the tree as before.
**Clustered**: These instances group multiple taxa into clusters. First a uniformly random number of sites between three and ten is allocated for a cluster and its center is placed at a uniformly random point on the map. Then for each cluster, we place sites randomly in a disk around the center with size proportional to the cluster size. Construct \(T\) as before, but first for each cluster separately and only then for the whole instance.
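As an illustration, a minimal Python sketch of the Uniform generator follows; it reuses the nested-tuple tree encoding, and the map dimensions, the small constant guarding against division by zero, and the exact inverse-distance weighting are illustrative assumptions of ours.

```python
import math
import random
from statistics import median

def uniform_instance(n, width=100.0, height=60.0, seed=None):
    """Generate a Uniform instance: n random sites plus a tree built by
    inverse-distance-biased merging; subtree positions are coordinatewise
    medians of their sites, as described above."""
    rng = random.Random(seed)
    sites = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(n)]
    # Each part is (subtree, coordinates of the sites below it).
    parts = [(i, [sites[i]]) for i in range(n)]

    def pos(coords):
        return (median(x for x, _ in coords), median(y for _, y in coords))

    while len(parts) > 1:
        t1, c1 = parts.pop(rng.randrange(len(parts)))
        p1 = pos(c1)
        weights = [1.0 / (1e-9 + math.dist(p1, pos(c2))) for _, c2 in parts]
        t2, c2 = parts.pop(rng.choices(range(len(parts)), weights=weights)[0])
        parts.append(((t1, t2), c1 + c2))
    return sites, parts[0][0]
```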
In addition, we consider three real-world instances derived from published drawings. **Fish** is a 14-taxon geophylogeny by Williams and Johnson [46]. **Lizards** is a 20-taxon geophylogeny by Jauss et al. [26], where the sites are mostly horizontally dispersed (see Fig. 2b). **Frogs** is a 64-taxon geophylogeny by Ellepola et al. [18], where the sites are rather chaotically dispersed on the map; the published drawing of **Frogs** uses s-leaders and has over 680 crossings.
### Experimental Results
We now describe the main findings from our computational experiments.
The ILP is fairly quick for s-leaders. Our implementation uses Python to generate the ILP instance and Gurobi 10 to solve it; we ran the experiments on a 10-core Apple M1 Max processor. The Python code takes negligible time; practically all time is spent in the ILP solver. As expected, we observe that the running time is exponential in \(n\), but only moderately so (Fig. 24). Instances with up to about 50 taxa can usually be solved optimally within a second, but for Clustered and Uniform instances the ILP starts to get slow at about 100 taxa. We note that geophylogenies with over 100 taxa should probably not be drawn with external labeling: for example, the Frogs instance can be drawn optimally by the ILP in about \(0.5\;\mathrm{s}\), but even though this improves the number of crossings from the published 680 to the optimal 609, the drawing is so messy as to be unreadable (Fig. 26b). We further observe that Coastline instances are solved trivially fast, since with fewer undecided pairs the ILP is smaller and presumably easier to solve.

Figure 24: Computing optimal s-leader drawings using the ILP.
The ILP is noticeably slower for po-leaders. Instances with up to 25 taxa are still drawn comfortably within a second, but at 50 taxa the typical runtime is over a minute. We conjecture this is due to the increased number of undecided pairs when working with po-leaders.
The synthetic instances have a superlinear number of crossings. The Clustered instances can be drawn with significantly fewer crossings than Uniform: this matches our expectation, as by construction there is more correlation between the phylogenetic tree and the geography of the sites. More surprisingly, we find that the Coastline instances require many crossings. We may have made these geophylogenies too noisy, but this observation does warn of the generally quadratic growth in the number of crossings, which makes external labeling unsuitable for large geophylogenies unless the geographic correlation is exceptionally good.
The heuristics run instantly and Greedy is often optimal.The heuristics are implemented in single-threaded Java code. Bottom-Up, Top-Down and Leaf-Additive all run instantly, and even the Greedy hill climber runs in a fraction of a second. Of the first three heuristics, Bottom-Up consistently achieves the best results for both s- and po-leaders. Comparing the best solution by these heuristics with the optimal drawing (Fig. 25), we observe that the number crossings in excess of the optimum increases with the number of taxa, in particular for Uniform and Clustered
Figure 23: Examples of generated instances with 20 taxa, here shown with s-leaders. The leaf order was computed with the Greedy hill climber.
instances; Coastline instances are always drawn close to optimally by at least one heuristic. The Greedy hill climber almost always improves this to an optimal solution.
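A minimal Python sketch of a hill climber of this kind is given below; the paper's Greedy operates on its own neighborhood, so the single child-swap moves and the `cost` oracle used here are simplifying assumptions.

```python
def leaves(t):
    """Leaf order of a binary tree encoded as nested 2-tuples / leaf labels."""
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def hill_climb(tree, cost):
    """Greedy hill climbing over leaf orders: repeatedly swap the two
    subtrees of one internal node while this lowers cost(leaf_order);
    cost is an assumed oracle, e.g. a leader-crossing counter."""
    def neighbors(t):
        if not isinstance(t, tuple):
            return
        a, b = t
        yield (b, a)                      # swap children at the root of t
        for a2 in neighbors(a):
            yield (a2, b)                 # swap somewhere inside the left subtree
        for b2 in neighbors(b):
            yield (a, b2)                 # swap somewhere inside the right subtree
    best = cost(leaves(tree))
    improved = True
    while improved:
        improved = False
        for cand in neighbors(tree):
            c = cost(leaves(cand))
            if c < best:
                tree, best, improved = cand, c, True
                break
    return tree
```

For example, `hill_climb((('a', 'b'), ('c', 'd')), my_cost)` returns a tree whose induced leaf order is locally optimal for the hypothetical `my_cost` oracle.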
For the number of crossings, po-leaders are promising. Our heuristics require on average only about \(73\%\) as many crossings when using po-leaders compared to s-leaders (\(55\%\) for Coastline instances); the Lizards example in Fig. 2b requires \(11\) s-leader crossings but only \(2\) po-leader crossings. We therefore propose that po-leaders deserve more attention from the phylogenetic community.
Algorithmic recommendations. Our results show that the ILP is a good choice for geophylogeny drawings with external labeling. If no solver software is at hand or it is technically challenging to set up (for example when making an app that runs locally in a user's web browser), then the heuristics offer an effective and efficient alternative, especially Bottom-Up and Greedy.
For the Fish instance, for example, we found that the drawing with s-leaders and \(17\) crossings in Fig. 26(a) is a good alternative to the internal labeling used in the published drawing [46]. However, for instances without a clear structure or with many crossings, it might be better to use internal labeling. Alternatively, the tree could be split as done by Tobler et al. [43], such that different subtrees are each shown with the map in separate drawings.
## 6 Discussion and Open Problems
In this paper, we have shown that drawings of geophylogenies can be approached theoretically and practically as a problem of algorithmic map labeling. We formally defined a drawing style for geophylogenies that uses either internal labeling with text or colors, or external labeling with s/po-leaders. This allowed us to define optimization problems that can be tackled algorithmically. For drawings with internal labeling, we introduced a class of quality measures that can be optimized efficiently, and even interactively when provided with user hints. In practice, designers can thus try different quality measures, pick their favorite, and make further adjustments easily
Figure 24: Computing optimal s-leader drawings using the ILP.
even for large instances. For external labeling, minimizing the number of leader crossings is NP-hard in general. Crossing-free instances, on the other hand, can be found in polynomial time, yet our algorithm still runs in \(\mathcal{O}(n^{6})\) time. Furthermore, for drawings with \(\mathsf{s}\)-leaders, we showed that if the sites lie relatively close to a horizontal line, then in the best scenario an \(\mathcal{O}(n\log n)\)-time algorithm and otherwise an FPT algorithm can be used. While we found similar results for drawings with po-leaders, it seems unlikely that geophylogenies arising in practice have the required properties. Hence, we provided multiple algorithmic approaches to solve this problem and demonstrated experimentally that they perform well in practice.
Even though we have provided a solid base of results, we feel the algorithmic study of geophylogeny drawings holds further promise by varying, for example, the type of leader used, the objective function, the composition of the drawing, or the nature of the phylogeny and the map. Several of these directions show parallels to the variations found in boundary labeling problems. We finish this paper with several suggestions for future work.
One might consider do- and pd-leaders, which use a diagonal segment and can be aesthetically pleasing; see Fig. 27. We expect that some of our results (such as the NP-hardness of crossing minimization and the effectiveness of the heuristics) should hold for these leader types. The boundary labeling literature [7] studies even further types, such as opo and Bezier, and these might be more challenging to adapt.
For external labeling we have only considered the total number of crossings. If different colors are used for the leaders of different clades or if the drawing can be explored with an interactive tool, one might want to minimize the number of crossings within each clade (or a particular clade). Furthermore, one might optimize crossing angles or insist on a minimum distance between leaders and unrelated sites. While we provided heuristics to minimize leader crossings, the development of approximation algorithms, which exist for other labeling problems [5, 33], could be of interest.
Our model of a geophylogeny drawing can be expanded as well. One might allow the orientation of the map to be freely rotated, the extent of the map to be changed, or the leaves to be placed non-equidistantly. Optimizing over these additional freedoms poses new algorithmic challenges. Straying further from our model, some drawings in the literature have a circular tree around the
Figure 25: Average number of \(\mathsf{s}\)-leader crossings made by the best heuristic minus the number of \(\mathsf{s}\)-leader crossings in the optimal drawing, averaged over 10 random instances per value of \(n\).
map [28, 37]; see Fig. 28. This is similar to contour labeling in the context of map labeling [35]. Also recall that Fig. 1 has area features. Our quality measures for internal labeling are easily adapted to handle this, but (as is the case with general boundary labeling [6]) area features provide additional algorithmic challenges for external labeling. The literature contains many drawings where multiple taxa correspond to the same feature on the map [13] (see also again Fig. 28) and where we might want to look to many-to-one boundary labeling [33, 4]. Furthermore, one can consider non-binary phylogenetic trees and phylogenetic networks.
Lastly, we note that side-by-side drawings can also be used for a phylogenetic tree together with a diagram other than a map: Chen et al. [15] combine it with a scatter plot; Gehring et al. [21] even combine three items (phylogenetic tree, haplotype network, and map).
|
2309.13973 | Perverse Filtrations via Brylinski-Radon transformations | In this article, we prove the $t$-exactness of a Brylinski-Radon
transformation taking values in sheaves on flag varieties. This implies several
weak Lefschetz type results for cohomology. In particular, we obtain de
Cataldo-Migliorini's P=Dec(F) and Beilinson's basic lemma, the latter was an
important ingredient in their proof of P=Dec(F). Our methods also allow the
sharpening of Esnault-Katz's cohomological divisibility theorem and estimates
for the Hodge level. Finally, we upgrade P=Dec(F) to an equivalence of functors
which is also valid over a base. | Ankit Rai, K. V. Shuddhodan | 2023-09-25T09:10:20Z | http://arxiv.org/abs/2309.13973v1 | # Perverse filtrations via Brylinski-Radon transformations
###### Abstract.
In this article, we prove the \(t\)-exactness of a Brylinski-Radon transformation taking values in sheaves on flag varieties. This implies several weak Lefschetz type results for cohomology. In particular, we obtain de Cataldo-Migliorini's P=Dec(F) and Beilinson's basic lemma, the latter was an important ingredient in their proof of P=Dec(F). Our methods also allow the sharpening of Esnault-Katz's cohomological divisibility theorem and estimates for the Hodge level. Finally, we upgrade P=Dec(F) to an equivalence of functors which is also valid over a base.
Key words and phrases:KVS was supported by the CARMIN project fellowship 1991 Mathematics Subject Classification: 1A variety is an irreducible and separated scheme of finite type over \(k\).
On the other hand using the perverse truncation functors one can associate a natural decreasing filtration \(P^{\bullet}\) on \(\mathrm{H}^{i}(Z,\mathrm{K})\) called the _perverse filtration_ as follows
\[P^{j}\left(\mathrm{H}^{i}(Z,\mathrm{K})\right):=\mathrm{Image}\left(\mathrm{H}^{ i}(Z,{}^{p}\tau_{\leqslant-j}\,\mathrm{K})\to\mathrm{H}^{i}(Z,\mathrm{K}) \right). \tag{1.2}\]
One of the principal results of [10] is the following result comparing the above two filtrations.
**Theorem 1.0.1**.: _(de Cataldo-Migliorini)[10, Theorem 4.1.1] Let \(Z\subseteq\mathbb{A}^{N}\) be an affine variety and let \(\mathrm{K}\) be a sheaf on \(Z\). Let \(Z_{\bullet}\) be a full flag as above, obtained by (repeatedly) intersecting \(Z\) with generic hyperplane sections. Then_
\[P^{j}\mathrm{H}^{i}(Z,\mathrm{K})=F^{i+j}\mathrm{H}^{i}(Z,\mathrm{K}).\]
The proof of Theorem 1.0.1 proceeds by constructing an isomorphism between the perverse spectral sequence and the shifted flag spectral sequence given by a generic flag, and obtaining the isomorphism on the filtrations as a corollary. A key vanishing result that allows them to compare these spectral sequences is the strong weak Lefschetz theorem2 due to Beilinson [10, Theorem 5.1.2]. They also prove a quasi-projective version of the above result [10, Theorem 4.2.1], using a similar strategy. The vanishing result needed again comes from the strong weak Lefschetz theorem.
Footnote 2: This is the name used in [10, Theorem 5.1.2]. A form of the statement in [11] is called the basic lemma.
### Brylinski's Radon transform
The goal of this article is to study perverse filtrations using Brylinski's generalization of the Radon transforms [1]. These are sheaf theoretic analogues of the usual Radon transforms and take as input sheaves on \(\mathbb{P}\) and produce a sheaf on the Grassmannian \(\mathbf{G}(d)\). In his article, Brylinski gave several applications of the transform to the Lefschetz theory [1, I.III] and especially to the microlocal study of sheaves. More recently in his seminal article [1], Beilinson used Brylinski's Radon transform to give a construction of the singular support in the algebraic setting.
Our starting point is the observation that, there are strong restrictions on the perverse cohomologies of the Radon transforms of perverse sheaves on \(\mathbb{P}\), that come from an affine scheme mapping quasi-finitely to \(\mathbb{P}\). These restrictions suitably reinterpreted give rise to several vanishing statements.
#### 1.1.1. Perverse filtrations in practice
An interesting special case of the perverse spectral sequence (and hence the perverse filtration) is when \(\mathrm{K}\) is a pushforward of a sheaf along a morphism, and in this case, the spectral sequence is the perverse analogue of the usual Leray spectral sequence. In recent years the perverse filtration has been an active area of research and is one-half of the eponymous P=W conjecture of de Cataldo-Hausel-Migliorini [10]. There is a rich body of work around this topic and we mention a few references here [14], [17], [10]. Finally, we conclude this section by fixing some notations and conventions
#### 1.1.2. etale sheaves
Through this article, we work over an algebraically closed field \(k\) and a prime \(\ell\) invertible in \(k\). We also fix a coefficient ring \(\Lambda\) which is either a finite ring with torsion invertible in \(k\) or an algebraic extension \(E/\mathbb{Q}_{\ell}\). For any scheme \(X/k^{3}\), we denote by \(D^{b}_{c}(X)\) the bounded derived category of constructible etale sheaves with finite tor-dimension and with coefficient in \(\Lambda\). In what follows by a sheaf we simply mean an object in \(D^{b}_{c}(X)\). We shall call a sheaf like if all its cohomology sheaves are lisse. Finally \(D^{b}_{c}(X)\) comes equipped with the standard [1, 1.1.2, (e)] and a perverse \(t\)-structure [1, SS4.0]4. We shall denote the associated perverse truncation and cohomology functors by \(({}^{p}\tau^{\leqslant 0},{}^{p}\tau^{\geqslant 0})\) and \({}^{p}\mathcal{H}^{i}\) respectively.
Footnote 4: For us schemes are always separated and finite type over the base field \(k\).
## 2. Summary of results
Let \(\mathbb{P}\) be the projective space of dimension \(N\) over \(k\). For an integer \(d\) with \(0\leqslant d\leqslant N\) we denote the Grassmannian of \(d\)-planes in \(\mathbb{P}\) by \(\mathbf{G}(d)\). Thus in this notation \(\mathbf{G}(N-1)\) is the dual space of \(\mathbb{P}\), also denoted by \(\mathbb{P}^{\vee}\). As a convention we set \(\mathbf{G}(d)=\emptyset\) for \(d<0\) and \(\mathbf{G}(d)=\mathbf{G}(N)\) for \(d>N\). For \(d\geqslant 0\) (resp. \(d<0\)) and a closed point \(v\in\mathbf{G}(d)\), we denote by \(\Lambda_{v}\) the corresponding linear subspace (resp. the empty set).
### \(\mathbf{P}=\mathbf{Dec(F)}\) on cohomology
Let \(Z\) be a scheme and \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism. Then the following result is proved in §5.3.
**Theorem 2.1.1**.: _For any sheaf \(\mathrm{K}\) on \(Z\), there exists an open dense \(V\subseteq\mathbf{G}(d)\) such that, for any closed point \(v\) of \(V\) (with \(\Lambda_{v}\) the corresponding \(d\)-plane) and any integer \(i\),_
\[P^{-i}\mathrm{H}^{d-N+1+i}(Z,\mathrm{K})\supseteq\ker\left(\mathrm{H}^{d-N+1 +i}(Z,\mathrm{K})\to\mathrm{H}^{d-N+1+i}(\phi^{-1}(\Lambda_{v}),\mathrm{K}\,|_{ \phi^{-1}(\Lambda_{v})})\right).\]
_Moreover, when \(Z\) is affine, we may choose \(V\) such that the above inclusion becomes an equality._
We prove Theorem 2.1.1 without resorting to spectral sequences, and in fact do so by fixing the codimension \(d\), as opposed to dealing with a full flag. By varying \(d\) and renumbering the indices, it is clear that Theorem 2.1.1 implies Theorem 1.0.1.
Our proof of Theorem 2.1.1 follows from certain \(t\)-exactness properties of Brylinski-Radon transformations (see Proposition 5.2.1). These, as opposed to Theorem 2.1.1, remain valid over a base. We now discuss a special case of Proposition 5.2.1.
### \(t\)-exactness of \(\mathcal{R}_{N-1!}\circ\phi_{*}\) implies the basic lemma
Consider the diagram
\[Q\overset{i}{\hookrightarrow}\mathbb{P}\times_{k}\mathbb{P}^{\vee}\overset{j}{\hookleftarrow}U,\]
with \(\pi\colon\mathbb{P}\times_{k}\mathbb{P}^{\vee}\to\mathbb{P}\) and \(\pi^{\vee}\colon\mathbb{P}\times_{k}\mathbb{P}^{\vee}\to\mathbb{P}^{\vee}\) the two projections;
here \(Q\) is the universal hyperplane section and \(U\) is its open complement. Define a functor \(\mathcal{R}_{N-1!}\colon D^{b}_{c}(\mathbb{P})\to D^{b}_{c}(\mathbb{P}^{\vee})\), as follows
\[\mathcal{R}_{N-1!}:=\pi_{*}^{\vee}j_{!}j^{!}\pi^{\dagger}.\]
A special case of Proposition 5.2.1 is the following result.
**Proposition 2.2.1**.: _Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism. Then the functor \(\mathcal{R}_{N-1!}\circ\phi_{*}\) is left \(t\)-exact. Moreover, when \(Z\) is affine, it is \(t\)-exact._
The proof of Proposition 2.2.1 is via a 'double' Artin vanishing, Lemma 3.2.4. We also refer the reader to Proposition 5.5.1, where it is shown that under the assumption of \(Z\) affine, the functor \(\mathcal{R}_{N-1!}\circ\phi_{*}\) from \(\mathcal{P}(Z)\)6 to \(\mathcal{P}(\mathbb{P}^{\vee})\) is in addition _faithful_.
A consequence of Proposition 2.2.1 is the strong weak Lefschetz Theorem from [11] due to Beilinson.
Footnote 6: For a smooth morphism \(f:X\to Y\) of relative dimension \(d\) with geometrically connected fibers, we set \(f^{\dagger}:=f^{*}[d]\).
**Theorem 2.2.2**.: _(Beilinson) Let \(\phi:Z\to\mathbb{P}\) be a quasi-finite7 and affine morphism. Let \(\mathrm{K}\) be perverse. Then_
Footnote 7: In [11] this is stated for \(\phi\) an affine open immersion.
_(a) the cohomology groups \(\mathrm{H}^{i}(Z,j_{1!}j_{1}^{!}j_{2*}j_{2}^{*}\,\mathrm{K})\) vanish for \(i\neq 0\), where \(j_{r}\colon V_{r}\hookrightarrow Z,r=1,2\) are inverse images (under \(\phi\)) of generic hyperplane section complements._

_(b) If \(Z\) is itself affine, then \(\mathrm{H}^{i}(Z,j_{!}j^{!}\,\mathrm{K})=0\) for \(i\neq 0\). Here \(j\colon V\hookrightarrow Z\) is the inverse image (under \(\phi\)) of a generic hyperplane section complement._
Theorem 2.2.2(b) follows from Proposition 2.2.1, while Theorem 2.2.2(a) is obtained by using a relative version of Proposition 2.2.1 and its dual. See §5.4.1 for details.
### Applications to cohomological divisibility and Hodge level
Proposition 2.2.1 also implies Deligne's weak Lefschetz theorem [12, Corollary A.5] under the weaker hypothesis of upper semi-perversity (see Corollary 5.4.2). This allowed us to obtain a weak Lefschetz theorem for the Gysin map (see Proposition 6.1.1), thus answering a question of Esnault-Wan [10]. Then, following a strategy of Esnault-Wan [10], we prove the following strengthening of Deligne's integrality theorem and Esnault-Katz's cohomological divisibility theorem [1, Theorem 2.3, (2)].
#### 2.3.1. Statement on cohomological divisibility
Let \(\mathbb{F}_{q}\) be a finite field with \(q\) elements. Let \(Z_{0}\subset\mathbb{A}^{N}_{\mathbb{F}_{q}}\) be an affine scheme defined by the vanishing of \(r\) polynomials of degrees \(d_{1},d_{2},\cdots,d_{r}\), with \(N,r\geq 1\). We denote by \(Z\) (resp. \(\mathbb{A}^{N}\)) the base change of \(Z_{0}\) (resp. \(\mathbb{A}^{N}_{\mathbb{F}_{q}}\)) to an algebraic closure (say \(\bar{\mathbb{F}}_{q}\)) of \(\mathbb{F}_{q}\).
**Theorem 2.3.1**.: _The eigenvalues of the geometric Frobenius acting on_
_(a) \(\mathrm{H}^{\dim(Z)+j}_{c}(Z)\), for \(j\geq 0\), and_

_(b) \(\operatorname{H}_{c}^{\dim(Z)+j+1}(\mathbb{A}^{N}\backslash Z)\), for \(0\leq j\leq\dim(Z)+1\),_

_are divisible by \(q^{\mu_{j}(N;d_{1},d_{2}\cdots,d_{r})}\) (see Equation (6.2))._
We also have an analogue of Theorem 2.3.1 about Hodge levels. We refer the reader to Theorem 6.2.1 for the precise statement and to §6.2.1 for the proof of Theorem 2.3.1.
### \(\mathbf{P}=\operatorname{\mathbf{Dec}(F)}\) as functors
In the final §7 we upgrade Theorem 2.1.1 into an equivalence of functors. The results in the section are motivated by those in [1, §5].
We make use of both geometric input (via Proposition 4.2.2) and results from stable \(\infty\)-categories via Theorem A.3.1 and Corollary A.5.1. The interested reader may refer to §A.1 for a possible explanation as to why we needed to make a detour out of \(1\)-categories. We briefly describe the setup here. Let \(S/k\) be a base scheme and let \(\mathscr{E}\) be a locally free sheaf of rank \(N+1\) on \(S\). We denote by \(\mathbb{P}\) and \(\operatorname{Fl}\) the associated projective bundle and flag bundle. Let \(\overline{\pi}\) (resp. \(\overline{\pi}^{\vee}\)) be the projection maps from \(\mathbb{P}\times_{S}\operatorname{Fl}\) to \(\mathbb{P}\) (resp. \(\operatorname{Fl}\)).
#### 2.4.1. The functor \(\operatorname{F}\)
1. On \(\mathbb{P}\times_{S}\operatorname{Fl}\) we have the _universal_ flag \(\overline{Q}_{0}:=\mathbb{P}\times_{S}\operatorname{Fl}\supset\overline{Q}_{ -1}\supset\overline{Q}_{-2}\cdots\overline{Q}_{-N}\supset\overline{Q}_{-N-1}=\emptyset\).
2. For \(-1\geq r\geq-N-1\), we let \(\overline{U}_{r}\) be the complement of \(\overline{Q}_{r}\), and denote by \(\bar{l}_{r}:\overline{U}_{r}\hookrightarrow\mathbb{P}\times_{S}\operatorname{Fl}\) the corresponding open immersion. Thus we have adjunction maps \(\overline{l}_{r!}\overline{l}_{r}^{!}\to\overline{l}_{r-1!}\overline{l}_{r-1}^{!}\).
This allows us to define a functor \(\operatorname{F}\colon\mathcal{D}_{\operatorname{cons}}(\mathbb{P})\to \mathcal{D}F_{\operatorname{cons}}(\operatorname{Fl})\) as follows
\[\operatorname{K}\mapsto\operatorname{F}(\operatorname{K})^{\bullet}:=\{ \cdots\to\operatorname{F}^{i}\operatorname{K}\to\operatorname{F}^{i-1} \operatorname{K}\to\operatorname{F}^{i-2}\operatorname{K}\cdots\},\]
where
\[\operatorname{F}^{r}\operatorname{K}=\left\{\begin{array}{ll}\overline{\pi} _{*}^{\vee}\overline{\pi}^{!}\operatorname{K}&\text{if }r\leq-N-1=-\text{rk}(\mathscr{E})\\ \overline{\pi}_{*}^{\vee}\overline{l}_{r-1!}\overline{l}_{r-1}^{!}\overline{ \pi}^{!}\operatorname{K}&\text{if }-N\leq r\leq 0\\ 0&\text{if }r>0\end{array}\right.. \tag{2.1}\]
Here \(\mathcal{D}_{\operatorname{cons}}(\mathbb{P})\) and \(\mathcal{D}F_{\operatorname{cons}}(\operatorname{Fl})\) are symmetric monoidal stable \(\infty\)-categories. The homotopy category of \(\mathcal{D}_{\operatorname{cons}}(\mathbb{P})\) is \(D_{c}^{b}(\mathbb{P})\), and the objects in \(\mathcal{D}F_{\operatorname{cons}}(\operatorname{Fl})\) can be thought of as constructible complexes with a finite filtration. For a precise definition see §A.5.
#### 2.4.2. The functor \(\operatorname{P}\)
We also define a functor \(\operatorname{P}\colon\mathcal{D}_{\operatorname{cons}}(\mathbb{P})\to \mathcal{D}F_{\operatorname{cons}}(\operatorname{Fl})\) as follows
\[\operatorname{K}\mapsto\operatorname{P}(\operatorname{K})^{\bullet}:=\{\cdots \to\operatorname{P}^{i}\operatorname{K}\to\operatorname{P}^{i-1}\operatorname{ K}\to\operatorname{P}^{i-2}\operatorname{K}\cdots\},\]
where
\[\operatorname{P}^{i}\operatorname{K}:=\overline{\pi}_{*}^{\vee}\overline{\pi}^{ \dagger p}\tau_{\leq-i}\operatorname{K}.\]
#### 2.4.3. Beilinson \(t\)-structure on \(\mathcal{D}F_{\mathrm{cons}}(\mathrm{Fl})\) and the functor \(\mathrm{Dec}\)
In the appendix, we equip \(\mathcal{D}F_{\mathrm{cons}}(\mathrm{Fl})\) with the following additional structures.
1. A Beilinson \(t\)-structure (see Theorem A.3.1 and Proposition A.5.1).
2. A functor \(\mathrm{Dec}\) (see Definition A.4.4 and Proposition A.5.1), which is to mimic Deligne's décalée [10, 1.3.3].
#### 2.4.4. Statement of the Theorem
We can now state our result. For a proof see §7.
**Theorem 2.4.1**.: _Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism with \(Z/S\) affine. Then_
1. \(\mathrm{F}\circ\phi_{*}\) _is_ \(t\)_-exact, for the perverse_ \(t\)_-structure on_ \(\mathcal{D}_{\mathrm{cons}}(\mathbb{P})\) _and Beilinson_ \(t\)_-structure on_ \(\mathcal{D}F_{\mathrm{cons}}(\mathrm{Fl})\)_._
2. _There is a natural equivalence of functors_ \[\mathrm{P}\circ\phi_{*}\simeq\mathrm{Dec}(\mathrm{F}\circ\phi_{*}).\]
3. _There is a functorial (in_ \(\mathrm{K}\)_) isomorphism of spectral sequences with values in_ \(\mathcal{P}(\mathrm{Fl})\) _between the constant perverse spectral sequence_ \[E_{1}^{i,j}={}^{p}\mathcal{H}^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger p}\mathcal{H}^{-j}(\phi_{*}\,\mathrm{K})[j])\implies{}^{p}\mathcal{H}^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger}\phi_{*}\,\mathrm{K})\] _and the décalée of the universal flag filtration spectral sequence_ \[E_{1}^{i,j}={}^{p}\mathcal{H}^{i+j}(\mathrm{Gr}_{\mathrm{F}}^{-j}\phi_{*}\,\mathrm{K}[j])\implies{}^{p}\mathcal{H}^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger}\phi_{*}\,\mathrm{K}).\]
When \(S=\mathrm{Spec}(k)\), by localizing at a generic point of \(\mathrm{Fl}\) we recover the result in [1, Theorem 4.1.1].
**Acknowledgements:**
This work owes an overwhelming intellectual debt to Beilinson's article [1]. Though we do not use the results from [1] in this article, its masterful use of the Radon transform was very insightful for us. KVS would like to thank Charanya Ravi and Donu Arapura for their helpful discussions.
### Leitfaden
(Leitfaden: a dependency diagram relating the Beilinson \(t\)-structure on \(\mathcal{D}F_{\mathrm{cons}}\), Theorem A.3.1, and Proposition A.5.1 to the main results above.)
Proof.: (a) is [1, Proposition 4.2.5] and (b) is [1, Proposition 2.2.5] combined with Artin vanishing.
The following lemma is standard and proved here for ease of exposition.
**Lemma 3.1.3**.: _Let \(X/k\) be a smooth variety and \(\mathrm{K}\) be a lisse sheaf on \(X\). Then there is a natural isomorphism \({}^{p}\mathcal{H}^{i}(\mathrm{K})=\mathcal{H}^{i}(\mathrm{K}[-\mathrm{dim}(X)])[\mathrm{dim}(X)]\)._
Proof.: We induct on the number \(N\) of non-zero \(\mathcal{H}^{i}(\mathrm{K})\). The statement is true when \(N\leq 1\). Now assume \(N>1\) and let \(r\) be the least integer such that \(\mathcal{H}^{r}(\mathrm{K})\neq 0\). We have an exact triangle
\[\mathcal{H}^{r}(\mathrm{K})[-r]\longrightarrow\mathrm{K}\longrightarrow\tau_{\geq r+1}\,\mathrm{K}\xrightarrow{+1}.\]
The outer terms are lisse and satisfy the conclusion: the first by the case \(N=1\) and the third by the induction hypothesis. In particular, \({}^{p}\mathcal{H}^{i}(\mathcal{H}^{r}(\mathrm{K})[-r])\) is concentrated in degree \(i=r+\dim(X)\), while \({}^{p}\mathcal{H}^{i}(\tau_{\geq r+1}\,\mathrm{K})\) vanishes for \(i\leq r+\dim(X)\). The long exact sequence of perverse cohomologies therefore splits into isomorphisms identifying \({}^{p}\mathcal{H}^{i}(\mathrm{K})\) with \(\mathcal{H}^{i-\dim(X)}(\mathrm{K})[\dim(X)]\) for every \(i\), as desired.
Proof.: Using (b), by smooth base change \(\pi^{*}_{\overline{X}}\phi_{Z*}K\simeq\phi_{X*}\pi^{*}K\), under the natural map. Hence \(\pi^{*}_{\overline{Y}}\phi_{Z*}K=i^{*}_{\overline{Y}}\pi^{*}_{\overline{X}}\phi_ {Z*}K\simeq i^{*}_{\overline{Y}}\phi_{X*}\pi^{*}K\) under the natural map. Since \(\pi_{\overline{Y}}\) is also smooth by assumption, the result follows from another application of smooth base change.
Combining Lemmas 3.2.1 and 3.2.2 we obtain the following corollary.
**Corollary 3.2.3**.: _The functor \(\mathcal{R}_{!}:=\pi^{\vee}_{\overline{X}*}j_{\overline{U}!}j^{!}_{\overline{U}}\pi^{\dagger}_{\overline{X}}\phi_{Z*}\) is naturally equivalent to \((\pi^{\vee}_{\overline{X}}\circ\phi_{X})_{*}j_{U!}j^{!}_{U}\pi^{\dagger}_{X}\)._
Consider the following additional conditions on the Diagram (3.1).
1. The morphism \(\phi_{Z}:Z\to\bar{Z}\) is a quasi-finite morphism, and \(\pi^{\vee}_{\overline{X}}\circ j_{\overline{U}}\) is affine.
2. The morphism \(\pi^{\vee}_{\overline{X}}\circ\phi_{X}\) is affine.
Now we can state the main lemma of this section. This abstracts out the 'double' Artin vanishing needed for the \(t\)-exactness of the Radon transforms with source an affine scheme.
**Lemma 3.2.4**.: _With the notations as above, if (a)-(d) are satisfied then \(\mathcal{R}_{!}\) is left \(t\)-exact. Moreover if (e) also holds then \(\mathcal{R}_{!}\) is \(t\)-exact._
Proof.: By (b) above, since \(\pi^{\vee}_{\overline{X}}\) is proper we can rewrite \(\mathcal{R}_{!}=(\pi^{\vee}_{\overline{X}}\circ j_{\overline{U}})_{!}j^{!}_{\overline{U}}\pi^{\dagger}_{\overline{X}}\phi_{Z*}\). Since \(\pi^{\vee}_{\overline{X}}\circ j_{\overline{U}}\) is affine by assumption ((c) above), Artin vanishing implies \((\pi^{\vee}_{\overline{X}}\circ j_{\overline{U}})_{!}\) is left \(t\)-exact. Since \(\phi_{Z}\) is quasi-finite, [1, Proposition 2.2.5] implies that \(\phi_{Z*}\) is left \(t\)-exact. Thus \(\mathcal{R}_{!}\) is left \(t\)-exact.
When (e) holds, using Corollary 3.2.3 it suffices to show that \((\pi^{\vee}_{\overline{X}}\circ\phi_{X})_{*}j_{U!}j^{!}_{U}\pi^{\dagger}_{X}\) is right \(t\)-exact. By [1, §4.2.4], we know that \(j_{U!}\) is right \(t\)-exact. Moreover since \(\pi^{\vee}_{\overline{X}}\circ\phi_{X}\) is affine ((e) above), Artin vanishing implies the right \(t\)-exactness of \((\pi^{\vee}_{\overline{X}}\circ\phi_{X})_{*}\). Thus \(\mathcal{R}_{!}\) is right \(t\)-exact. This proves the exactness of \(\mathcal{R}_{!}\).
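The shape of the argument is worth recording: the two factorizations
\[\mathcal{R}_{!}\simeq(\pi^{\vee}_{\overline{X}}\circ j_{\overline{U}})_{!}\,j^{!}_{\overline{U}}\,\pi^{\dagger}_{\overline{X}}\,\phi_{Z*}\qquad\text{and}\qquad\mathcal{R}_{!}\simeq(\pi^{\vee}_{\overline{X}}\circ\phi_{X})_{*}\,j_{U!}\,j^{!}_{U}\,\pi^{\dagger}_{X}\]
display \(\mathcal{R}_{!}\) once as a composition of left \(t\)-exact functors and once as a composition of right \(t\)-exact ones; this double use of Artin vanishing is what the name of the lemma refers to.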
## 4. Graded quotients of a Brylinski-Radon transformation
We begin by setting up notations and recalling some basic facts about flag bundles [1, 1]. In the rest of the article, let \(S\) be the base scheme.
### Recollection about Flag bundles
1. Let \(\mathscr{E}\) be a locally free sheaf of rank \(N+1\) on \(S\). Let \(\mathbb{P}(\mathscr{E})\) be the associated projective bundle. Thus for any scheme \(f\colon X\to S\), the set \(\mathbb{P}(\mathscr{E})(X)\) is the collection of rank \(1\) locally free quotients of \(f^{*}\mathscr{E}\) on \(X\).
2. We denote by \(\operatorname{Fl}(\mathscr{E})\)9 the full flag bundle of \(\mathscr{E}\). More precisely, \(\operatorname{Fl}\) represents the functor defined on schemes over \(S\), which to a scheme \(f:X\to S\) associates the set of tuples \((\mathscr{L}_{r})_{N\geq r\geq 1}\) of locally free sheaves on \(X\) such that for all \(r\), \(\mathscr{L}_{r}\) is of rank \(r\) and is a quotient of \(\mathscr{L}_{r+1}\). Here we set \(\mathscr{L}_{N+1}=f^{*}\mathscr{E}\). Note that \(\operatorname{Fl}\) is smooth over \(S\).
3. We have the _universal_ flag \(\overline{Q}_{0}:=\mathbb{P}\times_{S}\operatorname{Fl}\supseteq\overline{Q}_{-1}\supseteq\overline{Q}_{-2}\cdots\overline{Q}_{-N}\supseteq\overline{Q}_{-N-1}=\emptyset\). For any scheme \(f\colon X\to S\), \(\overline{Q}_{r}(X)\subseteq\mathbb{P}(X)\times\operatorname{Fl}(X)\) consists of those one-dimensional quotients of \(f^{*}\mathscr{E}\) which are also quotients of \(\mathscr{L}_{N+r+1}\) in the above notation.
4. The closed immersion \(\overline{Q}_{r}\hookrightarrow\mathbb{P}\times_{S}\operatorname{Fl}\) is regular of codimension \(-r\), and moreover \(\overline{Q}_{r}\) is smooth over \(S\). The maps \(\overline{Q}_{r}\backslash\overline{Q}_{r-1}\to\operatorname{Fl}\) are affine (with fibers isomorphic to \(\mathbb{A}^{N+r}\)). These statements can be verified Zariski locally over \(S\), in particular when \(\mathscr{E}=\mathscr{O}_{S}^{\oplus N+1}\), where they are obvious.
5. For \(-1\geqslant r\geqslant-N-1\), we let \(\overline{U}_{r}\) be the complement of \(\overline{Q}_{r}\), and denote by \(\overline{l}_{r}:\overline{U}_{r}\hookrightarrow\mathbb{P}\times_{S}\operatorname{Fl}\) the corresponding open immersion. Thus we have adjunction maps \(\overline{l}_{r!}\overline{l}_{r}^{!}\to\overline{l}_{r-1!}\overline{l}_{r-1}^{!}\).
6. For \(-1\geqslant r\geqslant-N-1\), we denote by \(\overline{l}_{r,r-1}\colon\overline{Q}_{r}\backslash\overline{Q}_{r-1} \hookrightarrow\overline{Q}_{0}\) the locally closed immersion.
7. Finally, let \(\overline{\pi}\) (resp. \(\overline{\pi}^{\vee}\)) be the projection map from \(\mathbb{P}\times_{S}\operatorname{Fl}\) to \(\mathbb{P}\) (resp. \(\operatorname{Fl}\)).
### \(t\)-exactness of the graded pieces of a Brylinski-Radon transformation
We continue using the notations from §4.1. Consider the functors \(\operatorname{F}_{S}^{r}\) and \(\operatorname{Gr}_{\operatorname{F}_{S}}^{r}\)10 from \(D_{c}^{b}(\mathbb{P})\) to \(D_{c}^{b}(\operatorname{Fl})\) defined below
Footnote 10: We shall suppress \(S\) from the notation when there is no scope for confusion.
\[\operatorname{F}^{r}(\operatorname{K})=\left\{\begin{array}{ll}\overline{ \pi}_{*}^{\vee}\overline{\pi}^{\dagger}\operatorname{K}&\text{if }r\leqslant-N-1=- \text{rk}(\mathscr{E})\\ \overline{\pi}_{*}^{\vee}\overline{l}_{r-1!}\overline{l}_{r-1}^{\dagger} \overline{\pi}^{\dagger}\operatorname{K}&\text{if }-N\leqslant r\leqslant 0\\ 0&\text{if }r>0\end{array}\right. \tag{4.1}\]
\[\operatorname{Gr}_{\operatorname{F}}^{r}(\operatorname{K})=\left\{\begin{array}[] {ll}0&\text{if }r\leqslant-N-1=-\text{rk}(\mathscr{E})\\ \overline{\pi}_{*}^{\vee}\overline{l}_{r,r-1!}\overline{l}_{r,r-1}^{*} \overline{\pi}^{\dagger}\operatorname{K}&\text{if }-N\leqslant r \leqslant 0\\ 0&\text{if }r>0\end{array}\right.\,. \tag{4.2}\]
Now note that by definition one has a triangle
\[\operatorname{F}^{r+1}\operatorname{K}\longrightarrow\operatorname{F}^{r}\operatorname{K}\longrightarrow\operatorname{Gr}_{\operatorname{F}}^{r}\operatorname{K}\xrightarrow{+1}. \tag{4.3}\]
**Remark 4.2.1**.: Our notation here is suggestive and we would like to think of \(\operatorname{F}\) as a functor to the filtered derived category of \(\operatorname{Fl}\). However until §7, we will only need results about \(\operatorname{Gr}_{\operatorname{F}}^{r}\), and hence avoid the enhancement until required.
**Proposition 4.2.2**.: _Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism. Then \(\operatorname{Gr}_{\operatorname{F}}^{r}\circ\phi_{*}[r]\) is left \(t\)-exact. Moreover, if \(Z\) is affine over \(S\), then it is \(t\)-exact._
Proof.: We denote by \(Q_{r}\) (resp. \(l_{r}\)) the base change of \(\overline{Q}_{r}\) (resp. \(\overline{l}_{r}\)) along \(Z\times_{S}\operatorname{Fl}\to\mathbb{P}\times_{S}\operatorname{Fl}\). We have a diagram
\[Q_{r}\backslash Q_{r-1}\xrightarrow{\ j_{r}\ }Q_{r}\xleftarrow{\ i_{r-1}\ }Q_{r-1},\]
compatible with the projections to \(\operatorname{Fl}\). Up to the shift \([r]\), the functor \(\operatorname{Gr}^{r}_{\operatorname{F}}\circ\phi_{*}\) is of the form \(\mathcal{R}_{!}\) considered in Lemma 3.2.4 for this diagram. By §4.1, (4) the map \(Q_{r}\backslash Q_{r-1}\to\operatorname{Fl}\) is affine, and \(\phi\) is quasi-finite, so conditions (a)-(d) of Lemma 3.2.4 hold and \(\operatorname{Gr}^{r}_{\operatorname{F}}\circ\phi_{*}[r]\) is left \(t\)-exact. When \(Z/S\) is affine, condition (e) holds as well, and \(\operatorname{Gr}^{r}_{\operatorname{F}}\circ\phi_{*}[r]\) is \(t\)-exact.
For this article it would be useful to study the following (modified) Brylinski-Radon transforms,
\[\mathcal{R}_{S,d!}:=\pi_{d*}^{\vee}j_{d!}j_{d}^{\dagger}\pi_{d}^{\dagger}, \tag{5.2}\]
and
\[\mathcal{R}_{S,d*}:=\pi_{d*}^{\vee}j_{d*}j_{d}^{*}\pi_{d}^{\dagger}. \tag{5.3}\]
#### 5.1.2. Basic properties of the Brylinski-Radon transformation
1. Note that (up to Tate twists) \(\mathcal{R}_{d*}\)12 is dual to \(\mathcal{R}_{d!}\) (see the sketch after this list). Also \(\mathcal{R}_{d}\) is self-dual (up to Tate twists13). Footnote 12: As before we suppress \(S\) from the notation when there is no scope for confusion.
2. The triple \((Q_{d},\mathbb{P}\times_{S}\mathbf{G}(d),U_{d})\) gives rise to a triangle \[i_{d*}i_{d}^{*}[-1]\longrightarrow j_{d!}j_{d}^{!}\longrightarrow\mathrm{id}\xrightarrow{+1},\] and hence to a triangle on \(\mathbf{G}(d)\) \[\mathcal{R}_{d}\longrightarrow\mathcal{R}_{d!}[d-N+1]\longrightarrow\pi_{d*}^{\vee}\pi_{d}^{\dagger}[d-N+1]\xrightarrow{+1}. \tag{5.4}\]
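A sketch of the duality in item 1 above, assuming the standard compatibilities \(\mathbb{D}f_{!}\simeq f_{*}\mathbb{D}\) and \(\mathbb{D}j^{*}\simeq j^{*}\mathbb{D}\) for an open immersion \(j\), together with \(\mathbb{D}\,\pi_{d}^{\dagger}\simeq\pi_{d}^{\dagger}(c)\,\mathbb{D}\) for the smooth projection \(\pi_{d}\) of relative dimension \(c:=\dim\mathbf{G}(d)=(d+1)(N-d)\):
\[\mathbb{D}\circ\mathcal{R}_{d!}\circ\mathbb{D}\simeq\pi_{d*}^{\vee}\,j_{d*}\,j_{d}^{*}\,\pi_{d}^{\dagger}(c)=\mathcal{R}_{d*}(c),\]
where we also used that \(\pi_{d}^{\vee}\) is proper, so that \(\pi_{d*}^{\vee}=\pi_{d!}^{\vee}\).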
### \(t\)-exactness properties of \(\mathcal{R}_{d!}\circ\phi_{*}\)

Let \(f_{d}\colon\operatorname{Fl}\to\mathbf{G}(d)\) denote the natural projection, which is smooth with geometrically connected fibers, so that \(f_{d}^{\dagger}\) is \(t\)-exact and faithful on perverse sheaves. Smooth base change yields a natural isomorphism
\[f_{d}^{\dagger}\circ\mathcal{R}_{d!}\simeq\mathrm{F}^{d-N+1}. \tag{5.6}\]

**Proposition 5.2.1**.: _Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism. Then_

_(a) \(\mathcal{R}_{d!}[d-N+1]\circ\phi_{*}\) is left \(t\)-exact._

_(b) If moreover \(Z/S\) is affine, then \(\mathcal{R}_{d!}\circ\phi_{*}\) and \(\mathcal{R}_{d}\circ\phi_{*}\) are right \(t\)-exact._

Proof.:
_Case: \(\phi\) quasi-finite_
Using the isomorphism (5.6), to prove (a) it suffices to show that for any \(0\leqslant d\leqslant N-1\), \(\mathrm{F}^{-N+d+1}\circ\phi_{*}[-N+d+1]\) is left \(t\)-exact. For every integer \(r\) and sheaf \(\mathrm{K}\) on \(Z\), we have an exact triangle
\[\mathrm{F}^{r+1}(\phi_{*}\,\mathrm{K})\longrightarrow\mathrm{F}^{r}(\phi_{*}\,\mathrm{K})\longrightarrow\mathrm{Gr}^{r}_{\mathrm{F}}(\phi_{*}\,\mathrm{K})\xrightarrow{+1}.\]
Thus \(\mathrm{F}^{r}\circ\phi_{*}[r]\) is left \(t\)-exact if \(\mathrm{F}^{r+1}\circ\phi_{*}[r+1]\) and \(\mathrm{Gr}^{r}_{\mathrm{F}}\circ\phi_{*}[r]\) are so. By Proposition 4.2.2 and by descending induction (on \(r\)) we are reduced to showing the left \(t\)-exactness for the case \(r=0\) (or \(d=N-1\)), in which case \(\mathrm{F}^{0}\circ\phi_{*}=\mathrm{Gr}^{0}_{\mathrm{F}}\circ\phi_{*}\), and the left \(t\)-exactness follows from Proposition 4.2.2.
_Case: \(\phi\) quasi-finite and \(Z/S\) affine_
Using the isomorphism (5.6) as before, to show right \(t\)-exactness of \(\mathcal{R}_{d!}\circ\phi_{*}\) for any \(d\), it suffices to show the right \(t\)-exactness of \(\mathrm{F}^{r}\circ\phi_{*}\) for any \(r\). For any sheaf \(\mathrm{K}\) on \(Z\) we have an exact triangle
\[\mathrm{F}^{r+1}(\phi_{*}\,\mathrm{K})\longrightarrow\mathrm{F}^{r}(\phi_{*}\,\mathrm{K})\longrightarrow\mathrm{Gr}^{r}_{\mathrm{F}}(\phi_{*}\,\mathrm{K})\xrightarrow{+1}.\]
By Proposition 4.2.2, \(\mathrm{Gr}^{r}_{\mathrm{F}}\circ\phi_{*}[r]\) is \(t\)-exact, and hence \(\mathrm{Gr}^{r}_{\mathrm{F}}\circ\phi_{*}[r]\) is right \(t\)-exact for \(r\leqslant 0\). Hence for \(r\leqslant-1\), \(\mathrm{F}^{r}\circ\phi_{*}[r]\) is right \(t\)-exact if \(\mathrm{F}^{r+1}\circ\phi_{*}[r]\) is so. By descending induction on \(r\) we are reduced to the case \(r=0\), in which case it follows from Proposition 4.2.2.
Now we show the right \(t\)-exactness of \(\mathcal{R}_{d}\circ\phi_{*}\). The statement is trivially true for \(d=0\). Now suppose \(d\geqslant 1\). Proceeding by induction on \(d\), it is enough to show that \(f_{d}^{\dagger}\mathcal{R}_{d}\circ\phi_{*}\) is right \(t\)-exact whenever \(f_{d-1}^{\dagger}\mathcal{R}_{d-1}\circ\phi_{*}\) is so. As before we have a Cartesian diagram
(5.7)
Hence using the isomorphism in (4.5), for any sheaf \(K\) on \(Z\) and \(r=d-N+1\leqslant 0\), we obtain an exact triangle
We have already shown that \(\mathrm{Gr}^{r}_{\mathrm{F}}\circ\phi_{*}\) is right \(t\)-exact (for \(r\leqslant 0\)) and thus we are done.
**Remark 5.2.2**.: Using §5.1.2, (a) we can now deduce the dual statements. More precisely, \(\mathcal{R}_{d*}\circ\phi_{!}[N-d-1]\) is right \(t\)-exact when \(\phi\) is quasi-finite. Moreover when \(Z/S\) is affine, \(\mathcal{R}_{d*}\circ\phi_{!}\) and \(\mathcal{R}_{d}\circ\phi_{!}\) are left \(t\)-exact.
### P=Dec(F) on cohomology
In this section, we work over \(S=\mathrm{Spec}(k)\). For ease of notation, we shall denote \(\mathcal{R}_{d!}\) (resp. \(\mathcal{R}_{d}\)) simply by \(R_{d!}\) (resp. \(R_{d}\)). Now we shall prove Theorem 2.1.1 using Proposition 5.2.1.
Proof.: For \(d\leqslant 0\), [1, 4.2.4] implies that \(\mathrm{H}^{d-N+1+i}(Z,{}^{p}\tau_{\geqslant i}\,\mathrm{K})=0\) and hence \(P^{-i}\mathrm{H}^{d-N+1+i}(Z,\mathrm{K})=\mathrm{H}^{d-N+1+i}(Z,\mathrm{K})\). Moreover when \(Z\) is affine, Artin vanishing implies that \(\mathrm{H}^{d-N+1+i}(Z,{}^{p}\tau_{\leqslant i}\,\mathrm{K})=0\) for \(d\geqslant N\) and hence \(P^{-i}\mathrm{H}^{d-N+1+i}(Z,\mathrm{K})=0\). Thus Theorem 2.1.1 is trivially true when \(d\leqslant 0\) or \(d\geqslant N\), and consistent with our convention for \(\mathbf{G}(d)\). Now we assume \(0<d<N\).
Applying the perverse cohomology functor to the triangle (5.4) for the sheaves \(\phi_{*}\,\mathrm{K}\) and \(\phi_{*}{}^{p}\tau_{\leqslant i}K\), we get a commutative diagram of perverse sheaves whose rows and left column are exact
\[{}^{p}\mathcal{H}^{i}(\mathcal{R}_{d!}[d-N+1](\phi_{*}{}^{p}\tau_{\geqslant i+1}K))\] \[{}^{p}\mathcal{H}^{i}(\mathcal{R}_{d!}[d-N+1](\phi_{*}K))\] \[{}^{p}\mathcal{H}^{i}(\mathcal{R}_{d!}[d-N+1](\phi_{*}{}^{p}\tau_{\leqslant i}K))\]
Now Proposition 5.2.1 implies that the perverse sheaf \({}^{p}\mathcal{H}^{i}(\mathcal{R}_{d!}[d-N+1](\phi_{*}{}^{p}\tau_{\geqslant i+1}K))\) vanishes, and thus \(\ker(\alpha_{i})\subseteq\mathrm{Im}(\beta_{i})\). Moreover when \(Z/S\) is affine, \({}^{p}\mathcal{H}^{i+1}(\mathcal{R}_{d}(\phi_{*}{}^{p}\tau_{\leqslant i}K))\) vanishes too and hence \(\ker(\alpha_{i})=\mathrm{Im}(\beta_{i})\). Now let \(V\subseteq\mathbf{G}(d)\) be the locus where
(a) \(\mathcal{R}_{d}(\phi_{*}K)\) is lisse.

(b) The stalk of \(\mathcal{R}_{d}(\phi_{*}K)[-d(N-d)]\) at any closed point \(v\in V\) is isomorphic to \(R\Gamma(\phi^{-1}(\Lambda_{v}),\mathrm{K}\,|_{\phi^{-1}(\Lambda_{v})})\).
By constructibility of \(\mathcal{R}_{d}(\phi_{*}K)\) and generic base change [13, Chapitre 7, Théorème 1.9], \(V\) contains a dense open subset, and we replace \(V\) with that open subset. Since restriction to an open subset is \(t\)-exact, we have \(\ker(\alpha_{i}|_{V})\subseteq\mathrm{Im}(\beta_{i}|_{V})\) (resp. \(\ker(\alpha_{i}|_{V})=\mathrm{Im}(\beta_{i}|_{V})\), as the case may be). By smooth base change and Lemma 3.1.3, \(\mathrm{Im}(\beta_{i}|_{V})\) is the constant perverse sheaf with values in \(P^{-i}\mathrm{H}^{i+d-N+1}(Z,\mathrm{K})\). On the other hand, by (a) and (b) above, \(\ker(\alpha_{i}|_{V})\) is the constant perverse sheaf with values in \(\ker\left(\mathrm{H}^{i+d-N+1}(Z,\mathrm{K})\to\mathrm{H}^{i+d-N+1}(\phi^{-1}(\Lambda_{v}),\mathrm{K}\,|_{\phi^{-1}(\Lambda_{v})})\right)\) for any \(v\) in \(V\).
**Remark 5.3.1**.: The argument above implies that \(\ker(\alpha_{i})\subseteq\mathrm{Im}(\beta_{i})\) (resp. \(\ker(\alpha_{i})=\mathrm{Im}(\beta_{i})\)) remains valid even when the base is not necessarily a field.
### The case \(d=N-1\)
We continue using the notations from the earlier sections. Now we look at the special case of \(d=N-1\). We denote the Grassmannian \(\mathbf{G}(N-1)\) by \(\mathbb{P}^{\vee}\). Under this assumption, Proposition 5.2.1 and Remark 5.2.2 imply the following14.
Footnote 14: Corollary 5.4.1 is equivalent to the exactness of \(\mathrm{Gr}_{F}^{0}\) in Proposition 4.2.2, and thus can be readily deduced directly from Lemma 3.2.4 as in the proof of Proposition 4.2.2.
**Corollary 5.4.1**.: _Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism. Then \(\mathcal{R}_{N-1!}\circ\phi_{*}\) is left \(t\)-exact and \(\mathcal{R}_{N-1*}\circ\phi_{!}\) is right \(t\)-exact. Moreover, if \(Z/S\) is affine then they are both \(t\)-exact._
For the next two results, we work over \(S=\operatorname{Spec}(k)\) and use notations from §5.3.
**Corollary 5.4.2**.: _Let \(\phi:Z\to\mathbb{P}\) be a quasi-finite morphism. Let \(\operatorname{K}\) be an upper semi-perverse sheaf on \(Z\). Then the cohomology groups \(\operatorname{H}^{i}(Z,j_{!}j^{!}\operatorname{K})\) vanish for \(i<0\). Here \(j\colon V\hookrightarrow Z\) is the inverse image (under \(\phi\)) of a generic hyperplane section complement._
Proof.: By constructibility and Deligne's generic base change, we may choose an open dense subset \(V\hookrightarrow\mathbb{P}^{\vee}\) such that \(\mathcal{R}_{N-1!}(\phi_{*}\operatorname{K})\) is lisse and the stalk of \(\mathcal{R}_{N-1!}(\phi_{*}\operatorname{K})[-N]\) at a closed point \(v\in V\) is isomorphic to \(R\Gamma(Z,j_{v!}j^{!}_{v}\operatorname{K})\), where \(j_{v}\colon Z\backslash\phi^{-1}(\Lambda_{v})\hookrightarrow Z\) is the hyperplane complement. Since \(\operatorname{K}\) is upper semi-perverse on \(Z\), Corollary 5.4.1 gives \({}^{p}\mathcal{H}^{i}(\mathcal{R}_{N-1!}\circ\phi_{*}(\operatorname{K}))=0\) for \(i<0\), and this vanishing continues to hold when restricted to \(V\). Thus by Lemma 3.1.3, \(\operatorname{H}^{i}(Z,j_{v!}j^{!}_{v}\operatorname{K})=0\) for \(i<0\).
#### 5.4.1. Proof of Theorem 2.2.2
Proof.: The argument for (b) is similar to Corollary 5.4.2, where one has exactness of \(\mathcal{R}_{N-1!}\circ\phi_{*}\) (as opposed to only left \(t\)-exactness of \(\mathcal{R}_{N-1!}\circ\phi_{*}\)), thanks to \(Z/k\) being affine. Now we shall prove (a).
Let \(\mathcal{R}_{\mathbb{P}^{\vee},!}\colon D^{b}_{c}(\mathbb{P}\times_{k}\mathbb{P}^{\vee})\to D^{b}_{c}(\mathbb{P}^{\vee}\times_{k}\mathbb{P}^{\vee})\) be the \(!\)-Radon transform over the base \(\mathbb{P}^{\vee}\) (see Equation (5.2)). Similarly let \(\mathcal{R}_{\mathbb{P},*}\colon D^{b}_{c}(\mathbb{P}\times_{k}\mathbb{P})\to D^{b}_{c}(\mathbb{P}\times_{k}\mathbb{P}^{\vee})\) be the \(*\)-Radon transform over the base \(\mathbb{P}\) (see Equation (5.3)).
Let \(j_{N-1}\colon U_{N-1}\hookrightarrow\mathbb{P}\times_{k}\mathbb{P}^{\vee}\) be the complement of the universal hyperplane section (see §5.1, (b)). Let \(\Delta\colon\mathbb{P}\hookrightarrow\mathbb{P}\times_{k}\mathbb{P}\) be the diagonal embedding. It follows from the definition of \(\mathcal{R}_{\mathbb{P},*}\) that
\[\mathcal{R}_{\mathbb{P},*}\circ\Delta_{*}=j_{N-1*}j^{*}_{N-1}\mathcal{R}_{ \mathbb{P},*}\circ\Delta_{*},\]
and thus
\[\mathcal{R}_{\mathbb{P}^{\vee},!}\circ\mathcal{R}_{\mathbb{P},*}\circ\Delta_{* }=\left(\mathcal{R}_{\mathbb{P}^{\vee},!}\circ j_{N-1*}\right)\circ\left(j^{* }_{N-1}\circ\left(\mathcal{R}_{\mathbb{P},*}\circ\Delta_{*}\right)\right).\]
Now Corollary 5.4.1 implies that \(\mathcal{R}_{\mathbb{P},*}\circ\Delta_{*}\) is \(t\)-exact. Moreover \(j_{N-1}\) is obviously quasi-finite and since the composite \(U_{N-1}\hookrightarrow\mathbb{P}\times_{k}\mathbb{P}^{\vee}\to\mathbb{P}^{\vee}\) is affine, Corollary 5.4.1 also implies that \(\mathcal{R}_{\mathbb{P}^{\vee},!}\circ j_{N-1*}\) is \(t\)-exact. This implies the \(t\)-exactness of \(\mathcal{R}_{\mathbb{P}^{\vee},!}\circ\mathcal{R}_{\mathbb{P},*}\circ\Delta_{*}\)15.
Thus for \(\phi\) a quasi-finite and affine morphism, \(\mathcal{R}_{\mathbb{P}^{\vee},!}\circ\mathcal{R}_{\mathbb{P},*}\circ\Delta_ {*}\circ\phi_{*}\) is \(t\)-exact. As before using constructibility and generic base change (on \(\mathbb{P}^{\vee}\times_{k}\mathbb{P}^{\vee}\)) for \(\mathcal{R}_{\mathbb{P}^{\vee},!}\circ\mathcal{R}_{\mathbb{P},*}\circ\Delta_ {*}\circ\phi_{*}\operatorname{K}\), we get (a).
Footnote 15: This statement (and the proof) is also valid over a base \(S\).
### Radon transform from \(\mathcal{P}(Z)\) to \(\mathcal{P}(\mathbb{P}^{\vee})\) is faithful
Suppose we are in the situation of Corollary 5.4.1 with \(Z/S\) assumed to be affine. Thus \(\mathcal{R}_{N-1!}\circ\phi_{*}\) is \(t\)-exact, and we can look at the induced functor (also denoted by \(\mathcal{R}_{N-1!}\circ\phi_{*}\)) between \(\mathcal{P}(Z)\) and \(\mathcal{P}(\mathbb{P}^{\vee})\), which is exact by Corollary 5.4.1. However, one can say more thanks to the existence of the inverse Radon transform.
The existence of an inverse Radon transform implies that \({}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1}\) induces an equivalence of categories between the quotient categories \(\mathcal{A}(\mathbb{P})\)16 and \(\mathcal{A}(\mathbb{P}^{\vee})\)[13, Proposition 5.7]. It follows from the triangle (5.4) that \({}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1!}={}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1}\) on \(\mathcal{A}(\mathbb{P})\). Thus the former also induces an equivalence of categories. Hence we have functors
Footnote 16: Here \(\mathcal{A}(\mathbb{P})\) is the quotient of \(\mathcal{P}(\mathbb{P})\) by the Serre sub-category of perverse sheaves coming from \(S\) (via \(\dagger\)-pullback). Similarly for \(\mathcal{A}(\mathbb{P}^{\vee})\).
\[\mathcal{P}(Z)\xrightarrow{\phi_{*}}\mathcal{P}(\mathbb{P})\xrightarrow{{}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1!}}\mathcal{P}(\mathbb{P}^{\vee}) \tag{5.8}\]
It follows from Corollary 5.4.1 that \({}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1!}\circ\phi_{*}=\mathcal{R}_{N-1!} \circ\phi_{*}\). Consider the diagram
(5.9)
Let \(\mathrm{K}\) be a perverse sheaf such that \(\mathcal{R}_{N-1!}\circ\phi_{*}\,\mathrm{K}\) belongs to Serre sub-category of perverse sheaves coming from \(S\) on \(\mathbb{P}^{\vee}\), then diagrams (5.8) and (5.9) imply that
\[\phi_{*}\,\mathrm{K}\simeq\pi_{\mathbb{P}}^{\dagger}\mathrm{L}\]
for some perverse sheaf \(\mathrm{L}\) on \(\mathbb{P}\). The projection formula then implies that
\[\pi_{Z*}\,\mathrm{K}\simeq\oplus_{i=0}^{N}\mathrm{L}[-2i+N].\]
Since by assumption \(\pi_{Z}\) is affine and \(\mathrm{K}\) perverse, Artin vanishing implies \(\pi_{Z*}\,\mathrm{K}\) is semi-perverse, and thus \(\mathrm{L}=0\) and hence so is \(\mathrm{K}\). Thus for \(\phi\colon Z\to\mathbb{P}\) quasi-finite and \(Z/S\) affine we have proved the following.
**Proposition 5.5.1**.: _The functors \(\mathcal{R}_{N-1!}\circ\phi_{*}\), \({}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1}\circ\phi_{*}\), \({}^{p}\mathcal{H}^{0}\circ\mathcal{R}_{N-1}\circ\phi_{!}\) and \(\mathcal{R}_{N-1*}\circ\phi_{!}\) from \(\mathcal{P}(Z)\) to \(\mathcal{A}(\mathbb{P}^{\vee})\) (and hence also to \(\mathcal{P}(\mathbb{P}^{\vee})\)) are all faithful and exact._
**Remark 5.5.2**.: Corollary 5.4.1 and Proposition 5.5.1 together imply that a sheaf \(\mathrm{K}\) on \(Z\) is perverse iff \(R_{N-1!}\circ\phi_{*}(\mathrm{K})\) is so.
## 6. Applications to divisibility of Frobenius eigenvalues and Hodge level
In this section we shall apply Corollary 5.4.2 to carry out the strategy in [11] to obtain a strengthening of Esnault-Katz's cohomological divisibility theorem [1, Theorem 2.3, (2)]. The projective case was handled in [1, Theorem 1.3].
### A Lefschetz theorem for affine Gysin map
We work over an arbitrary algebraically closed field \(k\). Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite map. Let \(\mathrm{K}\) be a semi-perverse sheaf on \(Z\).
**Lemma 6.1.1**.: _Then for a general hyperplane section \(\Lambda\subset\mathbb{P}\) the Gysin map_
\[\mathrm{H}^{r}_{c}(\phi^{-1}(\Lambda),i^{\dagger}\mathrm{K})\to\mathrm{H}^{r}_ {c}(Z,\mathrm{K}),\]
_is a surjection for \(r=1\) and an isomorphism for \(r>1\). Here \(i\colon\phi^{-1}(\Lambda)\subset Z\) is the closed embedding._
Proof.: By the standard Gysin triangle, it suffices to show that \(\mathrm{H}^{r}_{c}(Z,j_{*}j^{*}\mathrm{K})\) vanishes for \(r>0\). Here \(j\colon Z\backslash\phi^{-1}\Lambda\hookrightarrow Z\) is the complement of a general hyperplane section pullback. The statement is dual to the assertion in Corollary 5.4.2.
Now let \(i\colon Z\hookrightarrow\mathbb{A}^{N}_{k}\) be a closed subvariety of dimension \(n\). Let \(\ell\) be a prime invertible in \(k\).
**Proposition 6.1.2**.: _Then for a general hyperplane section \(\Lambda\subset\mathbb{A}^{N}\), the Gysin map_
\[\mathrm{H}^{r-2}_{c}\left(\Lambda\backslash(\Lambda\cap Z),\mathbb{Q}_{\ell} \right)(-1)\to\mathrm{H}^{r}_{c}(\mathbb{A}^{N}\backslash Z,\mathbb{Q}_{\ell}),\]
_is a surjection for \(r-1>\dim(Z)\)._
Proof.: If \(Z=\emptyset\) we are done. We now assume that \(\dim(Z)\geq 0\). We may also assume \(r<2N\), otherwise the result is obvious. For any hyperplane section \(\Lambda\subset\mathbb{A}^{N}\), we have a commutative diagram with surjective vertical arrows
(6.1)
Applying Lemma 6.1.1 to \(\mathrm{K}=\mathbb{Q}_{\ell}[\dim(Z)]\) on \(Z\), we conclude that for a general hyperplane section \(\Lambda\), the map \(\alpha\) and thus the map \(\beta\) is a surjection for \(r-1>\dim(Z)\).
**Remark 6.1.3**.: Proposition 6.1.2 was already observed for \(Z\subset\mathbb{A}^{N}\), a local complete intersection (and thus \(\mathbb{Q}_{\ell}[\dim(Z)]\) is perverse) in [10] using Deligne's weak Lefschetz Theorem [11, Corollary A.5].
### Bounds on cohomological divisibility
Let \((N;d_{1},d_{2},\cdots,d_{r})\in\mathbb{N}\times\mathbb{N}^{r}\) be a tuple of natural numbers. For each non-negative integer \(j\) consider the following function on \(\mathbb{N}\times\mathbb{N}^{r}\)
\[\mu_{j}(N;d_{1},d_{2}\cdots,d_{r})=j+\max(0,\left\lceil\frac{N-j-\sum_{i}d_{i} }{\max_{i}d_{i}}\right\rceil)\geq j. \tag{6.2}\]
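For later use in the induction step of the proof of Theorem 2.3.1 below, we record an identity which is a one-line check from (6.2): for every \(j\geq 1\),
\[\mu_{j}(N;d_{1},d_{2},\cdots,d_{r})=\mu_{j-1}(N-1;d_{1},d_{2},\cdots,d_{r})+1,\]
since both sides equal \(j+\max(0,\left\lceil\frac{N-j-\sum_{i}d_{i}}{\max_{i}d_{i}}\right\rceil)\).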
Let \(\mathbb{F}_{q}\) be a finite field with \(q\) elements. Let \(Z_{0}\subset\mathbb{A}_{\mathbb{F}_{q}}^{N}\) be an affine scheme defined by the vanishing of \(r\) polynomials of degree \(d_{1},d_{2}\cdots,d_{r}\) with \(N,r\geq 1\). We denote by \(Z\) (resp. \(\mathbb{A}^{N}\)), the base change of \(Z_{0}\) (resp. \(\mathbb{A}_{\mathbb{F}_{q}}^{N}\)) to an algebraic closure (say \(\bar{\mathbb{F}}_{q}\)) of \(\mathbb{F}_{q}\).
_Summary of previous results on cohomological divisibility:_ Let \(\mathrm{H}_{c}^{*}(\ \ )\) denote the compactly supported \(\ell\)-adic cohomology, for \(\ell\nmid q\). By [14, XXI, Appendice], the eigenvalues of the geometric Frobenius on \(\mathrm{H}_{c}^{*}(Z)\) (and \(\mathrm{H}_{c}^{*}(\mathbb{A}^{N}\backslash Z)\)) are algebraic integers and thus it makes sense to talk about their divisibility properties. It follows from [14, XXI, Appendice, (5.2)] that the Frobenius eigenvalues on \(\mathrm{H}_{c}^{N+j}(\mathbb{A}^{N}\backslash Z)\) are divisible by \(q^{j}\)17 for \(j\geq 0\). In [1, Theorem 2.3, (2)], Esnault-Katz showed an improved bound replacing \(q^{j}\) above by \(q^{\mu_{j}(N;d_{1},d_{2}\cdots d_{r})}\). We now prove the main result of this section.
Footnote 17: This holds for any separated scheme of finite type over \(\mathbb{F}_{q}\).
#### 6.2.1. Proof of Theorem 2.3.1
Proof.: The long exact sequence (with values in \(\mathrm{H}_{c}^{*}\)) for the triple \((Z,\mathbb{A}^{N},\mathbb{A}^{N}\backslash Z)\) together with the Frobenius equivariant isomorphism \(R\Gamma_{c}(\mathbb{A}^{N},\mathbb{Q}_{\ell})\simeq\mathbb{Q}_{\ell}(-N)\) implies that it suffices to prove (b). Moreover, in doing so we may freely extend the field of definition of \(Z\), since the geometric Frobenius with respect to \(\mathbb{F}_{q^{r}}\) is the \(r^{\mathrm{th}}\) iterate of the geometric Frobenius with respect to \(\mathbb{F}_{q}\).
First suppose \(\dim(Z)=-1\), then \(Z=\emptyset\) and in this case the theorem is obvious since \(\mathrm{H}_{c}^{\dim(Z)+j+1}(\mathbb{A}^{N}\backslash Z)\) vanishes for \(0\leq j\leq\dim(Z)+1\). Thus we may assume \(\dim(Z)\geq 0\). Next suppose \(N=1\), then since \(d_{i}\geq 1\) we must have \(\dim(Z)=0\) and \(\mu_{j}(N;d_{1},d_{2}\cdots,d_{r})=j\). Also note that \(\mathrm{H}_{c}^{1}(\mathbb{A}^{1}\backslash Z)\simeq\mathrm{H}_{c}^{0}(Z) \simeq\oplus^{\#Z(\mathbb{F}_{q})}\mathbb{Q}_{\ell}\) (the \(j=0\) case) and \(\mathrm{H}_{c}^{2}(\mathbb{A}^{1}\backslash Z)\simeq\mathbb{Q}_{\ell}(-1)\) (the \(j=1\) case). Thus the theorem is verified when \(N=1\). In the rest of the proof, we assume \(N\geq 2\) and \(\dim(Z)\geq 0\).
Using Proposition 6.1.2 and, if necessary, extending the base field (and still denoting it by \(\mathbb{F}_{q}\)), we have a hyperplane \(\Lambda_{0}\subset\mathbb{A}_{\mathbb{F}_{q}}^{N}\) such that the Gysin map
\[\mathrm{H}^{\dim(Z)+j-1}_{c}\left(\Lambda\backslash(\Lambda\cap Z),\mathbb{Q}_{\ell}\right)(-1)\to\mathrm{H}^{\dim(Z)+j+1}_{c}(\mathbb{A}^{N}\backslash Z,\mathbb{Q}_{\ell}) \tag{6.3}\]
is a surjection for \(j>0\) and such that \(\dim(\Lambda\cap Z)=\dim(Z)-1\). Here \(\Lambda\) is the base change of \(\Lambda_{0}\) to \(\bar{\mathbb{F}}_{q}\). Note that \(\Lambda\cap Z\subset\Lambda\) is defined by \(r\) equations of degrees \(d_{1},d_{2},\cdots,d_{r}\) in \(\Lambda\), an affine space of dimension \(N-1\geq 1\).
By induction on \(\dim(Z)\), we may assume that the eigenvalues of the geometric Frobenius acting on \(\mathrm{H}_{c}^{(\dim(Z)-1)+(j-1)+1}\left(\Lambda\backslash(\Lambda\cap Z)\right)\) are divisible by \(q^{\mu_{j-1}(N-1;d_{1},d_{2},\cdots,d_{r})}\) for \(0\leq j-1\leq\dim(Z)\). Thus the surjection in (6.3) implies that the eigenvalues of the geometric Frobenius acting on \(\mathrm{H}_{c}^{\dim(Z)+j+1}(\mathbb{A}^{N}\backslash Z)\) are divisible by \(q^{\mu_{j-1}(N-1;d_{1},d_{2},\cdots,d_{r})+1}=q^{\mu_{j}(N;d_{1},d_{2},\cdots,d_{r})}\) (where we have accounted for the Tate twist on the left), in the range \(1\leq j\leq\dim(Z)+1\). Finally for \(j=0\), the result follows from [1, Theorem 2.3, (1)].
An analogous argument can be used to deduce the following result about Hodge levels18, where instead of [1, Theorem 2.3, (1)] one uses [1, §5.3] in the proof of Theorem 2.3.1 above. In the next theorem \(Z\subset\mathbb{A}_{\mathbb{C}}^{N}\) is an affine scheme defined by \(r\) polynomials of degree \(d_{1},d_{2},\cdots,d_{r}\) with \(N,r\geq 1\).
Footnote 18: The Hodge level of a Mixed Hodge structure \((\mathrm{H}_{\mathbb{Z}},\mathrm{W}_{\bullet},\mathrm{F}^{\bullet})\) is the largest integer \(n\) such that \(\mathrm{F}^{n}\mathrm{H}_{\mathbb{C}}=\mathrm{H}_{\mathbb{C}}\).
**Theorem 6.2.1**.: _The Hodge levels of_
1. \(\mathrm{H}_{c}^{\dim(Z)+j}(Z)\)_, for_ \(j\geq 0\) _and_
2. \(\mathrm{H}_{c}^{\dim(Z)+j+1}(\mathbb{A}^{N}\backslash Z)\) _for_ \(0\leq j\leq\dim(Z)+1\)__
_are at least \(\mu_{j}(N;d_{1},d_{2},\cdots,d_{r})\)._
## 7. P=Dec(F) as functors
In this section, we shall upgrade Theorem 2.1.1 to an equivalence of functors. We shall make use of the results proved in the appendix, and we begin by recalling them.
### Summary of notations and results from Appendix A
As before we fix an algebraically closed field \(k\) and let \(X/k\) be a separated scheme of finite type over \(k\).
1. There is a symmetric monoidal stable \(\infty\)-category \(\mathcal{D}_{\mathrm{cons}}(X)\) whose underlying homotopy category is \(D^{b}_{c}(X)\). It comes equipped with a perverse \(t\)-structure (see §A.2.1).
2. We denote by \(\mathcal{D}_{\mathrm{indcons}}(X)\), the ind-completion of \(\mathcal{D}_{\mathrm{cons}}(X)\). This is a symmetric monoidal presentable stable \(\infty\)-category (see §A.2.2). Moreover there is a unique \(t\)-structure on \(\mathcal{D}_{\mathrm{indcons}}(X)\) such that the inclusion \(\mathcal{D}_{\mathrm{cons}}(X)\subseteq\mathcal{D}_{\mathrm{indcons}}(X)\) is \(t\)-exact.
3. We denote by \(\mathcal{D}F_{\mathrm{indcons}}(X):=\mathrm{Func}(\mathbb{Z}^{\mathrm{op}},\mathcal{D}_{\mathrm{indcons}})\), the filtered category of \(\mathcal{D}_{\mathrm{indcons}}(X)\). This is also a symmetric monoidal presentable stable \(\infty\)-category. This comes equipped with functors \(\mathrm{Gr}^{i}\), \(\mathrm{ev}_{i}\) and \(\mathrm{K}^{\bullet}\to\mathrm{K}^{\bullet}(-\infty)\) with values in \(\mathcal{D}_{\mathrm{indcons}}\) (§A.2.2, (b), (c) and (d)).
4. The category \(\mathcal{D}F_{\mathrm{indcons}}(X)\) has a Beilinson \(t\)-structure (see Theorem A.3.1). An object \(\mathrm{K}^{\bullet}\) in \(\mathcal{D}F_{\mathrm{indcons}}(X)\) is in \(\mathcal{D}F_{\mathrm{indcons}}^{\leq 0}\) (resp. \(\mathcal{D}F_{\mathrm{indcons}}^{\geq 0}\)) if \(\mathrm{Gr}^{i}\,\mathrm{K}^{\bullet}\) (resp. \(\mathrm{K}^{i}\)) is in \(\mathcal{D}_{\mathrm{indcons}}^{\leq i}\) (resp. \(\mathcal{D}_{\mathrm{indcons}}^{\geq i}\)). We denote the truncations for this \(t\)-structure by \(({}^{B}\tau_{\leq 0},{}^{B}\tau_{\geq 0})\).
5. There is a functor \(\mathrm{Dec}\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to\mathcal{D}F_{\mathrm{ indcons}}(X)\) which on objects is given by \(\mathrm{Dec}(\mathrm{K}^{\bullet})^{i}=({}^{B}\tau_{\leq-i}\,\mathrm{K}^{ \bullet})(-\infty)\).
6. We let \(\mathcal{D}F_{\mathrm{cons}}(X)\)19 be the full subcategory of \(\mathcal{D}F_{\mathrm{indcons}}(X)\) consisting of objects \(\mathrm{K}^{\bullet}\) such that Footnote 19: Objects in this category are to be thought of as finite filtered constructible complexes. (a) \(\mathrm{K}^{\bullet}\) is complete, that is \(\lim\mathrm{K}^{i}=0\), and there exists an \(N\) such that \(\mathrm{Gr}^{i}(\mathrm{K}^{\bullet})=0\) for \(|i|>N\). (b) \(\mathrm{Gr}^{i}(\mathrm{K}^{\bullet})\) belongs to the full subcategory \(\mathcal{D}_{cons}(X)\) of \(\mathcal{D}_{\mathrm{indcons}}(X)\).
* By Proposition A.5.1, \(\mathcal{D}F_{\mathrm{cons}}(X)\) is a symmetric monoidal stable \(\infty\)-category and the Beilinson \(t\)-structure from \(\mathcal{D}_{\mathrm{indcons}}(X)\) restricts to one on \(\mathcal{D}F_{\mathrm{cons}}(X)\).
* Moreover the functor \(\mathrm{Dec}\) from (e) above, preserves the subcategory \(\mathcal{D}F_{\mathrm{cons}}(X)\) of \(\mathcal{D}_{\mathrm{indcons}}(X)\) (see Proposition A.5.1, (b)).
### Revisiting Theorem 2.1.1
We recall the set-up of Proposition 4.2.2. We have a base scheme \(S/k\), a locally free sheaf \(\mathscr{E}\) on \(S\) of rank \(N+1\). We denote by \(\mathbb{P}\) (resp. \(\mathrm{Fl}\)) the associated projective bundle (resp. flag bundle).
We also defined functors20
Footnote 20: We continue using the same notation for the functor induced between the corresponding \(\infty\)-categories.
\[\mathrm{F}^{r},\mathrm{Gr}_{\mathrm{F}}^{r}\colon\mathcal{D}_{\mathrm{cons}}( \mathbb{P})\to\mathcal{D}_{\mathrm{cons}}(\mathrm{Fl}).\]
We can put the \(\mathrm{F}^{r}\)'s together and view them as a functor
\[\mathrm{F}\colon\mathcal{D}_{\mathrm{cons}}(\mathbb{P})\to\mathcal{D}F_{ \mathrm{cons}}(\mathrm{Fl}),\]
such that for any sheaf \(\mathrm{K}\), \(\mathrm{ev}_{r}(\mathrm{F}(\mathrm{K}))=\mathrm{F}^{r}(\mathrm{K})\) and \(\mathrm{Gr}^{r}(\mathrm{F}(\mathrm{K}))=\mathrm{Gr}_{\mathrm{F}}^{r}(\mathrm{ K})\). Thus using Proposition A.5.1, (a) we may restate Proposition 4.2.2 as follows. Let \(\phi\colon Z\to\mathbb{P}\) be a quasi-finite morphism.
**Proposition 7.2.1**.: _The functor \(\mathrm{F}\circ\phi_{*}\colon\mathcal{D}_{\mathrm{cons}}(Z)\to\mathcal{D}F_{ \mathrm{cons}}(\mathrm{Fl})\) is left \(t\)-exact. Moreover, if \(Z/S\) is affine then it is \(t\)-exact._
In the rest of this section, we assume \(\phi\colon Z\to\mathbb{P}\) is quasi-finite with \(Z/S\) affine. In particular Proposition 7.2.1 implies that there is an equivalence of functors
\[\mathrm{F}\circ\phi_{*}\circ{}^{p}\tau_{\leqslant-r}\simeq{}^{B}\tau_{ \leqslant-r}\circ\mathrm{F}\circ\phi_{*}. \tag{7.1}\]
Let \(\overline{\pi}\) (resp. \(\overline{\pi}^{\vee}\)) denote the projection from \(\mathbb{P}\times_{S}\mathrm{Fl}\) to \(\mathbb{P}\) (resp. \(\mathrm{Fl}\)). We now define a functor
\[\mathrm{P}\colon\mathcal{D}_{\mathrm{cons}}(\mathbb{P})\to\mathcal{D}F_{ \mathrm{cons}}(\mathrm{Fl}),\]
such that for any sheaf \(\mathrm{K}\), \(\mathrm{ev}_{r}(\mathrm{P}(\mathrm{K})):=\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger p}\tau_{\leqslant-r}(\mathrm{K})\), with the obvious maps between them. Hence \(\mathrm{Gr}^{r}(\mathrm{P}(\mathrm{K}))=\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger}\,{}^{p}\mathcal{H}^{-r}(\mathrm{K})[r]\).
Combining Proposition 7.2.1, Equation (7.1) and the definition of the operator \(\mathrm{Dec}\) (see §7.1, (e) above) we have the following result.
**Theorem 7.2.2**.: _There is a natural equivalence of functors_
\[\mathrm{P}\circ\phi_{*}\simeq\mathrm{Dec}(\mathrm{F}\circ\phi_{*}).\]
An immediate corollary to Theorem 7.2.2 and Corollary A.4.3, (a) is the following comparison of associated spectral sequences [LurHA, Proposition 1.2.2.14] with values in perverse sheaves on \(\mathrm{Fl}\).
**Corollary 7.2.3**.: _For any sheaf \(\mathrm{K}\) on \(Z\), there is a natural (in \(\mathrm{K}\)) isomorphism21 between the constant perverse spectral sequence_
Footnote 21: The isomorphism of the spectral sequences could also have been obtained using Postnikov systems at the level of derived categories, a detour to the \((\infty,1)\)-world is not necessary.
\[E_{1}^{i,j}={}^{p}\mathcal{H}^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{ \dagger p}\mathcal{H}^{-j}(\phi_{*}\,\mathrm{K})[j])\implies{}^{p}\mathcal{H }^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger}\phi_{*}\,\mathrm{K})\]
_and the décalée of the universal flag filtration spectral sequence_
\[E_{1}^{i,j}={}^{p}\mathcal{H}^{i+j}(\mathrm{Gr}_{\mathrm{F}}^{-j}\phi_{*}\,\mathrm{K}[j])\implies{}^{p}\mathcal{H}^{i+j}(\overline{\pi}_{*}^{\vee}\overline{\pi}^{\dagger}\phi_{*}\,\mathrm{K}).\]
## Appendix A Beilinson \(t\)-structure on \(\mathcal{D}F_{\mathrm{cons}}(X)\)
Throughout this appendix, we work over an algebraically closed field \(k\) and a separated scheme \(X/k\) of finite type. We also fix a coefficient ring \(\Lambda\), a finite ring with torsion invertible in \(k\) or an algebraic extension \(E/\mathbb{Q}_{\ell}\).
By \(D_{c}^{b}(X)\) we shall mean the bounded constructible derived category of étale sheaves on \(X\) with coefficients in \(\Lambda\). This comes equipped with the standard [1, 1.1.2, (e)] and perverse \(t\)-structures [1, §4.0].
### A first attempt
Our aim in this appendix is to construct a filtered version of \(D_{c}^{b}(X)\) which meets the following three conditions
1. It has a lift of the perverse \(t\)-structure from \(D_{c}^{b}(X)\). Let us call the truncation maps in this \(t\)-structure \(({}^{B}\tau_{\leq 0},\,{}^{B}\tau_{\geq 0})\).
2. Objects in the filtered category should correspond to diagrams \[\mathrm{K}^{\bullet}:=\{\cdots\to\mathrm{K}^{i}\to\mathrm{K}^{i-1}\to\mathrm{K }^{i-2}\cdots\},\] where \(\mathrm{K}^{i}\)'s are sheaves, thought of as the \(i^{\mathrm{th}}\) level in the filtration.
3. This category comes equipped with an endofunctor \(\mathrm{Dec}\), defined as follows \[\mathrm{K}^{\bullet}\to\mathrm{Dec}(\mathrm{K}^{\bullet})^{i}:={}^{B}\tau_{\leq-i}(\mathrm{K}^{\bullet})(-\infty)\] meant to mimic Deligne's décalée [1, 1.3.3].
Successful execution of these conditions would then imply \(\mathrm{P}\)=\(\mathrm{Dec}(\mathrm{F})\) at the level of filtered complexes as an immediate corollary to the \(t\)-exactness of a Brylinski-Radon transform.
As a first attempt one could begin with the filtered derived category \(DF_{c}^{b}(X)\)[1, 1.1.2, (d)], whose construction is already quite delicate for \(\Lambda=\mathbb{Q}_{\ell}\). One remedy would be to use the pro-étale site [1] together with results of Schapira-Schneiders [1], which would naturally lead us to consider the derived category of the abelian category of covariant functors, \(\mathrm{Func}(\mathbb{Z}^{\mathrm{op}},\mathrm{Sh}(X_{\mathrm{pro\acute{e}t}}))\)22. Together with Beilinson's construction of f-categories [1, Definition A.1], this would (almost) meet our conditions (a) and (b)23. This leaves us with (c). There are at least a couple of issues here
1. Already for \(X=\operatorname{Spec}(k)\), \(D_{c}^{b}(X)\) does not have arbitrary (even sequential) colimits. A partial remedy for this would be restricting the filtrations we allow in (b) above.
2. Even with the restricted class of filtrations, it is unclear in what sense \(\operatorname{Dec}\) is a functor. \(\operatorname{Dec}(\operatorname{K}^{\bullet})\) is defined using the \(t\)-structure on the derived category (which in turn is a lift of the perverse \(t\)-structure again defined on \(D_{c}^{b}(X)\)).
#### A.1.1. Strategy of proof
To remedy the above issues we mimic the approach of [1, §5.1], where this was carried out for \(D(R)\), the stable \(\infty\)-category of \(R\)-modules. In particular, we would like an analogue of their Theorem 5.4, which, as the authors point out, is a transport of Beilinson's construction of f-categories to \(D(R)\).
In the rest of the appendix we shall work with \(\infty\)-categories [1, §1.1.2] and in particular stable \(\infty\)-categories [1, Definition 1.1.9]. We will aim to construct a symmetric monoidal stable \(\infty\)-category denoted by \(\mathcal{D}F_{\operatorname{indcons}}(X)\) which will satisfy (a)-(c) from §A.1. Our desired category \(\mathcal{D}F_{\operatorname{cons}}(X)\) will be a full subcategory of \(\operatorname{Func}(\mathbb{Z}^{\operatorname{op}},\mathcal{D}_{\operatorname{indcons}}(X))\)24, where \(\mathcal{D}_{\operatorname{indcons}}(X)\) is itself a symmetric monoidal _presentable_ stable \(\infty\)-category.
To achieve our goal of constructing a category that satisfies (a)-(c) above we need to
Footnote 24: In the rest of the appendix unless specified ordinary categories are to be thought of as \(\infty\)-categories [1, Proposition 1.1.2.2].
1. define the category \(\mathcal{D}_{\operatorname{indcons}}(X)\) and equip it with a perverse \(t\)-structure25, Footnote 25: A \(t\)-structure on a stable \(\infty\)-category is by definition a \(t\)-structure on its homotopy category, which is a triangulated category (see [1, Definition 1.2.1.4] ).
2. lift this \(t\)-structure to the functor category \(\operatorname{Func}(\mathbb{Z}^{\operatorname{op}},\mathcal{D}_{ \operatorname{indcons}}(X))\) and,
3. show that the \(t\)-structure restricts to one on \(\mathcal{D}F_{\operatorname{cons}}(X)\).
### The perverse \(t\)-structure on \(\mathcal{D}_{indcons}(X)\)
In this section, we will deal with (i) above, and lift the \(t\)-structure to \(\mathcal{D}F_{\operatorname{indcons}}(X)\) in the next section. As before \(X/k\) is a separated scheme of finite type and \(\Lambda\) is a coefficient ring.
#### A.2.1. The category \(\mathcal{D}_{cons}(X)\)
In [13, Definition 1.1] the authors construct a symmetric, monoidal and stable \(\infty\)-category \(\mathcal{D}_{\operatorname{cons}}(X)\) with the following properties.
1. Their underlying homotopy categories agree with the usual derived category \(D_{c}^{b}(X)\)[13, Proposition 7.1, Theorem 7.7, Lemma 7.9]. In particular the categories \(\mathcal{D}_{\operatorname{cons}}(X)\) come equipped with a perverse \(t\)-structure (\({}^{p}\mathcal{D}_{\operatorname{cons}}^{\leq 0},{}^{p}\mathcal{D}_{ \operatorname{cons}}^{\geq 0}\)).
2. The usual six-functor formalism extends to the assignment \(X\to\mathcal{D}_{\operatorname{cons}}(X)\)[13, Remark 7.10].
#### A.2.2. The category \(\operatorname{Ind}(\mathcal{D}_{cons}(X))\)
Let \(\operatorname{Ind}(\mathcal{D}_{cons}(X))\) be the \(\operatorname{Ind}\)-completion [1, Lemma 5.3.2.9] of \(\mathcal{D}_{cons}(X)\).
1. This category is a symmetric monoidal presentable stable \(\infty\)-category. We shall denote it by \(\mathcal{D}_{\mathrm{indcons}}(X)\) (see [13, Corollary 8.3] for why this notation is reasonable).
2. \(\mathcal{D}_{\operatorname{cons}}(X)\) is a full subcategory of \(\mathcal{D}_{\operatorname{indcons}}(X)\) and consists precisely of the compact objects [13, Proposition 8.2].
3. The perverse \(t\)-structure on \(\mathcal{D}_{\mathrm{cons}}(X)\) extends uniquely to one on \(\mathcal{D}_{\mathrm{indcons}}(X)\), denoted by \(({}^{p}\mathcal{D}_{\mathrm{indcons}}^{\leq 0},{}^{p}\mathcal{D}_{\mathrm{indcons}}^{\geq 0})\)[1, Lemma 6.1.2, (a)]. We call this the perverse \(t\)-structure on \(\mathcal{D}_{\mathrm{indcons}}(X)\).
4. Moreover \({}^{p}\mathcal{D}_{\mathrm{indcons}}^{\leq 0}=\mathrm{Ind}({}^{p}\mathcal{D}_{\mathrm{cons}}^{\leq 0})\) and \({}^{p}\mathcal{D}_{\mathrm{indcons}}^{\geq 0}=\mathrm{Ind}({}^{p}\mathcal{D}_{\mathrm{cons}}^{\geq 0})\). Further \({}^{p}\mathcal{D}_{\mathrm{indcons}}^{\leq 0}\) is closed under colimits and \({}^{p}\mathcal{D}_{\mathrm{indcons}}^{\geq 0}\) is closed under limits [1, Lemma 6.1.2, (a), (b)].
### The category \(\mathcal{D}F_{\mathrm{indcons}}(X)\)
We continue using the notation from previous sections. We define
\[\mathcal{D}F_{\mathrm{indcons}}(X):=\mathrm{Func}(\mathbb{Z}^{\mathrm{op}}, \mathcal{D}_{\mathrm{indcons}}(X)).\]
We shall denote an object in \(\mathcal{D}F_{\mathrm{indcons}}(X)\) by \(\mathrm{K}^{\bullet}\).
#### A.3.1. Some basic operations on \(\mathcal{D}F_{\mathrm{indcons}}(X)\)
1. \(\mathcal{D}F_{\mathrm{indcons}}(X)\) is a symmetric monoidal presentable stable \(\infty\)-category [1, §2.23].
2. For any integer \(i\in\mathbb{Z}\), there is a functor \(\mathrm{ev}_{i}\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to\mathcal{D}_{ \mathrm{indcons}}(X)\), which on objects will be denoted by \(\mathrm{K}^{\bullet}\to\mathrm{K}^{i}\).
3. For any integer \(i\in\mathbb{Z}\), there is a functor \(\mathrm{Gr}^{i}\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to\mathcal{D}_{ \mathrm{indcons}}(X)\), which on objects is given by \(\mathrm{K}^{\bullet}\to\mathrm{cofib}(\mathrm{K}^{i+1}\to\mathrm{K}^{i})\).
4. We also have a functor sending \(\mathrm{K}\to\mathrm{K}(-\infty):=\mathrm{colim}\,\mathrm{K}^{i}\).
5. An object \(\mathrm{K}^{\bullet}\) is said to be _complete_ if \(\lim_{i}\mathrm{K}^{i}=0\).
6. The \(\mathrm{ev}_{i}\) and \(\mathrm{Gr}^{i}\) functors commute with all limits and colimits [1, §2]. In particular, they are exact.
7. Finally we note that \(\mathrm{ev}_{i}\) has a left adjoint, denoted by \(\mathrm{ins}_{i}\), which sends an object \(\mathrm{L}\to\mathrm{ins}_{i}(\mathrm{L})^{\bullet}\) with \((\mathrm{ins}_{i}(\mathrm{L}))^{j}=\mathrm{L}\) for \(j\leq i\) and \(0\) otherwise.
8. The functors \(\{\mathrm{Gr}^{i}\}_{i\in\mathbb{Z}}\) form a conservative family of functors on complete objects [1, §2.7].
Now define full subcategories \(({}^{p}\mathcal{D}F_{\mathrm{indcons}}^{\leq 0},{}^{p}\mathcal{D}F_{ \mathrm{indcons}}^{\geq 0})\) as follows
\[{}^{p}\mathcal{D}F_{\mathrm{indcons}}^{\leq 0}:=\{\mathrm{K}^{\bullet}; \mathrm{Gr}^{i}(\mathrm{K}^{\bullet})\in{}^{p}\mathcal{D}_{\mathrm{indcons}}^{ \leq i}\},\]
and dually
\[{}^{p}\mathcal{D}F_{\mathrm{indcons}}^{\geq 0}:=\{\mathrm{K}^{\bullet}; \mathrm{K}^{i}\in{}^{p}\mathcal{D}_{\mathrm{indcons}}^{\geq i}\}.\]
In the next section, we shall prove the following result mimicking the proof in [1, Theorem 5.3], where it is proved for the filtered \(\infty\)-category of \(R\)-modules.
**Theorem A.3.1**.: _(Beilinson) The subcategories \(({}^{p}\mathcal{D}F_{\mathrm{indcons}}^{\leq 0},{}^{p}\mathcal{D}F_{\mathrm{indcons}}^{ \geq 0})\) define a \(t\)-structure on \(\mathcal{D}F_{\mathrm{indcons}}(X)\)._
### Proof of Theorem A.3.1
Proof.: We begin by noting that \({}^{p}\mathcal{D}^{\leq i}_{\mathrm{indcons}}\) is stable under colimits (see §A.2.2, (d)). Moreover \(\mathrm{Gr}^{i}\) commutes with colimits (see §A.3.1, (c)), and thus by definition, \({}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\) is stable under colimits. By presentability26 the inclusion \({}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\subset\mathcal{D}F_{\mathrm{indcons}}(X)\) has a right adjoint, which we denote by \(R\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to{}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\). In particular given any object \(\mathrm{K}^{\bullet}\) in \(\mathcal{D}F_{\mathrm{indcons}}(X)\), we have a fiber sequence
Footnote 26: This step prevented us from directly using the more obvious category \(\mathcal{D}F_{\mathrm{cons}}(X)\).
\[R(\mathrm{K}^{\bullet})\to\mathrm{K}^{\bullet}\to Q(\mathrm{K}^{\bullet}).\] (A.1)
Thus to complete the proof of the theorem it suffices to show that \(Q(\mathrm{K}^{\bullet})\in{}^{p}\mathcal{D}F^{>0}_{\mathrm{indcons}}\), or equivalently that the space \(\mathrm{Map}_{\mathcal{D}_{\mathrm{indcons}}(X)}(\mathrm{K}^{\prime},Q(\mathrm{K}^{\bullet})^{i})\) is contractible for \(\mathrm{K}^{\prime}\in{}^{p}\mathcal{D}^{\leq i}_{\mathrm{indcons}}\). By adjunction (see §A.3.1, (e)) we have
\[\mathrm{Map}_{\mathcal{D}_{\mathrm{indcons}}(X)}(\mathrm{K}^{\prime},Q( \mathrm{K}^{\bullet})^{i})=\mathrm{Map}_{\mathcal{D}F_{\mathrm{indcons}}(X)} (\mathrm{ins}_{i}\,\mathrm{K}^{\prime},Q(\mathrm{K}^{\bullet})),\]
and thus it is enough to show that \(\mathrm{Map}_{\mathcal{D}F_{\mathrm{indcons}}(X)}(\mathrm{ins}_{i}\,\mathrm{K} ^{\prime},Q(\mathrm{K}^{\bullet}))\) is contractible for any \(\mathrm{K}^{\prime}\in{}^{p}\mathcal{D}^{\leq i}_{\mathrm{indcons}}\). Consider the diagram below obtained as a pullback of the fibre sequence in Diagram (A.1) along the map \(\eta\in\mathrm{Map}_{\mathcal{D}F_{\mathrm{indcons}}(X)}(\mathrm{ins}_{i}\, \mathrm{K}^{\prime},Q(\mathrm{K}^{\bullet}))\)
(A.2)
Recall that \(\mathrm{K}^{\prime}\in{}^{p}\mathcal{D}^{\leq i}_{\mathrm{indcons}}\), hence \(\mathrm{ins}_{i}(\mathrm{K}^{\prime})\) is in \({}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\) by definition of the functor \(\mathrm{ins}_{i}(\mathrm{K}^{\prime})\). This implies \(\mathrm{L}^{\bullet}\) is also in \({}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\) (since \(R(\mathrm{K}^{\bullet})\in{}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\) by construction and \({}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\) is closed under extensions). Since \(R(\mathrm{K}^{\bullet})\to\mathrm{K}^{\bullet}\) is the counit of the adjunction and \(\mathrm{L}^{\bullet}\in{}^{p}\mathcal{D}F^{\leq 0}_{\mathrm{indcons}}\), there exists a dotted arrow in Diagram (A.2) which provides a splitting for the top row. This implies that \(\eta\) is \(0\) (in the homotopy category) as required.
**Notations A.4.1**.: We shall denote the truncations for the \(t\)-structure induced on \(\mathcal{D}F_{\mathrm{indcons}}(X)\) via Theorem A.3.1 by \(({}^{B}\tau_{\leq 0},{}^{B}\tau_{\geq 0})\).
**Lemma A.4.2**.: _The functors \(\mathrm{Gr}^{i}[i]\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to\mathcal{D}_{ \mathrm{indcons}}(X)\) are \(t\)-exact for the Beilinson \(t\)-structure on the source and perverse \(t\)-structure on the target._
Proof.: That \(\mathrm{Gr}^{i}[i]\) is right \(t\)-exact is clear from the definition. For the left \(t\)-exactness note that for any \(\mathrm{K}^{\bullet}\) there is a fibre sequence \(\mathrm{K}^{i}[i]\to\mathrm{Gr}^{i}\,\mathrm{K}^{\bullet}[i]\to\mathrm{K}^{i+1 }[i+1]\), and hence the left \(t\)-exactness follows from the fact that \(\mathcal{D}^{\geq 0}_{\mathrm{indcons}}\) is closed under extensions.
**Corollary A.4.3**.: _We have the following corollary to Lemma A.4.2._
_(a) There is a natural isomorphism \(\mathrm{Gr}^{i}\circ{}^{B}\tau_{\leq j}\simeq{}^{p}\tau_{\leq i+j}\mathrm{Gr}^ {i}\) (See [1, Theorem 5.4, (2)])._
_(b) A complete object_ \(\mathrm{K}^{\bullet}\) _in_ \(\mathcal{D}F_{\mathrm{indcons}}(X)\) _is in the heart of the Beilinson_ \(t\)_-structure if and only if_ \(\mathrm{Gr}^{i}\,\mathrm{K}^{\bullet}[i]\) _belongs to the heart for all_ \(i\in\mathbb{Z}\)_._
Proof.: Statement (a) is immediate from \(t\)-exactness of \(\mathrm{Gr}^{i}[i]\) (Lemma A.4.2).
For (b) observe that the 'only if' direction is clear from Lemma A.4.2 and is valid without the completeness hypothesis. We now prove the 'if' direction of the corollary. For an object \(\mathrm{K}^{\bullet}\), if for all \(i\), \(\mathrm{Gr}^{i}\,\mathrm{K}^{\bullet}[i]\) belongs to the heart of the perverse \(t\)-structure on \(\mathcal{D}_{\mathrm{indcons}}(X)\), then \(\mathrm{K}^{\bullet}\in{}^{p}\mathcal{D}F_{\mathrm{indcons}}^{\leq 0}\). For the other inclusion observe that by completeness \(\mathrm{K}^{i}\simeq\lim_{j\geq i}\mathrm{cofib}(\mathrm{K}^{j}\to\mathrm{K}^{i})\), and the result follows from the stability of \(\mathcal{D}_{\mathrm{indcons}}^{\geq 0}\) under limits (see §A.2.2, (d)).
**Definition A.4.4**.: We define a functor \(\mathrm{Dec}\colon\mathcal{D}F_{\mathrm{indcons}}(X)\to\mathcal{D}F_{ \mathrm{indcons}}(X)\) which on objects is given by
\[\mathrm{Dec}(\mathrm{K}^{\bullet})^{i}=({}^{B}\tau_{\leq-i}\,\mathrm{K}^{ \bullet})(-\infty).\] (A.3)
### The category \(\mathcal{D}F_{\mathrm{cons}}(X)\)
Consider the following full subcategory \(\mathcal{D}F_{\mathrm{cons}}(X)\) of \(\mathcal{D}F_{\mathrm{indcons}}(X)\) consisting of objects \(\mathrm{K}^{\bullet}\) such that
(a) \(\mathrm{K}^{\bullet}\) is complete and there exists an \(N\) such that \(\mathrm{Gr}^{i}(\mathrm{K}^{\bullet})=0\) for \(|i|>N\).
(b) \(\mathrm{Gr}^{i}(\mathrm{K}^{\bullet})\) belongs to the full subcategory \(\mathcal{D}_{cons}(X)\) of \(\mathcal{D}_{\mathrm{indcons}}(X)\).
We have the following
**Proposition A.5.1**.: \(\mathcal{D}F_{\mathrm{cons}}(X)\) _is a symmetric monoidal stable \(\infty\)-category and the Beilinson \(t\)-structure (from \(\mathcal{D}F_{\mathrm{indcons}}(X)\)) restricts to one on \(\mathcal{D}F_{\mathrm{cons}}(X)\). Moreover_
1. _an object_ \(\mathrm{K}^{\bullet}\) _in_ \(\mathcal{D}F_{\mathrm{cons}}(X)\) _is in the heart of the Beilinson_ \(t\)_-structure iff_ \(\mathrm{Gr}^{i}[i]\,\mathrm{K}^{\bullet}\) _are perverse._
2. _The functor_ \(\mathrm{Dec}\) _(see Definition_ A.4.4_) preserves the subcategory_ \(\mathcal{D}F_{\mathrm{cons}}(X)\)_._
Proof.: Conditions (a) and (b) in the definition of \(\mathcal{D}F_{\mathrm{cons}}(X)\) together imply that, an object \(\mathrm{K}^{\bullet}\) in \(\mathcal{D}F_{\mathrm{indcons}}(X)\) belongs to \(\mathcal{D}F_{\mathrm{cons}}(X)\) iff \(\mathrm{K}^{i}\)'s are in \(\mathcal{D}_{\mathrm{cons}}(X)\).
Now since \(\mathcal{D}_{\mathrm{cons}}(X)\) is itself a symmetric monoidal subcategory of \(\mathcal{D}_{\mathrm{indcons}}(X)\), the symmetric monoidal part of the claim follows from the definition of the Day convolution structure [1, §2.23] on \(\mathcal{D}F_{\mathrm{indcons}}(X)\). Since the \(\mathrm{Gr}^{i}\) are exact functors, stability, and the fact that the Beilinson \(t\)-structure restricts to \(\mathcal{D}F_{\mathrm{cons}}(X)\), are clear. (a) follows from Corollary A.4.3, (a). Finally (b) follows from Corollary A.4.3, (b), since the inclusion of \(\mathcal{D}_{\mathrm{cons}}(X)\) into \(\mathcal{D}_{\mathrm{indcons}}(X)\) is \(t\)-exact for the perverse \(t\)-structure (see §A.2.2, (c)).
|
2309.05877 | Choosing restart strategy at partial knowledge of process statistics | Optimization of a random process by restart is a subject of active theoretical research in statistical physics and has long found practical application in computer science. Meanwhile, one of the key issues remains largely unsolved: when should we restart a process whose detailed statistics are unknown to ensure that our intervention will improve performance? Addressing this query, here we propose several constructive criteria for the effectiveness of various protocols of non-instantaneous restart in the mean completion time problem and in the success probability problem. Being expressed in terms of a small number of easily estimated statistical characteristics of the original process, these criteria allow an informed restart decision based on partial information. | Ilia Nikitin, Sergey Belan | 2023-09-11T23:57:44Z | http://arxiv.org/abs/2309.05877v3 | # Choosing restart strategy at partial knowledge of process statistics
###### Abstract
Optimization of a random process by restart is a subject of active theoretical research in statistical physics and has long found practical application in computer science. Meanwhile, one of the key issues remains largely unsolved: when should we restart a process whose detailed statistics are unknown to ensure that our intervention will improve performance? Addressing this query, here we propose several constructive criteria for the effectiveness of various protocols of non-instantaneous restart in the mean completion time problem and in the success probability problem. Being expressed in terms of a small number of easily estimated statistical characteristics of the original process, these criteria allow an informed restart decision based on partial information.
## I Introduction
Restart as a method of accelerating randomized tasks was first proposed in the early 90s in computer science. Namely, the authors of Refs. [1; 2] showed that applying a restart to a probabilistic algorithm whose completion time represents a highly fluctuating random variable leads to smaller tail probabilities and to a smaller expected completion time. Beyond computer science, optimization via restart is the subject of active research by the statistical physics community. The starting point for these studies was the work of Evans and Majumdar [3], who found that stochastic (Poisson) restart reduces the mean first-passage time of Brownian motion. Also, this area of research received an additional impetus for development due to the works [4; 5; 6] devoted to the kinetics of enzymatic reactions, where the restart corresponds to dissociation of the intermediate enzyme-substrate complex. Theoretical analysis presented in these studies demonstrated that an increase in the dissociation rate constant could potentially speed up enzymatic turnover.
Thus, essentially the same mathematical model arises in different research fields. Regardless of the specific context, one of the critical tasks is the following: how to find a restart strategy that is guaranteed to reduce the expected completion time of the stochastic process of interest? The general renewal formalism developed by Pal and Reuveni [7] allows one to predict whether the implementation of a particular restart protocol will be effective for a given stochastic process. Moreover, as proved in [2; 7; 8], a strictly regular (periodic) strategy, implying that the random process is restarted every \(\tau_{*}\) units of time, is universally optimal. The value of the optimal period \(\tau_{*}\) is problem-specific and can be determined once we know the probability density of the random completion time of the original process.
In practice, however, the statistics of the optimized process can be poorly specified or even completely unknown [2; 4; 8; 9; 10; 11; 12]. Trying to address this issue, the authors of [2] have developed a restart protocol, nowadays known as Luby strategy, which has the following remarkable property: regardless of the statistical details of the original process, the average completion time of this process subject to the Luby strategy exceeds the expectation of completion time provided by the optimal periodic protocol by no more than a logarithmic factor.
While ingenious and elegant, Luby's strategy suffers from two serious drawbacks. First, it only applies to processes with discrete completion times. In the case of continuous time, as was recently shown by Lorentz [8], such a universal strategy simply does not exist. Secondly, and even more importantly, even for discrete-time processes, applying the Luby strategy, we can only be sure that the resulting mean completion time will not be too bad compared to the optimal value achieved by the best periodic strategy. In other words, there is no guarantee that Luby strategy will not degrade performance as compared to the restart-free case [8].
Thus, two extremes are well understood: if the complete statistics of the process duration is available, then it is straightforward to find the optimal restart strategy, while in the absence of information about the process statistical properties it would be better not to use restart at all, since this is the only way to avoid an unintentional decrease in performance. In the recent paper [13], which addresses an intermediate scenario of partially known statistics, Eliazar and Reuveni formulated _constructive criteria_ for the effectiveness of restarts, expressed through simple statistical characteristics of a randomized task. Unlike previously known _existence results_, constructive criteria serve not just as indicators of the restart effectiveness, but also offer a strategy that is guaranteed to reduce the average completion time.
The results presented in [13], however, are obtained under the assumption of instantaneous restart events, while in real-life settings a restart is accompanied by some time delays. Say, in the context of single molecule
enzyme kinetics, some time is required for an enzyme that unbinds from the substrate to find a new one in the surrounding solution [4; 6; 14]. Similarly, the restart of a computer program typically involves a time overhead. Also, models with non-instantaneous restarts provide more realistic pictures of colloidal particle diffusion with resetting [15]. To the best of our knowledge, the existing literature lacks constructive criteria of restart efficiency for models with non-momentary restarts. In addition, the criteria proposed in [13] refer only to the case of a periodic restart. Constructive criteria, if any, for other types of restart strategies remain unknown.
Finally, let us note that the potential of restart is not limited to optimization of the mean completion time. In particular, implementation of restart can also improve the probability of getting a desired outcome when a process has several alternative completion scenarios [16]. Constructive criteria for the problem of optimizing the splitting probabilities have not yet been formulated, as far as we know.
In this paper, we fill the aforementioned gaps by constructing a set of constructive efficiency criteria for regular and stochastic non-instantaneous restarts in both the mean completion time problem and the success probability problem.
## II Optimization of the mean completion time
Consider a stochastic process whose random duration time \(T\) has a probability density \(P(T)\). The restart protocol \(\mathcal{R}\) is characterized by a (possibly infinite) sequence of inter-restart time intervals \(\tau_{1},\tau_{2},\ldots\). If the process is completed prior to the first restart event, the story ends there. Otherwise, the current attempt is aborted, and the process starts from scratch. Similarly, the next attempt may either complete prior to the second restart or not, with the same rules. This procedure repeats until the process finally reaches completion. We will also assume that initialization of the process and each restart are accompanied by some random time delay \(t\) which is independent of \(T\).
In the absence of restart, the random waiting time for the process completion is given by the sum \(T+t\). The restart protocol is considered effective if \(\langle T_{\mathcal{R}}\rangle<\langle T\rangle+\langle t\rangle\), where \(T_{\mathcal{R}}\) is a random waiting time of the process completion in the presence of the protocol \(\mathcal{R}\), and angular brackets denote averaging over the statistics of the original process and possibly over the statistics of the inter-restarts intervals (in the case of a stochastic protocol). It is convenient to define dimensionless effectiveness of restart as
\[\eta_{\mathcal{R}}=1-\frac{\langle T_{\mathcal{R}}\rangle}{\langle T\rangle+ \langle t\rangle}. \tag{1}\]
Clearly, the effective protocols obey \(0<\eta_{\mathcal{R}}\leq 1\).
In the simplest case of a strictly regular schedule, implying that the restart events are equally spaced in time, i.e., \(\tau_{k}=\tau\) for all \(k=1,2,...\), the expected completion time can be expressed as
\[\langle T_{\tau}\rangle=\frac{\langle T\rangle+\tau-\langle|T-\tau|\rangle+2 \langle t\rangle}{2Pr[T\leq\tau]}, \tag{2}\]
where \(\langle|T-\tau|\rangle\) is the mean absolute deviation (MAD) of the random variable \(T\) from the value of \(\tau\). Equation (2) was first derived in Ref. [13] in the particular case of a zero time penalty \(t=0\).
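As an illustration of how Eq. (2) can be used in practice, here is a minimal sketch (our own addition, not part of the original derivation) that evaluates the right-hand side of Eq. (2) from i.i.d. samples of \(T\), replacing the exact expectations by empirical averages; the function name and the plug-in estimators are assumptions of this sketch:

```python
import numpy as np

def mean_time_periodic(T_samples, tau, mean_penalty=0.0):
    """Plug-in estimate of <T_tau> via Eq. (2) from i.i.d. samples of T."""
    T = np.asarray(T_samples, dtype=float)
    p = np.mean(T <= tau)           # empirical Pr[T <= tau]
    if p == 0.0:
        return np.inf               # restart always preempts completion
    mad = np.mean(np.abs(T - tau))  # empirical <|T - tau|>
    return (T.mean() + tau - mad + 2.0 * mean_penalty) / (2.0 * p)
```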
Another important scenario is the Poisson restart, where inter-restart intervals are mutually independent and identically distributed with exponential probability density \(\rho(\tau)=re^{-r\tau}\). Here \(r\) represents the restart rate. The average completion time in the presence of a Poisson restart is given by (see [6])
\[\langle T_{r}\rangle=\frac{1-\tilde{P}(r)+r\langle t\rangle}{r\tilde{P}(r)}, \tag{3}\]
where \(\tilde{P}(r)=\int_{0}^{\infty}dTP(T)e^{-rT}\) denotes the Laplace transform of the probability density function \(P(T)\).
In addition, we will consider the case of a gamma-protocol for which the random intervals between restarts are independently sampled from the Gamma distribution \(\rho(\tau)=\frac{\beta^{k}}{\Gamma(k)}\tau^{k-1}e^{-\beta\tau}\) with rate parameter \(\beta\) and shape parameter \(k\). The expected completion time of a process under gamma-restart with \(k=2\) is given by
\[\langle T_{\beta}\rangle=\frac{\beta\langle t\rangle+2-2\tilde{P}(\beta)+ \beta\partial_{\beta}\tilde{P}(\beta)}{\beta\tilde{P}(\beta)-\beta^{2} \partial_{\beta}\tilde{P}(\beta)}. \tag{4}\]
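Analogously, Eqs. (3) and (4) can be evaluated numerically by estimating the Laplace transform \(\tilde{P}(s)\) with the empirical average of \(e^{-sT}\); a minimal sketch with hypothetical function names, assuming i.i.d. samples of \(T\):

```python
import numpy as np

def mean_time_poisson(T_samples, r, mean_penalty=0.0):
    """Plug-in evaluation of Eq. (3): Poisson restart at rate r."""
    T = np.asarray(T_samples, dtype=float)
    P = np.mean(np.exp(-r * T))                 # empirical P~(r)
    return (1.0 - P + r * mean_penalty) / (r * P)

def mean_time_gamma2(T_samples, beta, mean_penalty=0.0):
    """Plug-in evaluation of Eq. (4): gamma restart with shape k = 2.
    dP~/dbeta is computed exactly as -<T exp(-beta T)>."""
    T = np.asarray(T_samples, dtype=float)
    P = np.mean(np.exp(-beta * T))
    dP = -np.mean(T * np.exp(-beta * T))
    return (beta * mean_penalty + 2.0 - 2.0 * P + beta * dP) / (beta * P - beta**2 * dP)
```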
Once the full statistics of the original process, encoded in the probability density \(P(T)\), are known, one can use Eqs. (2)-(4) to determine the values (if any) of the control parameter (period \(\tau\) or rates \(r\), \(\beta\)) for which the corresponding strategy is advantageous. Moreover, the most efficient strategy is always periodic, and the optimal period can be found by minimizing the right side of Eq. (2) with respect to \(\tau\). Our goal is to formulate constructive criteria of restart efficiency, allowing one to choose a guaranteed beneficial restart protocol without knowledge of the full process statistics. Several such criteria were proposed in [13] for the particular case of periodic restart with zero penalty \(t=0\).
Let us outline the idea underlying our derivation of the desired criteria. First of all, using various probabilistic inequalities, we obtain an upper bound \(\langle T_{\mathcal{R}}\rangle\leq\mathcal{T}\) for the mean completion time of the process under restart protocol \(\mathcal{R}\), where the time scale \(\mathcal{T}\) is expressed through the mean restart period (\(\tau\), \(r\) or \(\beta\) depending on the protocol) and some simple statistical characteristics of the original process (e.g., statistical moments, median value, MADs, etc.). Then the solution of the inequality
\(\mathcal{T}\leq\langle T\rangle+\langle t\rangle\) with respect to the mean period defines a set (possibly empty) of efficient values for the period. Further, by minimizing \(\mathcal{T}\) on this set, one can find the period which provides the maximum guaranteed efficiency. Next, we demonstrate concrete implementations of this simple idea.
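A generic numerical version of this recipe might look as follows (a sketch; the grid search and the function names are our own assumptions):

```python
import numpy as np

def best_guaranteed_parameter(bound_fn, baseline, grid):
    """Among parameter values whose upper bound on the mean completion
    time beats the restart-free baseline <T> + <t>, return the one
    minimizing the bound; return None if the effective set is empty."""
    grid = np.asarray(grid, dtype=float)
    bounds = np.array([bound_fn(theta) for theta in grid])
    effective = bounds < baseline
    if not effective.any():
        return None
    return grid[effective][np.argmin(bounds[effective])]
```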
### Periodic restart
For any random variable \(T\) one has \(\langle|T-\tau|\rangle\geq\langle|T-m|\rangle\), where \(m\) is the median value of \(T\). Moreover, the equality is achieved for \(\tau=m\). So, using Eq. (2) we get the following estimate
\[\langle T_{\tau}\rangle\leq\frac{\langle T\rangle+\tau-\langle|T-m|\rangle+2 \langle t\rangle}{2Pr[T\leq\tau]}. \tag{5}\]
It follows from Eq. (5) that if the condition
\[m+\langle t\rangle<\langle|T-m|\rangle, \tag{6}\]
is satisfied, then the inequality \(\langle T_{\tau}\rangle\leq\langle T\rangle+\tau-\langle|T-m|\rangle+2\langle t \rangle<\langle T\rangle+\langle t\rangle\) holds for any period belonging to the interval
\[m\leq\tau<\langle|T-m|\rangle-\langle t\rangle. \tag{7}\]
Moreover, the choice \(\tau_{0}=m\) is optimal in the sense that regular restart with period \(\tau_{0}\) provides the highest guaranteed efficiency which is given by
\[\eta_{1}=\frac{\langle|T-m|\rangle-m-\langle t\rangle}{\langle T\rangle+ \langle t\rangle}. \tag{8}\]
The described criterion generalizes the result presented in [13] to the case of a non-zero duration of restart events.
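A minimal sketch of this criterion as a decision rule (assuming i.i.d. samples of \(T\) and a known mean penalty \(\langle t\rangle\); the function name is hypothetical):

```python
import numpy as np

def mad_median_criterion(T_samples, mean_penalty=0.0):
    """Check criterion (6); if it holds, return the guaranteed-effective
    period tau0 = m and the efficiency bound of Eq. (8), else None."""
    T = np.asarray(T_samples, dtype=float)
    m = np.median(T)
    mad = np.mean(np.abs(T - m))      # MAD about the median
    t = mean_penalty
    if m + t < mad:                   # condition (6)
        return m, (mad - m - t) / (T.mean() + t)   # tau0 and eta_1
    return None
```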
Further, as shown in [17] for the particular case \(t=0\), the inequality \(2m<\langle T\rangle\) represents a sufficient condition for the existence of an effective periodic protocol. Here we turn this existence result into a constructive criterion that holds in the presence of a restart penalty. By virtue of Jensen's inequality, one has \(\langle|T-\tau|\rangle\geq\langle T\rangle-\tau\), and, therefore, Eq. (2) allows us to write the following estimate
\[\langle T_{\tau}\rangle\leq\frac{\tau+\langle t\rangle}{Pr[T\leq\tau]}. \tag{9}\]
From Eq. (9) we see that if the condition
\[m<\frac{1}{2}\langle T\rangle-\frac{1}{2}\langle t\rangle \tag{10}\]
is met, then regular restart with the period belonging to the interval
\[m\leq\tau<\frac{1}{2}\langle T\rangle-\frac{1}{2}\langle t\rangle, \tag{11}\]
reduces the average completion time, since in this case one gets \(\langle T_{\tau}\rangle\leq 2\tau+2\langle t\rangle<\langle T\rangle+\langle t\rangle\). The guaranteed efficiency is maximal at \(\tau_{0}=m\) and is estimated as
\[\eta_{2}\geq 1-\frac{2(m+\langle t\rangle)}{\langle T\rangle+\langle t\rangle}. \tag{12}\]
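The second criterion admits an analogous sketch (again assuming sample-based estimates of \(m\) and \(\langle T\rangle\); the function name is our own):

```python
import numpy as np

def half_mean_criterion(T_samples, mean_penalty=0.0):
    """Check criterion (10); if it holds, return tau0 = m together with
    the guaranteed efficiency bound of Eq. (12), else None."""
    T = np.asarray(T_samples, dtype=float)
    m, t = np.median(T), mean_penalty
    if 2.0 * m < T.mean() - t:        # condition (10)
        return m, 1.0 - 2.0 * (m + t) / (T.mean() + t)
    return None
```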
### Poisson restart
The periodic strategy is important due to its optimality property, which has already been discussed above. Namely, if one finds a value \(\tau_{*}\geq 0\) (possibly \(\tau_{*}=+\infty\)) such that \(\langle T_{\tau_{*}}\rangle\leq\langle T_{\tau}\rangle\) for any \(\tau\geq 0\), then \(\langle T_{\tau_{*}}\rangle\leq\langle T_{\mathcal{R}}\rangle\) for all \(\mathcal{R}\). However, as previous studies have shown [6], a periodic protocol with a non-optimal period \(\tau\neq\tau_{*}\) may be inferior to other restart strategies. Since the optimal period \(\tau_{*}\) of a regular restart does not have to be equal to the median completion time \(m\) of the original process, the periodic strategies constructed above are generally not optimal. In addition, the conditions of their applicability (see Eqs. (6) and (10)) are not necessary for the existence of an effective protocol. In other words, violation of the conditions \(m+\langle t\rangle<\langle|T-m|\rangle\) and \(m<\frac{1}{2}\langle T\rangle-\frac{1}{2}\langle t\rangle\) for a given stochastic process does not mean that an effective restart protocol cannot be constructed.
Given the above arguments, it is interesting to develop efficient policies of non-periodic restart. Particularly attractive in this respect is the Poisson strategy, which has been widely studied before, see e.g. [3; 4; 5; 6; 18; 19]. A simple sufficient condition for the efficiency of the Poisson restart [18] reads
\[\langle T^{2}\rangle>2\langle T\rangle(\langle T\rangle+\langle t\rangle). \tag{13}\]
Note, however, that the latter inequality represents an existence result: it serves as an indicator of the existence of an effective Poisson strategy without presenting it. Moreover, the logic underlying the derivation of the sufficient condition (13) suggests that knowing the first two statistical moments of the random completion time is not enough to choose an effective restart rate. Namely, the pair \(\langle T\rangle\) and \(\langle T^{2}\rangle\) determines only the slope of \(\langle T_{r}\rangle\) in its dependence on \(r\) at the point \(r=0\), saying nothing about its behavior for non-zero values of \(r\).
A constructive condition for the effectiveness of the Poisson restart can be formulated if we add information about the third-order moment. Based on the knowledge of \(\langle T\rangle\), \(\langle T^{2}\rangle\) and \(\langle T^{3}\rangle\), one can estimate the Laplace transform of \(P(T)\) as [20]
\[\tilde{P}(r)\geq 1-r\langle T\rangle+\frac{r^{2}\langle T^{2}\rangle^{2}}{r \langle T^{3}\rangle+2\langle T^{2}\rangle}. \tag{14}\]
Next, assuming that the expression on the right side of (14) is positive, from (3) and (14) we get the following estimate
\[\langle T_{r}\rangle\leq\frac{(\langle t\rangle+\langle T\rangle)(r\langle T^{3}\rangle+2\langle T^{2}\rangle)-r{\langle T^{2}\rangle}^{2}}{(1-r\langle T\rangle)(r\langle T^{3}\rangle+2\langle T^{2}\rangle)+r^{2}{\langle T^{2}\rangle}^{2}}. \tag{15}\]
Therefore, if \(r\) satisfies the set of inequalities
\[\left\{\begin{array}{l}\frac{(\langle t\rangle+\langle T\rangle)(r\langle T^{3}\rangle+2\langle T^{2}\rangle)-r\langle T^{2}\rangle^{2}}{(1-r\langle T\rangle)(r\langle T^{3}\rangle+2\langle T^{2}\rangle)+r^{2}\langle T^{2}\rangle^{2}}<\langle t\rangle+\langle T\rangle,\\ \\ 1-r\langle T\rangle+\frac{r^{2}\langle T^{2}\rangle^{2}}{r\langle T^{3}\rangle+2\langle T^{2}\rangle}>0,\\ \\ r>0.\end{array}\right. \tag{16}\]
then Poisson restart at rate \(r\) reduces the mean completion time. Solving this system under the assumption that a sufficient condition defined by (13) is met, we find an interval of effective rates
\[0<r<\frac{\langle T^{2}\rangle(\langle T^{2}\rangle-2\langle T\rangle(\langle T \rangle+\langle t\rangle))}{(\langle T\rangle+\langle t\rangle)(\langle T \rangle\langle T^{3}\rangle-\langle T^{2}\rangle^{2})}. \tag{17}\]
Further, within the interval given by (17), the expression on the right side of the inequality (15) attains its minimum value at the point \(r_{opt}^{m}\) given by Eq. (55) in Appendix. As can be found from (17), (14) and (15), the resulting efficiency is given by (56), see Appendix.
In the limit of negligible restart duration \(t=0\), we obtain fairly compact expressions for the optimal point
\[r_{0}=\frac{\sqrt{2}\langle T^{2}\rangle^{3/2}-2\langle T\rangle\langle T^{2} \rangle}{\langle T\rangle\langle T^{3}\rangle-\langle T^{2}\rangle^{2}}. \tag{18}\]
while the corresponding estimate for the dimensionless effectiveness reads
\[\eta_{3}\geq\frac{2\langle T^{2}\rangle\langle T\rangle^{2}+\langle T^{2} \rangle^{2}-2\sqrt{2}\langle T\rangle\sqrt{\langle T^{2}\rangle^{3}}}{\langle T \rangle\left(2\langle T\rangle\langle T^{2}\rangle+\langle T^{3}\rangle-2 \sqrt{2}\sqrt{\langle T^{2}\rangle^{3}}\right)}. \tag{19}\]
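A sketch of the resulting moment-based decision rule, combining the interval (17) with the zero-penalty optimal rate of Eq. (18) (sample-based moment estimates and the function name are our assumptions):

```python
import numpy as np

def poisson_moment_strategy(T_samples, mean_penalty=0.0):
    """Return the upper endpoint of the interval (17) of guaranteed-
    effective Poisson rates and, for t = 0, the rate r0 of Eq. (18);
    return None if the sufficient condition (13) fails."""
    T = np.asarray(T_samples, dtype=float)
    m1, m2, m3 = T.mean(), np.mean(T**2), np.mean(T**3)
    A = m1 + mean_penalty
    if m2 <= 2.0 * m1 * A:                                       # condition (13)
        return None
    r_max = m2 * (m2 - 2.0 * m1 * A) / (A * (m1 * m3 - m2**2))   # Eq. (17)
    r0 = (np.sqrt(2.0) * m2**1.5 - 2.0 * m1 * m2) / (m1 * m3 - m2**2)  # Eq. (18)
    return r_max, r0
```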
Note that the proposed method of constructing an effective Poisson strategy is one of many possible. Alternatively, the Laplace transform of the probability density \(P(T)\) can be estimated as [20]
\[\tilde{P}(r)\geq\sum_{k=0}^{2l-1}(-1)^{k}\frac{r^{k}\langle T^{k}\rangle}{k!}, \tag{20}\]
where \(l=1,2,...\). Let us evaluate the term \(\tilde{P}(r)\) in the numerator of right-hand side of the formula (3) using (20) with \(l=2\), i.e. \(\tilde{P}(r)\geq 1-r\langle T\rangle+\frac{r^{2}}{2}\langle T^{2}\rangle-\frac{r^{ 3}}{6}\langle T^{3}\rangle\). Next, to estimate the Laplace transform \(\tilde{P}(r)\) entering the denominator in the same expression we use Eq. (20) with \(l=1\): \(\tilde{P}(r)\geq 1-r\langle T\rangle\). Then, exploiting these estimates and assuming that \(r<1/\langle T\rangle\) we get from Eq. (3)
\[\langle T_{r}\rangle\leq\frac{6\langle t\rangle+6\langle T\rangle-3r\langle T ^{2}\rangle+r^{2}\langle T^{3}\rangle}{6-6r\langle T\rangle}, \tag{21}\]
and, thus, the interval of effective rates is determined by the following set of inequalities
\[\left\{\begin{array}{l}\frac{6\langle t\rangle+6\langle T\rangle-3r\langle T ^{2}\rangle+r^{2}\langle T^{3}\rangle}{6-6r\langle T\rangle}<\langle t \rangle+\langle T\rangle,\\ \\ r<\frac{1}{\langle T\rangle},\\ \\ r>0.\end{array}\right. \tag{22}\]
Assuming that the condition (13) is fulfilled, we readily find a solution
\[0<r<\frac{3(\langle T^{2}\rangle-2\langle T\rangle(\langle T\rangle+\langle t \rangle))}{\langle T^{3}\rangle}, \tag{23}\]
which obviously differs from that given by Eq. (17), since its derivation is based on a different estimate of \(\tilde{P}(r)\). The rate \(r_{opt}^{c}\), determined by Eq. (57) in the Appendix, minimizes the right side of the inequality (21) on the interval (23), thus providing the highest guaranteed efficiency, whose estimate from below is given by Eq. (58).
Finally, using (14) to estimate the Laplace transform in the numerator of the right side of the formula (3) and the weaker inequality \(\tilde{P}(r)\geq 1-r\langle T\rangle\) (which follows from (14)) to estimate the denominator, we get a third way to build a win-win Poisson protocol. Assuming \(r<1/\langle T\rangle\), for the average completion time with this approach, we have
\[\langle T_{r}\rangle<\frac{2(\langle t\rangle+\langle T\rangle)\langle T^{2} \rangle+r\left[(\langle t\rangle+\langle T\rangle)\langle T^{3}\rangle- \langle T^{2}\rangle^{2}\right]}{(1-r\langle T\rangle)\left(r\langle T^{3} \rangle+2\langle T^{2}\rangle\right)} \tag{24}\]
A set of guaranteed effective rates is determined by the solution of the system of inequalities
\[\left\{\begin{array}{l}\frac{2(\langle t\rangle+\langle T\rangle)\langle T ^{2}\rangle+r\left[(\langle t\rangle+\langle T\rangle)\langle T^{3}\rangle- \langle T^{2}\rangle^{2}\right]}{(1-r\langle T\rangle)(r\langle T^{3}\rangle+ 2\langle T^{2}\rangle)}<\langle t\rangle+\langle T\rangle,\\ \\ r<\frac{1}{\langle T\rangle},\\ \\ r>0,\end{array}\right. \tag{25}\]
solving which we get
\[0<r<\frac{\langle T^{2}\rangle\left[\langle T^{2}\rangle-2\langle T\rangle( \langle t\rangle+\langle T\rangle)\right]}{(\langle t\rangle+\langle T \rangle)\langle T\rangle\langle T^{3}\rangle}. \tag{26}\]
The optimal rate and the corresponding efficiency are determined by Eqs. (59) and (60) in Appendix.
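Since the windows (23) and (26) follow from different estimates of \(\tilde{P}(r)\), their upper boundaries differ; the short sketch below compares the two guaranteed windows on the same illustrative log-normal toy model (an assumption made here for concreteness, not a distribution considered in the text).

```python
# Compare the guaranteed-rate windows of Eq. (23) and Eq. (26) for the same
# hypothetical log-normal completion time (illustrative parameters, <t> = 0).
import math
mom = lambda n: math.exp(n * n / 2.0)       # <T^n> for mu = 0, sigma^2 = 1
T1, T2, T3, t = mom(1), mom(2), mom(3), 0.0
r23 = 3 * (T2 - 2 * T1 * (T1 + t)) / T3                     # Eq. (23)
r26 = T2 * (T2 - 2 * T1 * (t + T1)) / ((t + T1) * T1 * T3)  # Eq. (26)
print(f"Eq. (23): 0 < r < {r23:.4f};  Eq. (26): 0 < r < {r26:.4f}")
```

Any rate in the union of the two windows is guaranteed to be effective, so in practice the wider of the two bounds can be used.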
More generally, using the inequality (20) for \(l>2\), as well as combining it with (14), it is possible, in principle, to obtain an unlimited number of other guaranteed effective Poisson strategies. Here we have given only the simplest protocols that require knowledge of a minimum number of statistical moments for their application.
### Gamma strategy
Let us now analyze the stochastic gamma-protocol. From (4), we find that if the condition
\[\langle T^{3}\rangle\geq 3(\langle T\rangle+\langle t\rangle)\langle T^{2}\rangle, \tag{27}\]
is met, then \(\partial_{\beta}\langle T_{\beta}\rangle<0\) at \(\beta=0\). So, the inequality (27) represents a sufficient condition for the existence of an effective gamma-strategy. However, knowledge of the first three moments, \(\langle T\rangle\), \(\langle T^{2}\rangle\) and \(\langle T^{3}\rangle\), does not allow
us to choose a value of the parameter \(\beta\) that is guaranteed to reduce the mean completion time. The desired constructive criterion for the effectiveness of the gamma-strategy can be formulated by adding information about the fourth-order statistical moment \(\langle T^{4}\rangle\).
To work out an upper estimate for the average completion time \(\langle T_{\beta}\rangle\) given by (4), it is useful to note that the derivative of the Laplace transform \(\partial_{\beta}\tilde{P}(\beta)\) can be represented as
\[\partial_{\beta}\tilde{P}(\beta)=\partial_{\beta}\int_{0}^{\infty }dTP(T)e^{-\beta T}= \tag{28}\] \[=-\int_{0}^{\infty}dTP(T)Te^{-\beta T}=-\langle T\rangle\tilde{ Q}(\beta). \tag{29}\]
where \(Q(T)=\frac{T}{\langle T\rangle}P(T)\). Being non-negative and normalized to unity, \(Q(T)\) can be treated as a probability density of some random variable, and, therefore, its Laplace transform \(\tilde{Q}(\beta)\) can be evaluated from below using the inequality (20)
\[\tilde{Q}(\beta)\geq\sum_{k=0}^{2l-1}(-1)^{k}\frac{\beta^{k}\langle T^{k} \rangle_{Q}}{k!}, \tag{30}\]
where \(\langle T^{n}\rangle_{Q}\equiv\int_{0}^{\infty}T^{n}Q(T)dT=\frac{\langle T^{n +1}\rangle}{\langle T\rangle}\). From (28) and (30) we then find
\[\partial_{\beta}\tilde{P}(\beta)\leq\sum_{k=0}^{2l-1}\frac{(-1)^{k+1}}{k!} \beta^{k}\langle T^{k+1}\rangle \tag{31}\]
Now let us estimate the terms \(\tilde{P}(\beta)\) in both the numerator and the denominator of Eq. (4) via the inequality (20) at \(l=2\). Finally, the term \(\partial_{\beta}\tilde{P}(\beta)=-\langle T\rangle\tilde{Q}(\beta)\) can be evaluated using (30) with \(l=1\). Combining all these estimates we obtain
\[\langle T_{\beta}\rangle\leq\frac{6\langle t\rangle+6\langle T\rangle-\beta^{2 }\langle T^{3}\rangle+\beta^{3}\langle T^{4}\rangle}{6-3\beta^{2}\langle T^{2 }\rangle-\beta^{3}\langle T^{3}\rangle}, \tag{32}\]
which holds provided that \(1-\frac{\beta^{2}}{2}\langle T^{2}\rangle-\frac{\beta^{3}}{6}\langle T^{3}\rangle>0\). Then the effective rates \(\beta\) can be found from the requirement that the right-hand side of Eq. (32) is less than the mean completion time in the absence of restart, \(\langle T\rangle+\langle t\rangle\). This yields the following interval
\[0<\beta<\frac{\langle T^{3}\rangle-3(\langle T\rangle+\langle t\rangle) \langle T^{2}\rangle}{(\langle T\rangle+\langle t\rangle)\langle T^{3} \rangle+\langle T^{4}\rangle}. \tag{33}\]
As expected, the interval (33) is not empty if the existence condition (27) is met. The best rate providing maximum guaranteed efficiency can be found by solving a cubic algebraic equation.
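Instead of solving the cubic equation explicitly, the guaranteed-optimal rate can also be located by a simple grid search over the window (33); a minimal sketch with an illustrative log-normal toy model (and \(\langle t\rangle=0\)) follows.

```python
# Guaranteed gamma window, Eq. (33), and grid minimization of the bound (32)
# for a hypothetical log-normal completion time (illustrative parameters).
import math
mom = lambda n: math.exp(n * n / 2.0)      # <T^n> for mu = 0, sigma^2 = 1
T1, T2, T3, T4, t = mom(1), mom(2), mom(3), mom(4), 0.0
assert T3 >= 3 * (T1 + t) * T2             # existence condition, Eq. (27)
beta_max = (T3 - 3 * (T1 + t) * T2) / ((T1 + t) * T3 + T4)   # Eq. (33)
def bound(b):                              # right-hand side of Eq. (32)
    return (6*t + 6*T1 - b**2 * T3 + b**3 * T4) / (6 - 3*b**2 * T2 - b**3 * T3)
b_best = min((beta_max * k / 1000 for k in range(1, 1000)), key=bound)
print(f"beta_max = {beta_max:.4f}, best beta = {b_best:.4f}, "
      f"<T_beta> <= {bound(b_best):.4f} vs <T> + <t> = {T1 + t:.4f}")
```

For this example the guaranteed gain is tiny, which again reflects the conservative nature of moment-based sufficient conditions rather than the actual performance of the restart.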
Following the same logic, one can derive the effective gamma-strategy with an arbitrary natural shape parameter \(k\) of the probability density of inter-restart intervals \(\rho(\tau)=\frac{\beta^{k}}{\Gamma(k)}\tau^{k-1}e^{-\beta\tau}\). Note, however, that the larger \(k\) is, the more statistical moments are required to determine the range of effective rates \(\beta\).
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Protocol & Applicability condition & Range for effective period/rate & Recommended period/rate & Efficiency \\ \hline Regular1 & \(m+\langle t\rangle<\langle|T-m|\rangle\) & \(m\leq\tau<\langle|T-m|\rangle-\langle t\rangle\) & \(\tau_{0}=m\) & \(\eta_{1}=\frac{\langle|T-m|\rangle-m-\langle t\rangle}{\langle T\rangle+\langle t\rangle}\) \\ Regular2 & \(m<\frac{1}{2}\langle T\rangle-\frac{1}{2}\langle t\rangle\) & \(m\leq\tau<\frac{1}{2}\langle T\rangle-\frac{1}{2}\langle t\rangle\) & \(\tau_{0}=m\) & \(\eta_{2}>1-\frac{2(m+\langle t\rangle)}{\langle T\rangle+\langle t\rangle}\) \\ Poisson1 & \(\langle T^{2}\rangle\geq 2\langle T\rangle(\langle T\rangle+\langle t\rangle)\) & \(0<r<\frac{\langle T^{2}\rangle(\langle T^{2}\rangle-2\langle T\rangle(\langle T\rangle+\langle t\rangle))}{(\langle T\rangle+\langle t\rangle)\langle T^{3}\rangle-\langle T^{2}\rangle^{2}}\) & see Eq. (55) & see Eq. (56) \\ Poisson2 & \(\langle T^{2}\rangle\geq 2\langle T\rangle(\langle T\rangle+\langle t\rangle)\) & \(0<r<\frac{3(\langle T^{2}\rangle-2\langle T\rangle(\langle T\rangle+\langle t\rangle))}{\langle T^{3}\rangle}\) & see Eq. (57) & see Eq. (58) \\ Poisson3 & \(\langle T^{2}\rangle\geq 2\langle T\rangle(\langle T\rangle+\langle t\rangle)\) & \(0<r<\frac{\langle T^{2}\rangle[\langle T^{2}\rangle-2\langle T\rangle(\langle t\rangle+\langle T\rangle)]}{(\langle t\rangle+\langle T\rangle)\langle T\rangle\langle T^{3}\rangle}\) & see Eq. (59) & see Eq. (60) \\ Gamma & \(\langle T^{3}\rangle\geq 3(\langle T\rangle+\langle t\rangle)\langle T^{2}\rangle\) & \(0<\beta<\frac{\langle T^{3}\rangle-3(\langle T\rangle+\langle t\rangle)\langle T^{2}\rangle}{(\langle T\rangle+\langle t\rangle)\langle T^{3}\rangle+\langle T^{4}\rangle}\) & numerically available & numerically available \\ \hline \end{tabular}
\end{table}
Table 1: The table contains various restart protocols that are guaranteed to reduce the average completion time of the process. For each strategy, the table’s columns specify the following: the condition under which the corresponding strategy is applicable; the resulting range for an effective period (for periodic protocols) or rate (for Poisson or Gamma protocols); the optimal period or rate providing the maximum guaranteed effectiveness; an estimate for resulting maximum guaranteed efficiency.
## Optimization of success probability
So far, we have been talking about optimizing the average completion time of a stochastic process. Meanwhile, restart can also be used to increase the probability of observing a desired outcome of a random process with several alternative completion scenarios [16]. Examples of such processes include random search with multiple targets [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], random search with mortality [32; 33; 34; 35; 36; 37; 38; 39; 40], chemical reactions with competing paths [41; 42; 43], and folding of a biopolymer into one of several native states [44; 45; 46; 47; 48; 49]. In this section, we provide constructive criteria for when restart increases the chances that a random process ends in the desired way.
As a model, consider a random process with two completion scenarios - "success" and "failure". The process is characterized by a random completion time \(T\) having a probability density \(P(T)\). The latter can be represented as the superposition \(P(T)=P^{s}(T)+P^{f}(T)\), where \(P^{s}(T)\) and \(P^{f}(T)\) denote the contributions of successful and unsuccessful trials, respectively. Note that the normalization of the function \(P^{s}(T)\) determines the 'undisturbed' probability of success \(p\): \(p=\int_{0}^{\infty}P^{s}(T)dT\).
As before, the restart protocol \(\mathcal{R}\) is determined by a sequence of time intervals \(\tau_{1},\tau_{2},\dots\) which specifies the restart moments. We will say that the protocol is effective if its implementation increases the probability of success, i.e. \(p_{\mathcal{R}}>p\). The corresponding restart efficiency is defined as
\[\chi=\frac{p_{\mathcal{R}}-p}{1-p}. \tag{34}\]
For useful protocols we thus have \(0<\chi\leq 1\).
If the process is restarted in a strictly regular fashion with period \(\tau\), then the resulting probability of observing a successful outcome is equal to [16]
\[p_{\tau}=\frac{\int_{0}^{\tau}P^{s}(T)dT}{\int_{0}^{\tau}P(T)dT}. \tag{35}\]
The probability of success \(p_{r}\) for the process under Poisson restart at rate \(r\) has the following form [16]
\[p_{r}=\frac{\tilde{P}^{s}(r)}{\tilde{P}(r)}, \tag{36}\]
where \(\tilde{P}^{s}(r)\) and \(\tilde{P}(r)\) denote the Laplace transforms of, respectively, \(P^{s}(T)\) and \(P(T)\) evaluated at \(r\).
Finally, for the gamma-protocol with rate parameter \(\beta\) and shape parameter \(k=2\), the resulting success probability is given by the following expression
\[p_{\beta}=\frac{\beta\partial_{\beta}\tilde{P}^{s}(\beta)-\tilde{P}^{s}(\beta )}{\beta\partial_{\beta}\tilde{P}(\beta)-\tilde{P}(\beta)}. \tag{37}\]
Note that Eqs. (35)-(37) are valid in the presence of an arbitrarily distributed random penalty for restart, as long as the penalty is uncorrelated with the outcome.
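For orientation, the sketch below evaluates Eqs. (35)-(37) on a hypothetical two-channel process in which successful trials complete faster (success \(\sim\) Exp(2) with weight \(p=0.3\), failure \(\sim\) Exp(0.5)); the model and all numbers are illustrative assumptions, and the Laplace transforms are analytic here.

```python
# Toy two-channel process: success ~ Exp(ls) with weight p, failure ~ Exp(lf).
# Laplace transforms are analytic here; parameter values are illustrative.
import math
p, ls, lf = 0.3, 2.0, 0.5                       # successes complete faster
Ps = lambda r: p * ls / (ls + r)                # \tilde{P}^s(r)
Pt = lambda r: Ps(r) + (1 - p) * lf / (lf + r)  # \tilde{P}(r)
dPs = lambda b: -p * ls / (ls + b) ** 2         # d/db \tilde{P}^s
dPt = lambda b: dPs(b) - (1 - p) * lf / (lf + b) ** 2
def p_tau(tau):                                 # Eq. (35)
    num = p * (1 - math.exp(-ls * tau))
    return num / (num + (1 - p) * (1 - math.exp(-lf * tau)))
p_r = lambda r: Ps(r) / Pt(r)                   # Eq. (36)
p_beta = lambda b: (b * dPs(b) - Ps(b)) / (b * dPt(b) - Pt(b))   # Eq. (37)
print(f"p = {p}, p_tau(1) = {p_tau(1.0):.4f}, "
      f"p_r(1) = {p_r(1.0):.4f}, p_beta(1) = {p_beta(1.0):.4f}")
```

All three restart protocols raise the success probability above the undisturbed value \(p=0.3\) for this toy process.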
### Regular strategy
Suppose the condition
\[m_{s}<m, \tag{38}\]
is met, where \(m_{s}\) is the median completion time of successful attempts and \(m\) is the unconditional median completion time of the stochastic process of interest. These quantities are defined by the relations \(p^{-1}\int_{0}^{m_{s}}dTP^{s}(T)=1/2\) and \(\int_{0}^{m}dTP(T)=1/2\). Then from (35) it is easy to see that the restart with a period \(\tau\) belonging to the interval
\[m_{s}<\tau<m, \tag{39}\]
is guaranteed to be effective, since in this case the following estimate is valid
\[p_{\tau}=\frac{p+2\int_{m_{s}}^{\tau}dTP^{s}(T)}{1-2\int_{\tau}^{m}dTP(T)}>p. \tag{40}\]
Unfortunately, the optimal period providing the greatest guaranteed efficiency \(\chi\) on the interval (39) cannot be expressed in terms of \(m_{s}\) and \(m\) but depends on the fine details of the probability density \(P(T)\).
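The window (39) can nevertheless be checked numerically once \(P(T)\) is specified; a minimal sketch on the illustrative exponential-mixture toy introduced above (success \(\sim\) Exp(2), weight \(p=0.3\); failure \(\sim\) Exp(0.5)) follows.

```python
# Check of the guaranteed window (39) on the toy mixture; values illustrative.
import math
p, ls, lf = 0.3, 2.0, 0.5
cdf = lambda T: p * (1 - math.exp(-ls * T)) + (1 - p) * (1 - math.exp(-lf * T))
m_s = math.log(2) / ls                    # median of successful attempts
lo, hi = 0.0, 50.0                        # bisection for the unconditional median
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if cdf(mid) < 0.5 else (lo, mid)
m_med = (lo + hi) / 2
assert m_s < m_med                        # condition (38)
def p_tau(tau):
    num = p * (1 - math.exp(-ls * tau))
    return num / (num + (1 - p) * (1 - math.exp(-lf * tau)))
for k in range(1, 4):
    tau = m_s + k * (m_med - m_s) / 4
    assert p_tau(tau) > p                 # Eq. (40) guarantee
print(f"m_s = {m_s:.3f} < m = {m_med:.3f}; p_tau > p holds across (m_s, m)")
```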
### Poisson strategy
A simple sufficient condition for the existence of an effective Poisson restart protocol, found in [16], reads
\[\langle T_{s}\rangle<\langle T\rangle, \tag{41}\]
where \(\langle T_{s}\rangle=p^{-1}\int_{0}^{\infty}dTP^{s}(T)T\) is the average completion time of successful trials, while \(\langle T\rangle\) is the unconditional mean completion time. This criterion, however, does not say anything about how to choose an efficient restart rate, since knowledge of the first moments \(\langle T_{s}\rangle\) and \(\langle T\rangle\) alone is not enough for this purpose. Below we show that adding information about the second moment of the random completion time, \(\langle T^{2}\rangle\), allows us to formulate a constructive criterion for the effectiveness of a Poisson restart.
The probability of success given by Eq. (36) can be estimated from below as
\[p_{r}\geq\frac{(1-r\langle T_{s}\rangle)(\langle T^{2}\rangle r+\langle T \rangle)}{(\langle T^{2}\rangle-\langle T\rangle^{2})r+\langle T\rangle}p, \tag{42}\]
where we used the bound \(\tilde{P}^{s}(r)\geq p\left(1-r\langle T_{s}\rangle\right)\), which directly follows from (14), and the inequality [20]
\[\tilde{P}(r)\leq 1-\frac{r\langle T\rangle^{2}}{r\langle T^{2}\rangle+\langle T\rangle}. \tag{43}\]
We see from (42) that if the existence condition (41) is met, then a Poisson restart with a rate enclosed inside the interval
\[0<r<\frac{\langle T\rangle\left(\langle T\rangle-\langle T_{s}\rangle\right)}{ \langle T^{2}\rangle\langle T_{s}\rangle}, \tag{44}\]
increases the chances of success, i.e. \(p_{r}>p\). As can be found by examining the right-hand side of the inequality (42) for an extremum, the point \(r_{0}\) given by Eq. (61) in the Appendix provides the maximum guaranteed gain for the given values of \(\langle T_{s}\rangle\), \(\langle T\rangle\) and \(\langle T^{2}\rangle\) within the estimation scheme described above. The resulting efficiency is estimated from below as shown in Eq. (62).
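A quick check of the window (44) on the same illustrative exponential mixture (an assumption made for concreteness, not a case treated in the text) confirms that rates inside it indeed raise the success probability:

```python
# Check of interval (44) on a toy two-channel process: success ~ Exp(2) with
# weight p = 0.3, failure ~ Exp(0.5). All values illustrative.
p, ls, lf = 0.3, 2.0, 0.5
Ts = 1.0 / ls                                   # <T_s>
T1 = p / ls + (1 - p) / lf                      # <T>
T2 = 2 * p / ls**2 + 2 * (1 - p) / lf**2        # <T^2>
assert Ts < T1                                  # existence condition, Eq. (41)
r_max = T1 * (T1 - Ts) / (T2 * Ts)              # Eq. (44)
p_r = lambda r: (p * ls / (ls + r)) / (p * ls / (ls + r) + (1 - p) * lf / (lf + r))
for r in (0.25 * r_max, 0.5 * r_max, r_max):
    assert p_r(r) > p
    print(f"r = {r:.3f}: p_r = {p_r(r):.4f} > p = {p}")
```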
Let us demonstrate another route to a range of efficient rates. Alternatively, using the results of [50], we can evaluate the Laplace transform from above as
\[\tilde{P}(r)\leq 1-\frac{\langle T\rangle^{2}}{\langle T^{2}\rangle}+\frac{ \langle T\rangle^{2}}{\langle T^{2}\rangle}e^{-\frac{\langle T^{2}\rangle}{ \langle T\rangle}r}, \tag{45}\]
and then from Eq. (36) one gets
\[p_{r}\geq\frac{\langle T^{2}\rangle(1-r\langle T_{s}\rangle)}{\langle T^{2} \rangle-\langle T\rangle^{2}+\langle T\rangle^{2}e^{-\frac{\langle T^{2} \rangle}{\langle T\rangle}r}}p. \tag{46}\]
By requiring the expression on the right side of the last inequality to exceed the undisturbed probability of success \(p\), we find that if the condition \(\langle T_{s}\rangle<\langle T\rangle\) is satisfied, then all rates belonging to the interval \(0<r<r_{c}\), where \(r_{c}\) represents the solution of the transcendental equation \(1-\frac{\langle T^{2}\rangle\langle T_{s}\rangle}{\langle T\rangle^{2}}r-e^{-\frac{\langle T^{2}\rangle}{\langle T\rangle}r}=0\), increase the chances of success. Interestingly, at \(\langle T\rangle-\langle T_{s}\rangle\ll\langle T\rangle\), one obtains the boundary \(r_{c}\approx 2\frac{\langle T\rangle-\langle T_{s}\rangle}{\langle T^{2}\rangle}\), which is twice the corresponding value dictated by Eq. (44).
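The boundary \(r_{c}\) is easy to obtain numerically; the sketch below solves the transcendental equation by bisection on the illustrative exponential-mixture toy (note that \(\langle T\rangle-\langle T_{s}\rangle\) is not small there, so the doubling relation above need not apply):

```python
# Bisection solve of 1 - (<T^2><T_s>/<T>^2) r - exp(-(<T^2>/<T>) r) = 0 for the
# boundary r_c, on the toy mixture used above (illustrative parameters).
import math
p, ls, lf = 0.3, 2.0, 0.5
Ts = 1.0 / ls
T1 = p / ls + (1 - p) / lf
T2 = 2 * p / ls**2 + 2 * (1 - p) / lf**2
f = lambda r: 1 - (T2 * Ts / T1**2) * r - math.exp(-(T2 / T1) * r)
lo, hi = 0.1, 5.0                 # skip the trivial root at r = 0
while f(hi) > 0:
    hi *= 2
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
print(f"r_c = {(lo + hi) / 2:.4f} vs Eq. (44) bound "
      f"{T1 * (T1 - Ts) / (T2 * Ts):.4f}")
```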
### Gamma strategy
Finally, let us analyse the case of the gamma-protocol. From (37), we see that if the condition
\[\langle T^{2}\rangle\geq\langle T_{s}^{2}\rangle, \tag{47}\]
is met, then \(\partial_{\beta}p_{\beta}>0\) at \(\beta=0\) and, therefore, the inequality (47) represents a sufficient condition for the existence of an efficient gamma-strategy. Note, however, that the fixed pair \(\langle T^{2}\rangle\) and \(\langle T_{s}^{2}\rangle\) determines only the slope of \(p_{\beta}\) in its dependence on \(\beta\) at the point \(\beta=0\), without saying anything about its behavior for non-zero \(\beta\). Let us show that efficient rates can be specified if the third-order moment \(\langle T^{3}\rangle\) is additionally known.
The success probability dictated by (37) can be estimated from below in several different ways. Namely, first let us exploit inequality (20) at \(l=1\) and \(l=2\) for \(\beta\partial_{\beta}\tilde{P}(\beta)\) and \(\tilde{P}(\beta)\), respectively. Next, let us estimate both terms in the denominator of Eq. (37) via the inequality [20]
\[\tilde{P}(\beta)\leq\sum_{k=0}^{2l}(-1)^{k}\frac{\beta^{k}\langle T^{k}\rangle }{k!}, \tag{48}\]
at \(l=1\). This yields
\[p_{\beta}\geq\frac{6-3\langle T_{s}^{2}\rangle\beta^{2}-\langle T_{s}^{3} \rangle\beta^{3}}{3(2-\langle T^{2}\rangle\beta^{2}+\langle T^{3}\rangle \beta^{3})}p, \tag{49}\]
where we assumed \(1-\frac{\beta^{2}}{2}\langle T^{2}\rangle+\frac{\beta^{3}}{2}\langle T^{3} \rangle\geq 0\). Then, requiring that the corresponding bound is greater than the undisturbed probability of success \(p\), we obtain the interval
\[0<\beta<\frac{3(\langle T^{2}\rangle-\langle T_{s}^{2}\rangle)}{3\langle T^{3 }\rangle+\langle T_{s}^{3}\rangle}, \tag{50}\]
for rates which are guaranteed to increase the chances of success. The interval is non-empty as long as the condition (47) is met.
Alternatively, one can use (20) with \(l=1\) to bound \(\tilde{P}^{s}(\beta)\) in the numerator of (37), while leaving the other estimates unchanged. This gives
\[p_{\beta}\geq\frac{2-2\beta^{2}\langle T_{s}^{2}\rangle}{2-\beta^{2}\langle T ^{2}\rangle+\beta^{3}\langle T^{3}\rangle}p. \tag{51}\]
The same line of reasoning as described above yields the following interval of efficient rates
\[0<\beta<\frac{\langle T^{2}\rangle-2\langle T_{s}^{2}\rangle}{\langle T^{3} \rangle}. \tag{52}\]
In contrast to the previous case, here we face a stronger applicability condition \(\langle T^{2}\rangle\geq 2\langle T_{s}^{2}\rangle\).
A similar result can be obtained if we estimate both terms entering the numerator of the expression for \(p_{\beta}\) using (20) with \(l=1\), and exploit (43) to estimate the term \(-\partial_{\beta}\tilde{P}(\beta)=\langle T\rangle\tilde{Q}(\beta)\) in the denominator. For the probability of success we then find
\[p_{\beta}\geq\frac{(2-2\beta^{2}\langle T_{s}^{2}\rangle)(\langle T^{2} \rangle+\beta\langle T^{3}\rangle)}{2\langle T^{2}\rangle-\beta^{2}\langle T ^{2}\rangle^{2}+\beta^{3}\langle T^{3}\rangle\langle T^{2}\rangle+2\beta \langle T^{3}\rangle}p. \tag{53}\]
This estimate yields a narrower interval of effective rates
\[0<\beta<\frac{\langle T^{2}\rangle(\langle T^{2}\rangle-2\langle T_{s}^{2} \rangle)}{\langle T^{3}\rangle(\langle T^{2}\rangle+2\langle T_{s}^{2} \rangle)}, \tag{54}\]
with the previous condition of applicability \(\langle T^{2}\rangle>2\langle T_{s}^{2}\rangle\).
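The three windows are compared numerically below on the illustrative exponential-mixture toy (success \(\sim\) Exp(2), weight \(p=0.3\); failure \(\sim\) Exp(0.5)); as expected, the window (54) is the narrowest:

```python
# Compare the guaranteed windows (50), (52) and (54) on the toy mixture
# (success ~ Exp(2), weight p = 0.3; failure ~ Exp(0.5); values illustrative).
p, ls, lf = 0.3, 2.0, 0.5
T2 = 2 * p / ls**2 + 2 * (1 - p) / lf**2         # <T^2>
T3 = 6 * p / ls**3 + 6 * (1 - p) / lf**3         # <T^3>
Ts2, Ts3 = 2.0 / ls**2, 6.0 / ls**3              # success-conditional moments
assert T2 >= 2 * Ts2                              # stronger applicability condition
b50 = 3 * (T2 - Ts2) / (3 * T3 + Ts3)                       # Eq. (50)
b52 = (T2 - 2 * Ts2) / T3                                   # Eq. (52)
b54 = T2 * (T2 - 2 * Ts2) / (T3 * (T2 + 2 * Ts2))           # Eq. (54)
print(f"beta_max: (50) {b50:.4f}, (52) {b52:.4f}, (54) {b54:.4f}")
```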
To determine the best rates that provide the maximum guaranteed efficiency of gamma restart on the intervals described above, and to calculate the resulting efficiencies, one needs to solve high-order algebraic equations, which is more conveniently done numerically.
## Conclusion
While very useful, the previously known sufficient conditions for the existence of an efficient restart strategy (see [16; 17; 6]) do not specify this strategy. Significant progress in solving the problem of choosing an effective restart policy under partial knowledge of the process statistics was achieved in work [13], whose authors formulated relatively simple constructive criteria for the effectiveness of periodic restarts. Overcoming limitations of the previously known existence results, these criteria offer a specific restart period that is guaranteed to reduce the average completion time of the random process. Motivated by this progress, in this paper we generalized one of the criteria proposed in [13] to the case of a non-zero time penalty for restart and also constructed several new criteria, some of which concern the case of stochastic restart. In addition, we have offered the first examples of a constructive criterion of restart efficiency in the context of success probability optimization. The results of the analysis are summarized in Tables 1 and 2.
It is important to note that all the criteria formulated here represent sufficient but not necessary conditions. This means that there exist random processes for which none of the applicability conditions given by Eqs. (6), (10), (13), (27), (38), (41) and (47) is fulfilled, but an effective strategy still exists. Thus, the protocols listed in Tables 1 and 2 are complementary and, in practice, should be used together in order to at least partially compensate for each other's shortcomings.
## Acknowledgements
The work was supported by the Russian Science Foundation (RSF), project #22-72-10052. The authors are grateful to V.V. Lebedev for useful comments. I.S. Nikitin would like to thank A.M. Zubkov for referring to relevant literature on statistical inequalities.
## Appendix
### Optimization of the mean completion time
#### Poisson protocol
It is easy to show that within the interval given by (17), the expression on the right side of the inequality (15) reaches its minimum value at the point
\[r_{opt}^{m}=\frac{-2\langle T^{2}\rangle\mathcal{T}_{on}+\sqrt{2\langle T^{2} \rangle^{3}\left[\mathcal{T}_{on}\langle T^{3}\rangle-\langle T^{2}\rangle^{2 }-2\mathcal{T}_{on}\langle T^{2}\rangle\langle t\rangle\right]\left[\langle T ^{3}\rangle\langle T\rangle-\langle T^{2}\rangle^{2}\right]^{-1}}}{\mathcal{T }_{on}\langle T^{3}\rangle-\langle T^{2}\rangle^{2}}, \tag{55}\]
where \(\mathcal{T}_{on}=\langle T\rangle+\langle t\rangle\).
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Protocol & Applicability condition & Range for effective period/rate & Recommended period/rate & Efficiency \\ \hline Regular & \(m_{s}<m\) & \(m_{s}<\tau<m\) & - & - \\ Poisson 1 & \(\langle T_{s}\rangle<\langle T\rangle\) & \(0<r<\frac{\langle T\rangle\left(\langle T\rangle-\langle T_{s}\rangle\right)}{\langle T^{2}\rangle\langle T_{s}\rangle}\) & see Eq. (61) & see Eq. (62) \\ Poisson 2 & \(\langle T_{s}\rangle<\langle T\rangle\) & numerically available & numerically available & numerically available \\ Gamma 1 & \(\langle T_{s}^{2}\rangle<\langle T^{2}\rangle\) & \(0<\beta<\frac{3(\langle T^{2}\rangle-\langle T_{s}^{2}\rangle)}{3\langle T^{3}\rangle+\langle T_{s}^{3}\rangle}\) & numerically available & numerically available \\ Gamma 2 & \(2\langle T_{s}^{2}\rangle<\langle T^{2}\rangle\) & \(0<\beta<\frac{\langle T^{2}\rangle-2\langle T_{s}^{2}\rangle}{\langle T^{3}\rangle}\) & numerically available & numerically available \\ Gamma 3 & \(2\langle T_{s}^{2}\rangle<\langle T^{2}\rangle\) & \(0<\beta<\frac{\langle T^{2}\rangle(\langle T^{2}\rangle-2\langle T_{s}^{2}\rangle)}{\langle T^{3}\rangle(\langle T^{2}\rangle+2\langle T_{s}^{2}\rangle)}\) & numerically available & numerically available \\ \hline \end{tabular}
\end{table}
Table 2: A table summarizing various restart protocols guaranteed to increase the probability of success. For each strategy, the table’s columns specify the following: the condition when the corresponding strategy works; the resulting range for an effective period (for periodic protocols) or rate (for Poisson or Gamma protocols); the optimal period or rate providing the maximum guaranteed effectiveness; an estimate for resulting maximum guaranteed efficiency.
As can be easily shown from (17), (14) and (15), the resulting efficiency is estimated from below as
\[\eta_{3}\geq 1-\frac{\mathcal{T}_{on}\langle T^{3}\rangle-\langle T^{2}\rangle^{2 }}{\mathcal{T}_{on}\left[\langle T^{3}\rangle-2\langle T\rangle\langle T^{2} \rangle\right]-2\mathcal{T}_{on}r_{opt}^{m}\left[\langle T\rangle\langle T^{3 }\rangle-\langle T^{2}\rangle^{2}\right]} \tag{56}\]
Rate \(r_{opt}^{c}\) minimizes the right-hand side of the inequality (21) on the interval (23), thus providing the greatest guaranteed efficiency for the given values of the first three moments of the original completion time
\[r_{opt}^{c}=\frac{1}{\langle T\rangle}-\frac{\sqrt{\langle T^{3}\rangle( \langle T^{3}\rangle-3\langle T^{2}\rangle\langle T\rangle+6\langle T \rangle^{2}\mathcal{T}_{on})}}{\langle T\rangle\langle T^{3}\rangle} \tag{57}\]
Efficiency can be estimated from below as
\[\eta_{4}\geq 1-\frac{3\langle T^{2}\rangle-2\langle T^{3}\rangle r_{opt}^{c}}{ 6\langle T\rangle\left(\langle t\rangle+\langle T\rangle\right)} \tag{58}\]
The optimal rate is given by
\[r_{opt}^{ms}=\frac{-2\mathcal{T}_{on}\langle T^{2}\rangle+\sqrt{2}\sqrt{ \left(2\mathcal{T}_{on}\langle T\rangle\langle T^{2}\rangle+\mathcal{T}_{on }\langle T^{3}\rangle-\langle T^{2}\rangle^{2}\right)\langle T^{2}\rangle^{3 }\left[\langle T\rangle\langle T^{3}\rangle\right]^{-1}}}{\mathcal{T}_{on} \langle T^{3}\rangle-\langle T^{2}\rangle^{2}}. \tag{59}\]
Efficiency can be estimated from below as
\[\eta_{5}\geq 1-\frac{\langle T^{3}\rangle\left(\langle t\rangle+\langle T\rangle\right)-\langle T^{2}\rangle^{2}}{\left(\langle t\rangle+\langle T\rangle\right)\left(\langle T^{3}\rangle-2\langle T\rangle\left(r_{opt}^{ms}\langle T^{3}\rangle+\langle T^{2}\rangle\right)\right)} \tag{60}\]
### Optimization of splitting probabilities
#### Poisson strategy
By examining the right-hand side of the inequality (42) for an extremum, it is easy to show that the point belonging to the specified interval
\[r_{0}=\frac{-\langle T_{s}\rangle\langle T\rangle\langle T^{2}\rangle+\sqrt{ \langle T\rangle^{3}\langle T^{2}\rangle\langle T_{s}\rangle(\langle T \rangle\langle T_{s}\rangle+\sigma^{2})}}{\sigma^{2}\langle T^{2}\rangle \langle T_{s}\rangle}, \tag{61}\]
where \(\sigma^{2}=\langle T^{2}\rangle-\langle T\rangle^{2}\), provides the maximum guaranteed gain for the given values of \(\langle T_{s}\rangle\), \(\langle T\rangle\) and \(\langle T^{2}\rangle\). The resulting efficiency is estimated from below as
\[\chi_{2}> \frac{p}{1-p}\frac{\langle T\rangle}{\sigma^{4}}\cdot\bigg{[} \left(\left(\langle T\rangle^{2}+\langle T^{2}\rangle\right)\langle T_{s} \rangle-\langle T\rangle\sigma^{2}\right)-2\sqrt{\langle T\rangle\langle T^{2} \rangle\langle T_{s}\rangle\left(\langle T\rangle\langle T_{s}\rangle+\sigma^ {2}\right)}\bigg{]}. \tag{62}\]
|
2309.11802 | On the role of soft gluons in collinear parton densities and parton
shower event generators | The role of soft (non-perturbative) gluons in collinear parton densities and
parton shower event generators is investigated with the Parton Branching method
as a solution of the DGLAP evolution equations. It is found that soft gluons
play a significant role.
Within the Parton Branching frame, the Sudakov form factor can be split into
a perturbative and non-perturbative part. The non-perturbative part can be
calculated analytically under certain conditions. It is shown that the
inclusion of soft (non-perturbative) gluons in the parton density evolution is
essential for the proper cancellation of divergent terms. It is argued that the
non-perturbative part of the Sudakov form factor has its correspondence in
Transverse Momentum Dependent parton distributions. Within the Parton Branching
approach, this non-perturbative Sudakov form factor is constrained by fits of
inclusive, collinear parton densities.
We show that the non-perturbative Sudakov form factor and soft gluon
emissions are essential for inclusive distributions (collinear parton densities
and Drell-Yan transverse momentum spectra). We also show by using Parton
Branching TMD parton shower, that the effect of soft gluons plays essentially
no role in final state hadron spectra and jets. | M. Mendizabal, F. Guzman, H. Jung, S. Taheri Monfared | 2023-09-21T06:13:51Z | http://arxiv.org/abs/2309.11802v3 | # On the role of soft gluons in collinear parton densities
###### Abstract
The role of soft (non-perturbative) gluons in collinear parton densities is investigated with the Parton Branching method as a solution of the DGLAP evolution equations. It is found that soft gluons contribute significantly to collinear parton densities.
Within the Parton Branching frame, the Sudakov form factor can be split into a perturbative and a non-perturbative part. The non-perturbative part can be calculated analytically under certain conditions. It is shown that the inclusion of soft (non-perturbative) gluons in the parton density evolution is essential for the proper cancellation of divergent terms.
It is argued that the non-perturbative part of the Sudakov form factor has its correspondence in Transverse Momentum Dependent parton distributions. Within the Parton Branching approach, this non-perturbative Sudakov form factor is constrained by fits of inclusive, collinear parton densities.
We show that the non-perturbative Sudakov form factor and soft gluon emissions are essential for inclusive distributions (collinear parton densities and Drell-Yan transverse momentum spectra), while those soft gluons play essentially no role in final state hadron spectra.
## 1 Introduction
Calculations based on the DGLAP [1, 2, 3, 4] evolution of parton densities together with hard scattering coefficient functions (or matrix elements) at next-to-leading (NLO) and next-to-next-to-leading order (NNLO) in the strong coupling provide a very successful description of experimental measurements from small to rather large scales \(\mu\).
The Parton-Branching (PB) approach [5, 6] gives a solution to the DGLAP equations, based on an iterative solution of the integral evolution equation. The PB-approach also allows one to study in detail each of the branching vertices, and in particular the contribution of perturbative and non-perturbative emissions. In the framework of Transverse Momentum Dependent (TMD) parton densities [7, 8], and especially in the CSS formalism [9], an additional non-perturbative Sudakov form factor is introduced. In the PB-approach this non-perturbative Sudakov form factor already appears in inclusive (collinear) parton densities, and with the determination of collinear parton densities by fits to inclusive experimental data, the non-perturbative Sudakov form factor is fixed also when applied to TMD parton densities.
In this paper we show explicitly how the Sudakov form factor is obtained from the DGLAP evolution equation and how this form factor can be split into a perturbative and a non-perturbative part. We argue that both parts are essential for collinear parton distributions, as neglecting soft gluons would lead to the non-cancellation of singular contributions in cross section calculations at NLO and beyond.
## 2 PB approach as a solution of DGLAP
The PB method [5, 6] provides a solution of the DGLAP [1, 2, 3, 4] evolution equations. The DGLAP evolution equation for the parton density of parton \(a\) with momentum fraction \(x\) at the scale \(\mu\) reads:
\[\mu^{2}\frac{\partial f_{a}(x,\mu^{2})}{\partial\mu^{2}}=\sum_{b}\int_{x}^{1} \frac{dz}{z}\ P_{ab}\left(\alpha_{s}(\mu^{2}),z\right)\ f_{b}\left(\frac{x}{z},\mu^{2}\right)\, \tag{1}\]
with the regularized DGLAP splitting functions \(P_{ab}\) describing the splitting of parton \(b\) into a parton \(a\). The splitting functions \(P_{ab}\) can be decomposed as (in the notation of Ref. [5]):
\[P_{ab}(z,\alpha_{s})=D_{ab}(\alpha_{s})\delta(1-z)+K_{ab}(\alpha_{s})\frac{1}{ (1-z)_{+}}+R_{ab}(z,\alpha_{s}). \tag{2}\]
The coefficients \(D\) and \(K\) can be written as \(D_{ab}(\alpha_{s})=\delta_{ab}d_{a}(\alpha_{s})\), \(K_{ab}(\alpha_{s})=\delta_{ab}k_{a}(\alpha_{s})\) and the coefficients \(R_{ab}\) contain only terms which are not singular for \(z\to 1\). Each of those three coefficients can be expanded in powers of \(\alpha_{s}\):
\[d_{a}(\alpha_{s})=\sum_{n=1}^{\infty}\left(\frac{\alpha_{s}}{2\pi}\right)^{n} d_{a}^{(n-1)},\ k_{a}(\alpha_{s})=\sum_{n=1}^{\infty}\left(\frac{\alpha_{s}}{2 \pi}\right)^{n}k_{a}^{(n-1)},\ R_{ab}(z,\alpha_{s})=\sum_{n=1}^{\infty}\left( \frac{\alpha_{s}}{2\pi}\right)^{n}R_{ab}^{(n-1)}(z) \tag{3}\]
The plus-prescription and the \(D\) part in eq.(2) can be expanded and eq.(1) can be reformulated introducing a Sudakov form factor \(\Delta_{a}^{S}(\mu^{2})\) (for details on the calculation see the appendix Sec. 6) which is defined as:
\[\Delta_{a}^{S}(\mu^{2},\mu_{0}^{2})=\exp\left(-\int_{\mu_{0}^{2}}^{\mu^{2}}\frac {\mathrm{d}{\mu^{\prime}}^{2}}{\mu^{\prime 2}}\left[\int_{0}^{z_{M}}k_{a}(\alpha_{s}) \frac{1}{1-z}\mathrm{d}z-d_{a}(\alpha_{s})\right]\right)\, \tag{4}\]
where an upper limit \(z_{M}=1-\epsilon\) is introduced to allow numerical integration over \(z\). In order to reproduce DGLAP, \(\epsilon\to 0\) is required. Note that the expression for \(\Delta_{a}^{S}\) is different from the one used in Ref. [5]; for the relation between the two, see the appendix, Sec. 6. The evolution equation for the parton density \(f_{a}(x,\mu^{2})\) at scale \(\mu\) is then given by (as a solution of eq.(1), see also [10]):
\[f_{a}(x,\mu^{2})=\Delta_{a}^{S}(\mu^{2})\ f_{a}(x,\mu_{0}^{2})+\sum_{b}\int_{ \mu_{0}^{2}}^{\mu^{2}}\frac{d\mu^{\prime 2}}{\mu^{\prime 2}}\frac{\Delta_{a}^{S}( \mu^{2})}{\Delta_{a}^{S}(\mu^{\prime 2})}\int_{x}^{z_{M}}\frac{dz}{z}\ \hat{P}_{ab}( \alpha_{s}(\mu^{\prime 2}),z)\ f_{b}\left(\frac{x}{z},\mu^{\prime 2}\right) \tag{5}\]
with the unregularized splitting functions \(\hat{P}_{ab}\) (without the \(D_{ab}\) piece, replacing \(1/(1-z)_{+}\) by \(1/(1-z)\)) and \(\mu_{0}\) being the starting scale. Since the evolution equation is solved iteratively, one has access to every individual branching vertex, and thus one can calculate also the transverse momenta (\(q_{\mathrm{t}}\)) of the emitted partons. Details on the formulation for TMD parton distributions are given in Ref. [5]).
On a collinear level, it was shown in Ref. [5] that the PB approach reproduces exactly the DGLAP evolution of parton densities [11], if the renormalization scale (the argument in \(\alpha_{s}\)) is set to the evolution scale \(\mu\) and if \(z_{M}\to 1\). In Ref. [12] the PB parton distributions are obtained from a fit [13, 14] of the parameters of the \(x\)-dependent starting distributions to describe high-precision deep-inelastic scattering data [15]. Two different parton distribution sets (we use PB-NLO-2018 as a shorthand notation for PB-NLO-HERAI+II-2018) were obtained, PB-NLO-2018 Set1, which for collinear distributions agrees exactly with HERAPDF2.0NLO [15], and another set, PB-NLO-2018 Set2, which uses \(q_{\mathrm{t}}\) as the argument in \(\alpha_{s}\), inspired by angular ordering conditions. All PB parton distributions (and many others) are accessible in TMDlib and via the graphical interface TMDplotter [16, 17].
In the following, we concentrate on the PB-NLO-2018 Set1 scenario because of its direct correspondence to standard DGLAP solutions, a discussion on PB-NLO-2018 Set2 is given in Ref. [18, 19].
### The PB Sudakov form factor
The concept of resolvable and non-resolvable branchings with Sudakov form factors allows for an intuitive interpretation of the parton evolution pattern. The Sudakov form factors give the probability to evolve from one scale to another scale without resolvable branching. While the concept of the PB method is similar to a parton shower approach, the method is used here to solve the DGLAP evolution equation.
In order to illustrate the importance of resolvable and non-resolvable branchings we separate the Sudakov form factor \(\Delta_{a}^{S}\) into a perturbative (\(q_{\rm t}>q_{0}\)) and non-perturbative (\(q_{\rm t}<q_{0}\)) part by introducing a resolution scale \(z_{\rm dyn}=1-q_{0}/q\) (see Ref. [20]). This scale is motivated by angular ordering and the requirement to resolve an emitted parton with \(q_{\rm t}=q(1-z)>q_{0}\).
The Sudakov form factor is then given by* :
Footnote *: It can be shown that \(\Delta_{a}^{\rm(P)}\) coincides with the Sudakov form factor used in CSS [9] up to next-to-leading and even partially next-to-next-to-leading logarithms (see [21, 22]). The non-perturbative Sudakov form factor \(\Delta_{a}^{\rm(NP)}\) has a structure similar to the non-perturbative Sudakov form factor in CSS, with the typical \(\log(\mu^{2}/\mu_{0}^{2})\) dependence.
\[\Delta_{a}^{S}(\mu^{2},\mu_{0}^{2})= \exp\left(-\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{dq^{2}}{q^{2}} \left[\int_{0}^{z_{\rm dyn}(q)}dz\frac{k_{a}(\alpha_{s})}{1-z}-d_{a}(\alpha_{ s})\right]\right) \tag{6}\] \[\times\exp\left(-\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{dq^{2}}{q^{2} }\int_{z_{\rm dyn}(q)}^{z_{M}}dz\frac{k_{a}(\alpha_{s})}{1-z}\right)\] \[= \Delta_{a}^{\rm(P)}\left(\mu^{2},\mu_{0}^{2},q_{0}^{2}\right) \cdot\Delta_{a}^{\rm(NP)}\left(\mu^{2},\mu_{0}^{2},\epsilon,q_{0}^{2}\right)\;.\]
It is interesting to note that \(\Delta_{a}^{\rm(NP)}\) develops a \(\log\mu\) dependence, and under certain conditions \(\Delta_{a}^{\rm(NP)}\) can even be calculated analytically. It is this \(\log\mu\) dependence which makes the non-perturbative contribution of soft gluon emissions and the resulting net transverse momentum so different from any intrinsic \(k_{\rm T}\)-dependence, which is \(\mu\)-scale independent.
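Both factors in Eq. (6) are easy to evaluate numerically once the kernel is fixed; the sketch below does so at leading order with a fixed coupling, using the standard LO quark coefficients \(k_{q}=\frac{\alpha_{s}}{2\pi}2C_{F}\) and \(d_{q}=\frac{\alpha_{s}}{2\pi}\frac{3}{2}C_{F}\) (all scale choices are illustrative assumptions).

```python
# Numeric split of the LO quark Sudakov form factor of Eq. (6), with fixed
# coupling. k_q = (alpha_s/2pi)*2CF and d_q = (alpha_s/2pi)*(3/2)CF are the
# standard LO coefficients; the scale choices below are illustrative.
import math
alpha_s, CF = 0.118, 4.0 / 3.0
kq = alpha_s / (2 * math.pi) * 2 * CF
dq = alpha_s / (2 * math.pi) * 1.5 * CF
mu0, mu, q0, eps = 1.0, 100.0, 1.0, 1e-5
def log_integral(f, a2, b2, n=4000):       # int_{a2}^{b2} dq2/q2 f(q2), midpoint
    h = (math.log(b2) - math.log(a2)) / n
    return h * sum(f(math.exp(math.log(a2) + (i + 0.5) * h)) for i in range(n))
# perturbative part: z < z_dyn = 1 - q0/q, i.e. emitted q_t > q0
lnDP = -log_integral(lambda q2: kq * math.log(math.sqrt(q2) / q0) - dq,
                     mu0**2, mu**2)
# non-perturbative part: z_dyn < z < zM = 1 - eps, i.e. emitted q_t < q0
lnDNP = -log_integral(lambda q2: kq * math.log(q0 / (math.sqrt(q2) * eps)),
                      mu0**2, mu**2)
print(f"Delta_P = {math.exp(lnDP):.4f}, Delta_NP = {math.exp(lnDNP):.4g}, "
      f"Delta_S = {math.exp(lnDP + lnDNP):.4g}")
```

The exponent of \(\Delta_{a}^{\rm(NP)}\) contains a term \(\propto\ln^{2}(\mu/\mu_{0})\), which is the \(\log\mu\) dependence referred to above.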
In Fig. 1 we show parton distributions obtained with the PB approach using the starting distributions from PB-NLO-2018 Set1 from Ref. [12] for different scales \(\mu\). We show distributions for down-quark parton densities for different values of \(z_{M}\): \(z_{M}\to 1\) (default) and \(z_{M}=z_{\rm dyn}=1-q_{0}/q\), with \(q_{0}=1\) GeV† and without any intrinsic \(k_{\rm T}\)-distribution (\(q_{s}=0\)), which by definition does not matter for the integrated parton densities. The distributions obtained from PB-NLO-2018 Set1 with \(z_{M}\to 1\) are significantly different from those applying \(z_{M}=z_{\rm dyn}\), illustrating the importance of soft contributions even for collinear distributions. Comparing the collinear distributions at low and high scales \(\mu\), a clear scale dependence of the contribution of soft gluons (and \(\Delta_{a}^{\rm(NP)}\)) is observed.

Figure 1: Integrated down-quark distributions at \(\mu=10,100\) GeV and \(\mu=500\) GeV obtained from the PB approach for different values of \(z_{M}\): PB-NLO-HERAI+II-set1 applies \(z_{M}\to 1\) and PB-NLO-set1 applies \(z_{M}=z_{\rm dyn}\) with \(q_{0}=1\) GeV and without intrinsic-\(k_{\rm T}\) distribution (\(q_{s}=0\)). The ratio plots show the ratios to the one for \(z_{M}\to 1\).
Footnote †: The value of \(q_{0}\) is arbitrary, and chosen as \(q_{0}=1\) GeV for illustration only. The starting parameters are the same as for PB-NLO-2018 Set1.
It is obvious that limiting the \(z\)-integration by \(z_{\rm dyn}\) (and neglecting \(\Delta_{a}^{\rm(NP)}\)) leads to distributions which are no longer consistent with the collinear \(\overline{\rm MS}\) factorization scheme. A similar conclusion was found in Ref. [23].
Since the PB-approach can also be used to determine TMD parton distributions (see Ref. [5]) we will illustrate in Fig. 2 the effect of the \(z_{M}\) cut-off in the transverse momentum distribution. We show results obtained with the PB-approach for down quarks for PB-NLO-2018 Set1 with a default Gaussian width \(q_{s}=0.5\) GeV for the intrinsic \(k_{\rm T}\)-distribution. Since we want to focus only on the evolution (as given in eq.(6)), we show also the results when no intrinsic \(k_{\rm T}\)-distribution is applied (\(q_{s}=0\) GeV, practically we use a Gauss distribution with \(q_{s}=0.0001\) GeV) and we illustrate the effect of neglecting \(\Delta_{a}^{\rm(NP)}\) (by applying \(z_{\rm dyn}=1-q_{0}/q\), as in Fig. 1).
The transverse momentum distributions show very clearly the effect of \(\Delta_{a}^{\rm(NP)}\). Applying the cutoff-scale, \(z_{M}=z_{\rm dyn}=1-q_{0}/q\), removes emissions with \(q_{\rm t}<q_{0}\) (there are still low-\(k_{\rm T}\) contributions, which come from adding vectorially all intermediate emissions). However, very soft emissions are automatically included with \(z_{M}\to 1\). Note that if \(\alpha_{s}(q_{\rm t})\) is chosen, a special treatment is needed for \(q_{\rm t}<q_{0}\).
Figure 2: Transverse momentum distributions of down quarks at \(\mu=10,100\) GeV (left, middle column) and \(\mu=500\) GeV (right column) obtained from the PB approach for \(z_{M}\to 1\) as well as \(z_{M}=z_{\rm dyn}=1-q_{0}/q\). The red curve shows PB-NLO-2018 Set1 (including intrinsic-\(k_{\rm T}\)), the blue curve shows a prediction without including any intrinsic-\(k_{\rm T}\) distribution (\(q_{s}=0\)), and the magenta curve shows a prediction applying \(z_{M}=z_{\rm dyn}\) with \(q_{0}=1.0\) GeV without including intrinsic-\(k_{\rm T}\) distributions.
As seen in Figs. 1 and 2, the contribution of soft gluons is \(\mu\)-scale dependent. Assuming that any intrinsic transverse motion of partons inside hadrons is universal (\(\mu\)-scale independent), soft gluon emission cannot be mimicked by any intrinsic-\(k_{\rm T}\) distribution; rather, the soft gluon contribution needs to be properly resummed.
## 3 Cross Section calculation
Physical cross sections are calculated as a convolution of the scale dependent parton densities with the hard matrix elements (coefficient functions). For simplicity, we show only the calculation of deep inelastic scattering with NLO parton densities and coefficient functions; the expressions are given in textbooks (e.g. Ref. [10], eq. 4.80):
\[F_{2}(x,Q^{2})=x\sum_{q,\bar{q}}e_{q}^{2}\int_{x}^{1}\frac{d\xi}{\xi}q(\xi,Q^{ 2})\left[\delta\left(1-\frac{x}{\xi}\right)+\frac{\alpha_{s}}{2\pi}C_{ \mbox{\tiny\overline{MS}}}\left(\frac{x}{\xi}\right)+\cdots\right] \tag{7}\]
with the coefficient function \(C_{\mbox{\tiny\overline{MS}}}\) in the \(\overline{\mbox{MS}}\)-scheme. The \(\overline{\mbox{MS}}\) coefficient function for massless quarks at \({\cal O}(\alpha_{s})\) reads:
\[C_{q}^{\mbox{\tiny\overline{MS}}}(z) = C_{F}\left[2\left(\frac{\log(1-z)}{1-z}\right)_{+}-\frac{3}{2} \left(\frac{1}{1-z}\right)_{+}-(1+z)\log(1-z)\right. \tag{8}\] \[\left.-\frac{1+z^{2}}{1-z}\log z+3+2z-\left(\frac{\pi^{2}}{3}+ \frac{9}{2}\right)\delta(1-z)\right]\]
For a consistent formulation, the integral over \(\xi\) in eq.(7) has to extend up to one, both in the expression for the cross section, as well as in the expression for the parton density, otherwise singular pieces remain un-cancelled.
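As a sketch of how such a convolution is carried out in practice, the plus-distributions in Eq. (8) can be handled with the standard subtraction \(\int_{x}^{1}dz\,[h(z)]_{+}\psi(z)=\int_{x}^{1}dz\,h(z)[\psi(z)-\psi(1)]-\psi(1)\int_{0}^{x}dz\,h(z)\). The code below applies this to a toy quark density; the density, quark charge and coupling value are illustrative assumptions.

```python
# Sketch of the O(alpha_s) convolution in Eqs. (7)-(8) for a single quark
# flavor with a toy density q(xi) = xi^{-0.5}(1-xi)^3 (charge and normalization
# illustrative). Plus-distributions are handled by the standard subtraction.
import math
from scipy.integrate import quad
alpha_s, CF, eq2 = 0.118, 4.0 / 3.0, (2.0 / 3.0) ** 2
q = lambda xi: xi ** -0.5 * (1.0 - xi) ** 3
def F2(x):
    psi = lambda z: q(x / z) / z           # integrand after z = x/xi; psi(1) = q(x)
    psi1 = q(x)
    reg = lambda z: (-(1 + z) * math.log(1 - z)
                     - (1 + z * z) / (1 - z) * math.log(z) + 3 + 2 * z)
    I_reg = quad(lambda z: reg(z) * psi(z), x, 1)[0]
    # [2 ln(1-z)/(1-z)]_+ piece; int_0^x 2 ln(1-z)/(1-z) dz = -ln^2(1-x)
    I1 = (quad(lambda z: 2 * math.log(1 - z) / (1 - z) * (psi(z) - psi1), x, 1)[0]
          + psi1 * math.log(1 - x) ** 2)
    # [1/(1-z)]_+ piece; int_0^x dz/(1-z) = -ln(1-x)
    I2 = (quad(lambda z: (psi(z) - psi1) / (1 - z), x, 1)[0]
          + psi1 * math.log(1 - x))
    nlo = CF * (I1 - 1.5 * I2 + I_reg - (math.pi ** 2 / 3 + 4.5) * psi1)
    return x * eq2 * (psi1 + alpha_s / (2 * math.pi) * nlo)
print(f"F2(0.1) = {F2(0.1):.4f} (LO alone: {0.1 * eq2 * q(0.1):.4f})")
```

The subtraction makes every integrand regular at \(z\to 1\), which is exactly the cancellation between resolvable and virtual pieces discussed above; truncating the integration region would spoil it.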
It becomes clear that the contribution of soft gluon emissions is important both in the parton densities as well as in the cross section calculations. Approaches where the integral is limited by \(z_{\rm dyn}\) lead to a different factorization scheme, and the coefficient functions obtained in the collinear, massless \(\overline{\mbox{MS}}\)-scheme are no longer appropriate. In Ref. [24] the same issue is discussed from the perspective of Monte Carlo event generators and the use of collinear parton densities in a backward evolution approach for the parton shower.
From the above considerations, we can summarize, that the region of soft gluon emissions is very important in the evolution of the parton densities, as well as in the calculation of the hard cross section. Within the PB-approach, the non-perturbative Sudakov form factor is constrained by the fit to inclusive measurements to determine inclusive parton distributions: once the evolution frame is specified (depending on the scale choice for \(\alpha_{s}\)), the non-perturbative Sudakov form factor is fixed by the fit to inclusive measurements. The PB-TMD distributions are then calculated without any further assumptions.
## 4 Implications from soft gluon emissions
The treatment of soft gluon emissions also leads to effects in different physics processes: soft gluons are very important for the description of the low \(p_{\rm T}\)-region in Drell-Yan production, and different effects can be expected in particle (jet) spectra coming from initial state parton showers.
### Drell-Yan \(p_{\rm T}\)-spectrum
The Drell-Yan \(p_{\rm T}\)-spectrum at large transverse momentum is described by hard single parton emissions, while at lower \(p_{\rm T}\) soft gluons have to be resummed. The intrinsic motion of partons inside the hadrons also plays a role. In Ref. [25] the Drell-Yan \(p_{\rm T}\)-spectrum at low and high Drell-Yan masses and at different center-of-mass energies is discussed, and it is found that PB-NLO-2018 Set2 leads to a rather reasonable description. In Ref. [18, 19] a very detailed analysis of the description of the Drell-Yan transverse momentum spectrum is given, with the result that after a determination of the parameter \(q_{s}\) of the intrinsic \(k_{\rm T}\)-distribution, PB-NLO-2018 Set2 leads to a very good description of Drell-Yan measurements at different Drell-Yan masses \(m_{\rm DY}\) as well as at different center-of-mass energies \(\sqrt{s}\). The success of describing the DY \(p_{\rm T}\)-spectrum with PB-NLO-2018 Set2 is ascribed to the inclusion of \(\Delta_{a}^{(\rm NP)}\) and the treatment of \(\alpha_{s}\) at small scales.
This can be contrasted with the behaviour of standard parton shower Monte Carlo event generators based on collinear parton densities, which need an intrinsic-\(k_{\rm T}\) spectrum dependent on \(\sqrt{s}\). In Ref. [26] a study is reported on tuning the parameter of the intrinsic \(k_{\rm T}\)-distribution for the Monte Carlo event generators Pythia and Herwig (with the most recent tunes). It is found that the Gaussian width \(q_{s}\) of the intrinsic \(k_{\rm T}\)-distribution depends on \(\sqrt{s}\) (for both generators and different tunes).
### Soft gluon emissions in parton shower Monte Carlo event generators
In Monte Carlo event generators, the initial state parton shower is generated in a backward evolution approach for efficiency reasons (see e.g. Refs. [27; 28; 29; 30; 31; 32]). In event generators based on collinear parton densities, the accumulated transverse momentum of the initial state cascades determines the total transverse momentum of the hard process; e.g. in Drell-Yan production the initial state radiation determines the Drell-Yan \(p_{\rm T}\). In Monte Carlo generators based on TMD distributions (e.g. Cascade3 [32]), the transverse momentum of the hard process is already determined by the TMD distribution, and the initial state parton shower is not allowed to change this, rather only adding radiated partons. Such a treatment allows one to study the effect of different \(z_{\rm dyn}\)-values in the initial state shower without changing the overall kinematics. We use Drell-Yan production at NLO (simulated by MadGraph5_aMC@NLO) supplemented with TMD distributions and initial state parton shower from Cascade3, as described in Refs. [25; 33; 34; 35].
We first investigate (Fig. 3) the spectrum of the splitting variable \(z\) and the rapidity \(y\) of emitted partons in the initial state shower for different values of \(q_{0}\) leading to different
values of \(z_{\rm dyn}=1-q_{0}/q\). Since \(z_{\rm dyn}\) depends on \(q\) and very different values of \(q=q_{\rm t}/(1-z)\) are accessible in the evolution, no clear cut in \(z_{\rm dyn}\) is observed; however, the spectrum itself depends significantly on \(z_{\rm dyn}\) and \(q_{0}\).
Next, we investigate the transverse momentum spectrum of emitted partons in the initial state shower for different values of \(q_{0}\). In Fig. 4(left) we show the transverse momentum (\(q_{\rm t}\)) spectrum of all partons emitted in the initial state shower for different values of \(q_{0}\).
It is evident that extremely low values of \(q_{0}\) result in a significant number of very soft (non-perturbative) emissions. In Fig. 4(right) we show the same distributions but only for emitted quarks. As expected, for the processes \(g\to q\bar{q}\) and \(q\to gq\) there is no singular behavior of the splitting function for \(z\to 1\), and the spectrum at low transverse momenta is rather flat compared to the one where gluon emission is included.

Figure 3: Distributions of the splitting variable \(z\) (left) and the rapidity of emitted partons \(y\) (right) during the initial state shower for different values of \(q_{0}\) in \(Z\)-boson events.

Figure 4: Transverse momentum distributions of emitted partons in the initial state cascade for different values of \(q_{0}\) in \(Z\)-boson events. The left panel shows the spectrum for all partons, the right panel the spectrum for quarks only.
It is interesting to investigate whether these soft gluons change any of the observable hadron distributions. In the Lund string hadronization [36; 37; 38; 39] gluons are treated as kinks in the color strings; therefore, very soft gluons are expected to have a negligible effect. In Fig. 5 we show the transverse momentum and rapidity spectra of particles in \(Z\)-boson events. The small dependence on \(q_{0}\) seen in the rapidity spectrum comes from very low \(p_{\rm T}\)-hadrons. While the spectrum of partons is very different for different values of \(q_{0}\), there is essentially no dependence on \(q_{0}\) visible in the particle spectra, leading to stable final results.
In summary, we observe that the effect of different \(z_{\rm dyn}\)-values in the corresponding non-perturbative Sudakov form factor \(\Delta_{a}^{(\rm NP)}\) is rather significant for the low \(p_{\rm T}\)-spectrum of partons in the initial state cascade (and therefore important for the Drell-Yan \(p_{\rm T}\)), while the effect is negligible for final state hadron spectra, and even more so for jets coming from the initial state shower.
## 5 Summary and Conclusion
We have investigated the perturbative and non-perturbative regions of collinear parton densities, by making use of the formulation of the DGLAP evolution equation in terms of Sudakov form factors applied in the PB-method. The separation of the Sudakov form factor into a perturbative and non-perturbative part is motivated by investigations of the Drell-Yan transverse momentum spectrum within the CSS approach.
We find, that soft (non-perturbative) gluon emissions play a significant role in inclusive parton distributions, and those emissions are an essential part of the \(\overline{\rm MS}\)-scheme: neglecting those would lead to non-cancellation of important singular pieces.
Figure 5: Transverse momentum (left) and rapidity (right) distributions of particles for different values of \(q_{0}\). The bump at large \(p_{\rm T}\) comes from the muons of the \(Z\)-boson decay.
With the requirement to describe and fit inclusive distributions, like the inclusive DIS cross-section, the non-perturbative Sudakov form factor is constrained and determined, once the factorization and evolution scheme for collinear parton densities is fixed (i.e. the choice of scale for the evolution of the strong coupling \(\alpha_{s}\), which implies also a different form of the non-perturbative Sudakov form factor). The non-perturbative Sudakov form factor, fixed from inclusive distributions, plays an important role in the description of the Drell-Yan transverse momentum spectrum.
We have investigated the effect of soft gluons for the transverse momentum spectrum of emitted partons and hadrons. While soft gluons are essential to describe the complete parton density and the low \(p_{\rm T}\)-spectrum of, for example, Drell-Yan lepton pairs, the effect of soft gluons on the spectra of produced particles (and jets) is negligible.
## 6 Appendix: The Sudakov form factor
The DGLAP splitting functions in the evolution equation eq.(1) can be written in different forms, where the plus-prescription acts on different terms‡:
Footnote ‡: In the derivation we only consider \(\alpha_{s}(\mu)\)
\[P_{ab}(z,\alpha_{s}) = P(z,\alpha_{s})_{+} \tag{9}\] \[= D_{ab}(\alpha_{s})\delta(1-z)+K_{ab}(\alpha_{s})\frac{1}{(1-z)_{ +}}+R_{ab}(z,\alpha_{s}) \tag{10}\]
The plus prescription is given by:
\[\int_{0}^{1}dz\frac{\phi(z)}{(1-z)_{+}} = \int_{0}^{1}dz\frac{\phi(z)-\phi(1)}{(1-z)} \tag{11}\] \[= \lim_{\epsilon\to 0}\int_{0}^{1-\epsilon}dz\frac{\phi(z)}{(1-z)}- \lim_{\epsilon\to 0}\int_{0}^{1-\epsilon}dz\frac{\phi(1)}{(1-z)}\, \tag{12}\]
where the last line is used for expanding the plus-prescription. Applying eq.(9) and eq.(12) leads to:
\[\mu^{2}\frac{\partial f_{a}(x,\mu^{2})}{\partial\mu^{2}} = \sum_{b}\int_{x}^{1}\frac{dz}{z}\ P_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z},\mu^{2}\right) \tag{13}\] \[= \sum_{b}\int_{0}^{1}\frac{dz}{z}\ P_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z}\right)-\sum_{b}\int_{0}^{x}\frac{dz}{z}\ P_{ab}(z)\ f_{b}\left(\frac{x}{z}\right)\] (14) \[= \sum_{b}\int_{0}^{1}dz\ \hat{P}_{ab}\left(z\right)\left(\frac{1}{z}\ f_{b}\left(\frac{x}{z}\right)-f_{b}(x)\right)-\sum_{b}\int_{0}^{x}\frac{dz}{z}\ \hat{P}_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z}\right)\] (15) \[= \sum_{b}\int_{x}^{1}\frac{dz}{z}\ \hat{P}_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z}\right)-f_{a}(x)\sum_{b}\int_{0}^{1}dz\ \hat{P}_{ab}\left(z\right)\] (16) \[= \sum_{b}\int_{x}^{1}\frac{dz}{z}\ \hat{P}_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z}\right)+f_{a}(x)\frac{\mu^{2}}{\Delta_{a}}\frac{\partial\Delta_{a}}{\partial\mu^{2}} \tag{17}\]
where we dropped the \(\alpha_{s}\) and \(\mu^{2}\) dependence for better readability. The unregularized splitting function is denoted by \(\hat{P}\) (without the \(D_{ab}\) piece, and replacing \(1/(1-z)_{+}\) by \(1/(1-z)\)), and the Sudakov form factor \(\Delta_{a}(\mu^{2})\) is given by:
\[\Delta_{a}(\mu^{2})=\exp\left(-\sum_{b}\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\mu ^{\prime 2}}{\mu^{\prime 2}}\int_{0}^{1-\epsilon}dz\hat{P}_{ab}(\alpha_{s},z)\right) \tag{18}\]
With this, the evolution equation for \(f_{a}\left(x,\mu^{2}\right)/\Delta_{a}(\mu^{2})\) is given by (see [10]):
\[\mu^{2}\frac{\partial}{\partial\mu^{2}}\frac{f_{a}(x,\mu^{2})}{\Delta_{a}(\mu ^{2})}=\sum_{b}\int_{x}^{1}\frac{dz}{z}\ \hat{P}_{ab}\left(z\right)\ \frac{f_{b}\left(\frac{x}{z},\mu^{2}\right)}{\Delta_{a}(\mu^{2})} \tag{19}\]
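Before turning to the second form, a quick numerical sanity check of the expansion in eq.(12) is instructive: the two cutoff integrals diverge separately, but their difference converges as \(\epsilon\to 0\). A minimal sketch with the test function \(\phi(z)=z^{2}\) (chosen purely for illustration) follows.

```python
# Sanity check of the expansion in eq.(12): for phi(z) = z^2 the two cutoff
# integrals diverge separately, but their difference tends to the exact value
# int_0^1 (z^2 - 1)/(1 - z) dz = -3/2 as eps -> 0.
import math
from scipy.integrate import quad
phi = lambda z: z * z
for eps in (1e-2, 1e-4, 1e-6):
    a = quad(lambda z: phi(z) / (1.0 - z), 0.0, 1.0 - eps)[0]
    b = phi(1.0) * math.log(1.0 / eps)     # int_0^{1-eps} dz/(1-z)
    print(f"eps = {eps:.0e}: {a - b:.6f} (exact -1.5)")
```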
If instead we start from eq.(10), and apply the same steps as above, we obtain:
\[\mu^{2}\frac{\partial f_{a}(x,\mu^{2})}{\partial\mu^{2}} = \sum_{b}\int_{x}^{1}\frac{dz}{z}\ P_{ab}\left(z\right)\ f_{b}\left(\frac{x}{z},\mu^{2}\right) \tag{20}\] \[= \sum_{b}\int_{0}^{1}\frac{dz}{z}\ \left(D_{ab}\delta(1-z)+K_{ab}\frac{1}{(1-z)_{+}}+R_{ab}(z)\right)\ f_{b}\left(\frac{x}{z}\right)\] \[= \sum_{b}\int_{0}^{1}\frac{dz}{z}\left(\frac{K_{ab}}{1-z}+R_{ab}(z)\right)f_{b}\left(\frac{x}{z}\right)\] (22) \[-f_{a}(x)\int_{0}^{1}dz\left(\frac{k_{a}}{(1-z)}-d_{a}\delta(1-z)\right)\]
The Sudakov form factor \(\Delta_{a}^{S}(\mu^{2})\) is now defined as (see eq.(4)):
\[\Delta_{a}^{S}(\mu^{2})=\exp\left(-\int_{\mu_{0}^{2}}^{\mu^{2}}\frac{d\mu^{ \prime 2}}{\mu^{\prime 2}}\int_{0}^{1-\epsilon}dz\left[\frac{k_{a}}{(1-z)}-d_{a} \delta(1-z)\right]\right)\, \tag{24}\]
leading to an evolution equation for \(f_{a}\left(x,\mu^{2}\right)/\Delta_{a}^{S}(\mu^{2})\).
|
2306.00146 | Supercurrent through a single transverse mode in nanowire Josephson
junctions | Hybrid superconductor-semiconductor materials are fueling research in
mesoscopic physics and quantum technology. Recently demonstrated smooth
$\beta$-Sn superconductor shells, due to the increased induced gap, are
expanding the available parameter space to new regimes. Fabricated on
quasiballistic InSb nanowires, with careful control over the hybrid interface,
Sn shells yield measurable switching currents even when nanowire resistance is
of order 10kohm. In this regime Cooper pairs travel through a purely 1D quantum
wire for at least part of their trajectory. Here, we focus on the evolution of
proximity-induced supercurrent in magnetic field parallel to the nanowire. Long
decay up to fields of 1T is observed. At the same time, the decay for higher
occupied subbands is notably faster in some devices but not in others. We
analyze this using a tight-binding numerical model that includes the Zeeman,
orbital and spin-orbit effects. When the first subband is spin polarized, we
observe a dramatic suppression of supercurrent, which is also confirmed by the
model and suggests an absence of significant triplet supercurrent generation. | B. Zhang, Z. Li, H. Wu, M. Pendharkar, C. Dempsey, J. S. Lee, S. D. Harrington, C. J. Palmstrom, S. M. Frolov | 2023-05-31T19:37:18Z | http://arxiv.org/abs/2306.00146v2 | # Supercurrent through a single transverse mode in nanowire Josephson junctions
###### Abstract
Hybrid superconductor-semiconductor materials are fueling research in mesoscopic physics and quantum technology. Recently demonstrated smooth \(\beta\)-Sn superconductor shells, due to the increased induced gap, are expanding the available parameter space to new regimes. Fabricated on quasiballistic InSb nanowires, with careful control over the hybrid interface, Sn shells yield critical current-normal resistance products exceeding temperature by at least an order of magnitude even when nanowire resistance is of order \(10\)k\(\Omega\). In this regime Cooper pairs travel through a purely 1D quantum wire for at least part of their trajectory. Here, we focus on the evolution of supercurrent in magnetic field parallel to the nanowire. Long decay up to fields of 1T is observed. At the same time, the decay for higher occupied subbands is notably faster in some devices but not in others. We analyze this using a tight-binding numerical model that includes the Zeeman, orbital and spin-orbit effects. When the first subband is spin polarized, we observe a dramatic suppression of supercurrent, which is also confirmed by the model and suggests an absence of significant triplet supercurrent generation.
## Introduction
_Context_. Semiconductor nanowire-based Josephson junctions (JJs) have been explored as elements for superconducting transmon qubits [1; 2] and Andreev qubits [3]. The same one-dimensional (1D) super-semi hybrid system also fulfills basic requirements for emerging topological superconductivity and Majorana bound states (MBS) at nanowire ends [4; 5; 6]. Attempts at exploring MBS in nanowire JJs have proceeded through the search for the fractional a.c. Josephson effect [7; 8; 9]. Topological qubits based on nanowire JJs containing MBS have been proposed theoretically [10; 11]. In the simplest and purest form, Majorana modes are envisioned in a single-subband nanowire.
_Previous work: 1D QPC in nanowires_. In JJs, a quantum point contact (QPC) can be used to accurately investigate the microscopic electrical transport properties of the Josephson current [12]. In two-dimensional electron gases (2DEG), the correlation between normal-state conductance and the magnitude of the supercurrent has been studied using QPCs defined by a piezo element [13], local electrostatic gate voltages [14], and physical confinement to atomic point contacts [15]. With well-established normal-state conductance quantized in units of \(2\)e\({}^{2}\)/h, a doubling of conductance is observed in the open regime of an N-S device [16], and JJs involving a single ballistic channel have been demonstrated [17].
On the other hand, QPCs are extensively employed to study supercurrents through nanowire-based JJs. InAs nanowires with various kinds of superconductor leads have established transparent contacts featuring quantized normal conductance [18; 19; 20; 21]. When the junction length is significantly smaller than the coherence length (L = 30 nm \(\ll\)\(\xi\)), the supercurrent in InAs has been shown to reach the theoretically expected value [22].
_Challenge: Small IcRn in current nanowire JJs_. While studies of supercurrents in nanowires or point contacts have led to an array of interesting discoveries so far [19; 20; 23; 24; 25; 26], one outstanding challenge remaining is that supercurrent in the last occupied transverse mode (single subband) is strongly suppressed. It is either not observed or provides a signal too low to enable detailed studies [17; 18; 20; 22]. The reasons for this are not fully understood; however, they are likely related to finite interface transparency (such as in devices involving ex-situ shell deposition), a smaller induced gap (such as in Al-InAs structures), residual scattering, or other effects.
_Approach_. Our approach is to combine InSb nanowires with Sn shells. Through advances in vapor-liquid-solid
Figure 1: (a) Schematic of a shadow nanowire Josephson Junction device. The wire is along \(\hat{x}\) which is also the direction of external magnetic field \(B_{x}\). (b) Cartoon of a single-mode junction controlled by electrostatic gates \(V_{g1}\) and \(V_{g2}\). The leads are shown with higher subbands occupied. The lower panel corresponds to a chemical potential profile \(\mu\)(x) used in tight-binding simulations.
growth, quasi-ballistic InSb nanowires were achieved [27; 28]. Quantized non-superconducting conductance has been established in InSb nanowires with normal metal electrodes [29; 30]. The recently introduced superconducting Sn shells facilitate transparent contacts, and critical current-normal state resistance products exceeding temperature and those previously available in Al-based nanowire junctions [31]. Junctions in the Sn shell are defined on InSb nanowires by nanowire shadowing, which reduces processing and increases the likelihood of ballistic devices. Junctions made with InSb nanowires and NbTiN leads are studied with the same method, and the measurement results are compared with those for Sn-InSb devices in the supplementary materials.
_Results list_. Subband-resolved transport is verified through measurements of conductance at finite bias and its evolution with gate voltage, source-drain bias voltage and magnetic field. In the gate voltage and conductance range that corresponds to the first occupied transverse mode, we observe supercurrents as high as 20 nA. We investigate the gate voltage and magnetic field evolution of supercurrents in several nanowire devices both in the single-mode and in the multi-mode regimes. The mechanisms of decay of supercurrent with magnetic field are studied, including the relative contributions of orbital interference phenomena, residual disorder, spin-orbit and Zeeman effects, by comparing the data to a tight-binding model. The spin polarization of the first subband at finite field suppresses supercurrent dramatically, leading us to conclude that no triplet supercurrent is generated.
Fig. 1(a) presents a schematic diagram of the nanowire Josephson junction device. An InSb nanowire (blue) is half-covered by a Sn shell (silver) and positioned above local gate electrodes (gold), with Ti/Au contacts (gold). In order to prepare the junction itself, standing InSb nanowires approximately 100 nm in diameter are coated with a 15 nm layer of Sn [31]. In front of the nanowire, another nanowire shadows the flux of Sn to create two disconnected Sn segments. Nanowires with such shadow-defined junctions are transferred onto chips patterned with local electrostatic gate electrodes covered by HfOx dielectric. Contacts to wires are made using standard electron beam lithography and thin film deposition of Ti/Au layers.
The supercurrent flows along the \(\hat{x}\) direction, and an external magnetic field, \(B_{x}\), is applied parallel to the wire. A current bias, \(I_{bias}\), is applied across the device (illustrated by a black arrow), and the voltage across the device is measured using DC and AC multimeters in a two-point measurement setup. Two local bottom gates, \(V_{g1}\) and \(V_{g2}\), are located beneath the junction region. Measurements are performed in a dilution refrigerator with a base temperature of \(\sim\)50 mK equipped with a 3D vector magnet.
Figure 1(b) uses cartoons to demonstrate the control of transverse mode numbers in the nanowire by local bottom gates during experiments, as well as the definition of local chemical potential in simulations. In the illustration, gate voltage \(V_{g1}\) precisely adjusts the number of conduction channels, resulting in a single occupied transverse mode in the region labeled QPC (quantum point contact). Meanwhile, gate voltage \(V_{g2}\) tunes one of the adjacent regions, which can have a higher subband occupancy. To emulate the realistic conditions in numerical simulations, we use two chemical potential \(\mu\) settings, one for the leads and one for the QPC.
In Fig. 2(a) we plot two gate traces of conductance, one at zero magnetic field and one at large magnetic field (B=8T). The zero-field trace contains non-monotonic resonances due to quantum interference caused by backscattering, as well as charge jumps. This is in line with previous reports of quantum point contact behavior in nanowires [29; 30]. Backscattering can be suppressed by a large magnetic field; therefore, the high magnetic field trace demonstrates a sequence of spin-resolved plateaus
Figure 2: (a) Differential conductance \(G(dI/dV)\) taken at finite bias \(V_{bias}\)=2mV at B=0T and at zero bias for B=8T. (b) Current bias measurement of differential resistance \(R(dV/dI)\) showing the evolution of supercurrent at B=0T. In this figure \(V_{g2}=5\)V. Green dashed line is used to indicate the approximate boundary between the first mode and higher modes based on conductance. Panels (a) and (b) are from separate measurements on Device A.
at \(G_{0}/2=1\times e^{2}/h\) values. Using the high magnetic field trace as reference, we approximately identify the gate voltage interval that corresponds to the single occupied mode at B=0 (\(V_{g1}<1.1\) V, green dashed line), around a conductance value of \(G_{0}=2e^{2}/h\). For the data shown, we estimate the highest conductance to be \(6G_{0}\), corresponding to a maximum of 6 transverse modes. Comprehensive evidence of QPC behavior in device A is obtained from the magnetic field evolution of finite-voltage bias conductance maps for various gate voltage settings, which demonstrate diamond-shaped regions of relatively flat conductance in bias-gate space, and Zeeman splitting of these plateaus (see supplementary materials and Fig. 3).
In Fig. 2(b) we plot the current bias data that closely corresponds to the conductance traces from panel (a). The supercurrent appears as dark-blue regions around zero bias. On the left side of the green line, for more negative gate voltages, supercurrent is carried by the first transverse mode. The magnitude of the first mode switching current \(I_{sw}\) in these and other data is from 10 to 20 nA. Given the normal state resistance around 13 k\(\Omega\) (\(1G_{0}\)), the \(I_{sw}R_{N}\) product falls within the range of 150-250 \(\mu\)eV (Fig. S13). This value is somewhat suppressed compared to the more open regime, but is of the same order as the superconducting gap of Sn (\(\Delta=650\)\(\mu\)eV) and is consistent with values reported in previous studies [26; 31]. These signal levels allow for a deeper investigation of supercurrent in the few-subband regime.
The first question we investigate is the evolution of supercurrent as spin polarization develops in the QPC region, when the plateau \(G_{0}/2=1\times e^{2}/h\) develops at finite magnetic field. The emergence of the plateau is demonstrated in Fig. 3(a) in the transconductance map - the characteristic "V"-shaped region corresponds to the spin-polarized plateau. Spin polarization becomes resolved at fields between 0.5-1.0T, while supercurrents are generally observed up to 2T. This allows for the study of the effect of spin polarization on supercurrent.
Fig. 3(b) shows supercurrents extracted from gate voltage sweeps at fields between 0 and 2T. A switching current \(I_{sw}\) is the current at which a finite voltage develops across the junction; it may or may not closely follow the Josephson critical current, which is a measure of the Josephson coupling energy. In data processing, the switching current is defined as the \(I_{bias}\) for which differential resistance exceeds 2k\(\Omega\). We superimpose the boundary of the spin-polarized region, extracted from panel (a) using peak finding (green squares). Note that panel (a) is at \(V_{bias}=2\ mV\) while panel (b) is at zero voltage, meaning the boundaries may be shifted. To quantify the decay rate of \(I_{sw}\) in magnetic fields and compare it for different gate voltages, we normalize the magnitude of the switching current as \(NorI_{sw}(B,V)=I_{sw}(B,V)/I_{sw}(B=0,V)\) and plot it as a function of \(B_{x}\) and \(V_{g}\).
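For concreteness, this extraction and normalization can be sketched in a few lines of Python. The 2 k\(\Omega\) threshold follows the text, while the array layout (field along the first axis of the map) is an assumption of this illustration, not a description of our actual analysis code.

```python
import numpy as np

def switching_current(i_bias, dv_di, r_threshold=2e3):
    """Return I_sw: the smallest positive bias at which the differential
    resistance dV/dI exceeds r_threshold (2 kOhm here)."""
    above = (i_bias > 0) & (dv_di > r_threshold)
    return i_bias[above].min() if above.any() else np.nan

def normalized_isw(isw_map, b_axis):
    """Normalize I_sw(B, Vg) column-wise by its zero-field value:
    NorI_sw(B, V) = I_sw(B, V) / I_sw(B=0, V)."""
    b0 = np.argmin(np.abs(b_axis))   # index of the field closest to B = 0
    return isw_map / isw_map[b0, :]
```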
Overall, the first mode supercurrent exhibits a slow, long decay up to \(B_{x}=1\)T, eventually disappearing as the system transitions to the normal state. However, we found no supercurrent within the spin-up band, or at least the signal is rapidly suppressed in that region. At the same time, supercurrent is found in the region of the phase diagram directly adjacent to the spin-polarized "V". This serves as an additional confirmation that it corresponds to the single subband regime with two spin channels.
We conduct a numerical investigation of the system's microscopic properties using a tight-binding model that mirrors the experimental geometry, as shown in Fig. 1(b). This model was designed to study supercurrent interference in nanowires using KWANT [25; 32; 33]. It includes the effects of spin-orbit interaction, the orbital vector potential, Zeeman splitting, electron temperature, and on-site disorder. To reproduce the quantized conductance results observed in our experiment, no disorder is considered here (\(\delta U=40\,meV\)), as in the initial version of the model [25].
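A minimal KWANT sketch of such a model is shown below for orientation only. It implements just a spinful 1D chain with Rashba-type hopping, a Zeeman term along the wire, and the stepwise chemical potential profile of Fig. 1(b); all parameter values are toy numbers, and the orbital vector potential, temperature, and the supercurrent calculation itself (which requires superconducting leads and a phase difference) are omitted.

```python
import numpy as np
import kwant

s0 = np.identity(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])

a, t, alpha = 1.0, 1.0, 0.05      # lattice constant, hopping, SOI (toy units)
L, qpc = 100, (40, 60)            # wire length and QPC site range (assumed)

def onsite(site, e_z):
    """2t - mu(x) on the diagonal plus a Zeeman term along the wire."""
    i = site.tag[0]
    mu = 5.0 if qpc[0] <= i < qpc[1] else 20.0  # mu_QPC vs mu_lead, cf. Fig. 1(b)
    return (2 * t - mu) * s0 + e_z * sx

def lead_onsite(site, e_z):
    return (2 * t - 20.0) * s0 + e_z * sx       # leads kept at mu_lead

def hopping(site1, site2):
    return -t * s0 + 1j * alpha * sy            # Rashba-like spin-orbit hopping

lat = kwant.lattice.chain(a, norbs=2)
syst = kwant.Builder()
syst[(lat(i) for i in range(L))] = onsite
syst[lat.neighbors()] = hopping

lead = kwant.Builder(kwant.TranslationalSymmetry((-a,)))
lead[lat(0)] = lead_onsite
lead[lat.neighbors()] = hopping
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())
fsyst = syst.finalized()

# Normal-state conductance at a given Zeeman energy (in units of e^2/h):
G = kwant.smatrix(fsyst, energy=0.0, params=dict(e_z=0.1)).transmission(1, 0)
```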
The new aspect here is the varied chemical potential along the nanowire, allowing for the possibility that the number of occupied subbands is not constant throughout
the device. For the results in Fig. 3(c) we set the chemical potentials in both leads to \(\mu_{lead}=20meV\), corresponding to a few occupied subbands (\(\approx\)6); we then vary \(\mu_{QPC}\). The boundary between the lowest spinful and spin-polarized modes is extracted from the conductance calculation and plotted alongside the normalized \(I_{c}\) in Fig. 3(d), in a manner similar to how the experimental data are presented. The electron spin in each mode is labeled with white arrows. It should be noted that the values in the simulation do not exactly correspond to those in the experimental device but serve as indicators. For example, the boundary between \(G_{0}\) and \(1.5G_{0}\) converges at \(B_{x}=0.9T\) in the simulation, whereas they never intersect in the experiment, even up to \(B_{x}=8T\). In the \(1.5G_{0}\) mode, we find that the supercurrent persists to a larger field compared to the first mode supercurrent, which is not observed experimentally but is likely due to microscopic details of the simulation.
In agreement with the experimental finding, the normalized \(I_{c}\) does not survive in the spin-polarized regime at \(0.5G_{0}\). This indicates that the model correctly captures the system and does not predict triplet supercurrent under the most basic conditions. Experiments on superconductor-ferromagnet-superconductor structures found that in junctions with pristine interfaces supercurrent vanishes in shorter junctions, while, counterintuitively, it survives in longer junctions when the interfaces are disordered [34]. A disordered interface is believed to facilitate spin flipping into a triplet state and flipping back to a singlet at the second interface. It is worth investigating whether interface roughness, in combination with spin-orbit interaction and/or magnetic impurities, could extend supercurrents into the spin-polarized regime in superconductor-semiconductor junctions, hinting at the generation of triplet supercurrents.
Earlier work on nanowires suggested that the single-mode supercurrents are unique, because without other occupied subbands there is no inter-subband interference. This interference was associated with rapid decay of supercurrents in magnetic field, and therefore a slower decay is expected for a single-subband junction [25].
In Fig. 4 we show switching current maps in gate-field space from three devices. Device A is the source of data in Figs. 2 and 3, where the few mode regime was carefully explored. Devices B and C are fabricated using the same Sn-InSb nanowires and in the same geometry (see supplementary materials for device images and additional data). While conductance steps are observed, the QPC evidence is less comprehensive in devices B and C. On the other hand, B and C are studied further into the more open regime. The number of occupied subbands is estimated from conductance and indicated above the figures.
In all three junctions, supercurrents grow at more pos
Figure 4: (a)-(c) Magnitude of switching current \(I_{sw}\) extracted for gate scans and plotted as a function of gate voltages \(V_{g}\) and parallel external field \(B_{x}\) measured in Device A, B, and C. (d) Additional data for Fig. 3(b) (Device A) is presented, showcasing normalized \(I_{sw}\) at higher gate voltages (more modes). (e) and (f): Normalized \(I_{sw}\) as a function of \(B_{x}\) and \(V_{g}\) is measured in Device B and Device C. The boundary between different subbands are extracted at zero field and indicated by the yellow dashed line. The number of the highest transverse mode being occupied is labeled above each panel.
itive gate voltages that correspond to higher subband occupations [Figs. 4(a)(b)(c)]. In magnetic field, signal is observed up to, and occasionally beyond, B=2T. The magnetic field decay rate of switching current can be explored in maps normalized to zero-field values [Figs. 4(d)(e)(f)]. Devices B and C clearly exhibit a more rapid relative decay of switching current in the multi-mode regime, compared with the 1-2 mode regime. The rapid decay takes place at fields below approximately 0.5T, followed by a persistent lower signal. In the single-mode regime, no rapid decay is observed, yet the overall signal level is smaller and comparable to that seen at higher fields in the multi-mode regime (see supplementary information for detailed magnetic field dependences of current-voltage characteristics for all three devices). Results in device A do not extend to more modes, but the available data are not in contradiction with the findings from B and C.
### Alternative explanations for magnetic field decay rates
We discuss the decay rates of supercurrent in magnetic field from the experimental and theoretical points of view. On the experimental side, one concern that should be mentioned is that smaller switching currents can deviate substantially from the actual Josephson critical currents. At the same time, in these junctions significant \(I_{sw}R_{N}\) products exceeding temperature by an order of magnitude are not expected to result in significant premature switching. Another experimental concern is that the shapes of \(I_{sw}(B)\) can be affected by the presence of finite voltage resonances, such as those due to Multiple Andreev Reflections (MAR) frequently observed in these junctions, leading to peculiar non-monotonic field dependences (see supplementary information for examples, and future work for an in-depth study).
On the theoretical side, we started with a basic assumption of only a single mode occupied in the device. While this may be the case in the middle of the junction, the leads may have higher subband occupations. Interference between supercurrents carried by different subbands can take place in the leads and result in faster decay. The earlier model did not include this consideration, so we address it here.
Using KWANT, we numerically calculate the evolution of \(I_{c}\) in the parallel field \(B_{x}\) [Fig. 5]. The chemical potential in the leads and junction region is locally adjusted to 5 meV (one mode) or 20 meV (six modes). Spin-orbit interaction and the Zeeman effect are present in all calculations. The effects of small disorder are explored in the supplementary information. In agreement with earlier results [25], the model demonstrates the slowest decay when both the leads and the QPC region are set to one occupied subband. The decay is more rapid when the leads are set to 6 subbands, and opening the QPC to 6 subbands accelerates the decay further. The difference between the QPC set to 1 and to 6 subbands is not dramatic when the leads are open.
Experimentally it is much more challenging to realize a device where the entire nanowire is in the single subband regime, compared with realizing a short QPC region. Supplementary information shows more maps for different settings of \(V_{g2}\) for device A. More negative settings of \(V_{g2}\) result, in principle, in a lower subband occupation in one of the leads. While the decay rates become more uniform at negative \(V_{g2}\), the limited range of \(V_{g1}\) and the inability to tune the second lead region prevent us from concluding on the origins of this effect and whether the numerical simulations explain it. Since single-mode nanowires are desirable for Majorana zero mode experiments, future work should focus on realizing this regime and the insights obtained in the present work may be helpful.
## Acknowledgements
### Funding
**Data Availability**
Curated library of data extending beyond what is presented in the paper, as well as simulation and data processing code are available at [35].
**Duration and Volume of Study**
This project was started in June 2018 and ended in December 2022. The simulation analysis phase was completed in March 2023. Within this report, a total of six devices were studied. Two of these devices were fabricated using pure InSb nanowires, with ex-situ deposited
Figure 5: Normalized critical current as a function of parallel field \(B_{x}\) for different combinations of terms in the Hamiltonian of the numerical model. The Zeeman effect (\(g=50\)) and spin-orbit strength \(\alpha=100nm\cdot meV\) are present in all curves. The local chemical potential and corresponding number of conduction channels in the leads and QPC region are illustrated with a small diagram in each panel. \(\mu=5\) and 20 meV result in one and six spin-full transverse modes in the nanowires, respectively. The other simulation parameters are consistent with those used in Fig. 3 (b) and (d).
NbTiN leads, and were fabricated in 2018. The remaining four devices were fabricated using nanowires that are reported in Ref. [31]. For the purpose of this project, 59 devices across 8 chips were fabricated and measured during 11 cooldowns in dilution refrigerators. These measurements yielded approximately 5900 datasets, of which around 2700 datasets were shared with the project reported in Ref. [26].
|
2310.20570 | Correlation-pattern-based Continuous-variable Entanglement Detection
through Neural Networks | Entanglement in continuous-variable non-Gaussian states provides
irreplaceable advantages in many quantum information tasks. However, the sheer
amount of information in such states grows exponentially and makes a full
characterization impossible. Here, we develop a neural network that allows us
to use correlation patterns to effectively detect continuous-variable
entanglement through homodyne detection. Using a recently defined stellar
hierarchy to rank the states used for training, our algorithm works not only on
any kind of Gaussian state but also on a whole class of experimentally
achievable non-Gaussian states, including photon-subtracted states. With the
same limited amount of data, our method provides higher accuracy than usual
methods to detect entanglement based on maximum-likelihood tomography.
Moreover, in order to visualize the effect of the neural network, we employ a
dimension reduction algorithm on the patterns. This shows that a clear boundary
appears between the entangled states and others after the neural network
processing. In addition, these techniques allow us to compare different
entanglement witnesses and understand their working. Our findings provide a new
approach for experimental detection of continuous-variable quantum correlations
without resorting to a full tomography of the state and confirm the exciting
potential of neural networks in quantum information processing. | Xiaoting Gao, Mathieu Isoard, Fengxiao Sun, Carlos E. Lopetegui, Yu Xiang, Valentina Parigi, Qiongyi He, Mattia Walschaers | 2023-10-31T16:00:25Z | http://arxiv.org/abs/2310.20570v1 | # Correlation-pattern-based Continuous-variable Entanglement Detection through Neural Networks
###### Abstract
Entanglement in continuous-variable non-Gaussian states provides irreplaceable advantages in many quantum information tasks. However, the sheer amount of information in such states grows exponentially and makes a full characterization impossible. Here, we develop a neural network that allows us to use correlation patterns to effectively detect continuous-variable entanglement through homodyne detection. Using a recently defined stellar hierarchy to rank the states used for training, our algorithm works not only on any kind of Gaussian state but also on a whole class of experimentally achievable non-Gaussian states, including photon-subtracted states. With the same limited amount of data, our method provides higher accuracy than usual methods to detect entanglement based on maximum-likelihood tomography. Moreover, in order to visualize the effect of the neural network, we employ a dimension reduction algorithm on the patterns. This shows that a clear boundary appears between the entangled states and others after the neural network processing. In addition, these techniques allow us to compare different entanglement witnesses and understand their working. Our findings provide a new approach for experimental detection of continuous-variable quantum correlations without resorting to a full tomography of the state and confirm the exciting potential of neural networks in quantum information processing.
_Introduction.--_The study of quantum entanglement is experiencing a thorough theoretical development and an impressive experimental progress [1; 2], leading to important applications in quantum cryptography [3], quantum metrology [4] and quantum computation [5]. It is, therefore, crucial to find reliable and practical methods to detect entanglement. Especially in the continuous variable (CV) regime, significant breakthroughs have recently been achieved in the experimental preparation of non-Gaussian entanglement [6; 7]. Such entangled states have been proven to be an essential resource for entanglement distillation [8; 9; 10], quantum-enhanced imaging [11; 12] and sensing [13; 14], and to reach a quantum computational advantage [15]. However, entanglement detection in such complex systems turns out to be a challenging problem.
The conventional entanglement criteria which rely on knowledge of a reconstructed density matrix, such as the positive partial transpose (PPT) criterion [16] or the quantum Fisher information (QFI) criterion proposed in Ref. [17], are experimentally infeasible for general non-Gaussian states. A natural goal is therefore to avoid the time-consuming process of two-mode tomography [18], which requires performing joint measurements on many quadrature combinations [19]. Only for some states with specific analytic structures [20] can this demanding procedure be simplified to a series of single-mode homodyne tomographies [21]. An innovative approach to overcome this issue is provided by deep neural networks [22], which can work with limited amounts of data from actual measurements. Recently, neural networks have found extensive applications in quantum physics and quantum information science [23; 24], including detecting quantum features [25; 26; 27; 28], characterizing quantum states and quantum channels [29; 30; 31; 32; 33; 34; 35], and solving many-body problems [36; 37; 38]. A key step thus lies in selecting an appropriate training data set to ensure that the networks can effectively and universally learn the features of the quantum system. Keeping our focus on the homodyne measurements which are feasible in CV experiments, we seek to answer the following question in this paper: Can neural networks be used to detect entanglement for general non-Gaussian states?
In this work, we develop a deep learning algorithm to detect entanglement for arbitrary two-mode CV states, including experimentally relevant photon subtracted states [7]. Instead of extracting entanglement properties from the density matrices, our neural network is only based on four correlation patterns, which can be easily measured via homodyne measurements. It can be found that our algorithm achieves much higher accuracy than PPT and QFI criteria based on the maximum likelihood (MaxLik) algorithm with the same homodyne data. Our network also shows strong robustness for single-photon subtracted states. Furthermore, with a visualization tool, namely a t-SNE algorithm, we show that elusive entanglement information is hidden in the correlation patterns, hence in the joint probability distributions. It can be seen that the neural network is indeed able to correctly sort out data: clusters of entangled states clearly emerge after neural network processing. Therefore, our findings
provide an approach for detecting CV entanglement in experimentally-accessible ways and confirm the deep neural network approach as a powerful tool in quantum information processing.
_Generation and selection of CV states._--We start by generally characterizing two-mode CV quantum states in order to generate an abundant training data set. To do so, we rely on the recently developed stellar formalism [39, 40, 41]. In this formalism, we analyse a pure state \(|\psi\rangle\) in terms of its stellar function \(F_{\psi}^{*}(\mathbf{z})\). To define this function, we start by considering the state's decomposition in the Fock basis, i.e., \(|\psi\rangle=\sum\limits_{\mathbf{n}\geq 0}\psi_{\mathbf{n}}|\mathbf{n}\rangle\in\mathcal{H}^{\otimes 2}\) with \(\mathbf{n}=(n_{1},n_{2})\), such that the stellar function can be written as
\[F_{\psi}^{*}(\mathbf{z})\equiv e^{\frac{1}{2}\|\mathbf{z}\|^{2}}\,\langle\mathbf{z}^{*}|\psi\rangle=\sum\limits_{n_{1},n_{2}}\frac{\psi_{\mathbf{n}}}{\sqrt{n_{1}!n_{2}!}}z_{1}^{n_{1}}z_{2}^{n_{2}},\ \ \forall\ \mathbf{z}=(z_{1},z_{2})\in\mathbb{C}^{2} \tag{1}\]
where \(|\mathbf{z}\rangle=\mathrm{e}^{-\frac{1}{2}\|\mathbf{z}\|^{2}}\sum\limits_{n_{1},n_{2}}\frac{z_{1}^{n_{1}}z_{2}^{n_{2}}}{\sqrt{n_{1}!\,n_{2}!}}|n_{1},n_{2}\rangle\) is a coherent state of complex amplitude \(\mathbf{z}\). The stellar rank \(r\) of \(|\psi\rangle\) is defined as the number of zeros of its stellar function, representing a minimal non-Gaussian operational cost. For instance, \(r=0\) means that the state is Gaussian, while \(r=1\) corresponds to a class of non-Gaussian states that contains both single-photon added and subtracted states [41]. Any multimode pure state \(|\psi\rangle\) with finite stellar rank \(r\) can be decomposed into \(|\psi\rangle=\hat{G}|C\rangle\), where \(\hat{G}\) is a given Gaussian operator acting onto the state \(|C\rangle\), which is called a core state; it is a normalized pure quantum state whose stellar function is a multivariate polynomial of degree \(r\), equal to the stellar rank of the state. It then follows immediately that Gaussian operations \(\hat{G}\) must preserve the stellar rank [40].
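For a state given as a truncated table of Fock coefficients, the stellar function is just the polynomial above; a small sketch follows (locating its zeros, and hence the stellar rank, would additionally require polynomial root-finding, which we omit).

```python
import numpy as np
from math import factorial

def stellar_function(psi, z1, z2):
    """F*_psi(z) = sum_n psi[n1, n2] z1**n1 z2**n2 / sqrt(n1! n2!)
    for a two-mode state given by its (truncated) Fock coefficients."""
    n1max, n2max = psi.shape
    return sum(psi[n1, n2] * z1**n1 * z2**n2
               / np.sqrt(factorial(n1) * factorial(n2))
               for n1 in range(n1max) for n2 in range(n2max))
```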
We generate states \(\hat{\rho}\) in our data set by first creating a core state \(|C\rangle\) with a given stellar rank \(r\leq 2\) and random complex coefficients for the superposition in the Fock basis. Then, according to the Bloch-Messiah decomposition, any multimode Gaussian unitary operation \(\hat{G}\) can be decomposed as \(\hat{G}=\hat{\mathcal{U}}(\varphi)\hat{\mathcal{S}}(\xi)\hat{\mathcal{D}}( \alpha)\hat{\mathcal{V}}(\phi)\), where \(\hat{\mathcal{U}}\) and \(\hat{\mathcal{V}}\) are beam-splitter operators, \(\hat{\mathcal{S}}\) is a squeezing operator and \(\hat{\mathcal{D}}\) is a displacement operator. We choose random values for the parameters \(\varphi\), \(\xi\), \(\alpha\) and \(\phi\) of the different operators and apply the corresponding operation \(\hat{G}\) to the core state \(|C\rangle\) to produce the random state \(|\psi\rangle=\hat{G}|C\rangle\). Finally, by adding optical losses to simulate an experimental loss channel, a lossy two-mode state \(\hat{\rho}\) is generated. More details can be found in Appendix A.
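A hedged QuTiP sketch of this sampling pipeline is given below. The truncation dimension, the parameter ranges, and the uniform draws are illustrative assumptions, and the loss channel is omitted; the core state is built as a random superposition of Fock states with total photon number at most \(r\), which yields a polynomial stellar function of degree at most \(r\).

```python
import numpy as np
import qutip as qt

N = 10  # Fock truncation per mode (assumption; must be large enough)

def random_core_state(r):
    """Random superposition of |n1, n2> with n1 + n2 <= r."""
    c = 0 * qt.tensor(qt.basis(N, 0), qt.basis(N, 0))
    for n1 in range(r + 1):
        for n2 in range(r + 1 - n1):
            amp = np.random.randn() + 1j * np.random.randn()
            c += amp * qt.tensor(qt.basis(N, n1), qt.basis(N, n2))
    return c.unit()

def beamsplitter(theta):
    a1 = qt.tensor(qt.destroy(N), qt.qeye(N))
    a2 = qt.tensor(qt.qeye(N), qt.destroy(N))
    return (theta * (a1.dag() * a2 - a1 * a2.dag())).expm()

def random_state(r):
    """|psi> = U S D V |C>, cf. the Bloch-Messiah form in the text."""
    core = random_core_state(r)
    V = beamsplitter(np.random.uniform(0, np.pi))
    D = qt.tensor(qt.displace(N, 0.3 * np.random.randn()),
                  qt.displace(N, 0.3 * np.random.randn()))
    S = qt.tensor(qt.squeeze(N, 0.3 * np.random.randn()),
                  qt.squeeze(N, 0.3 * np.random.randn()))
    U = beamsplitter(np.random.uniform(0, np.pi))
    return U * S * D * V * core
```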
_Homodyne data._--The aim of our work is to feed our neural network with data that are directly accessible in experiments. To this goal, we focus on quadrature statistics, which in quantum optics experiments can be directly obtained through homodyne detection. The quadrature observables \(\hat{x}_{k}\) and \(\hat{p}_{k}\) in the modes \(k=1,2\) can be defined as the real and imaginary parts of the photon annihilation operator \(\hat{a}_{k}\) such that \(\hat{x}_{k}\equiv(\hat{a}_{k}+\hat{a}_{k}^{\dagger})\) and \(\hat{p}_{k}\equiv i(\hat{a}_{k}^{\dagger}-\hat{a}_{k})\). Homodyne detection then corresponds to a projective measurement on eigenstates of these quadrature operators. Hence, we define these eigenstates as \(\hat{x}_{k}|X_{k}\rangle=X_{k}|X_{k}\rangle\) and \(\hat{p}_{k}|P_{k}\rangle=P_{k}|P_{k}\rangle\), where \(X_{k}\) and \(P_{k}\) describe the continuum of real measurement outcomes for the quadrature measurements in the mode \(k\).
For any given state \(\hat{\rho}\), we can obtain the joint quadrature statistics \(\mathcal{P}(X_{1},X_{2})\equiv\langle X_{1};X_{2}|\hat{\rho}|X_{1};X_{2}\rangle\) (other joint statistics are defined analogously). Since the density matrix is known in its Fock basis decomposition, we can explicitly calculate the joint quadrature statistics as
\[\mathcal{P}(X_{1},X_{2})=\sum\limits_{n_{1},n_{2};\,n_{1}^{\prime},n_{2}^{\prime}}\rho_{n_{1},n_{2};n_{1}^{\prime},n_{2}^{\prime}}\,\langle X_{1}|n_{1}\rangle\langle X_{2}|n_{2}\rangle\langle X_{1}|n_{1}^{\prime}\rangle^{*}\langle X_{2}|n_{2}^{\prime}\rangle^{*}. \tag{2}\]
These quantities can be evaluated directly using the wave functions of Fock states, which are given by \(\langle X_{k}|n_{k}\rangle=H(n_{k},X_{k}/\sqrt{2})e^{-\frac{X_{k}^{2}}{4}}/[(2\pi )^{1/4}\sqrt{2^{n_{k}}n_{k}!}]\), where \(H(n_{k},X_{k}/\sqrt{2})\) denotes a Hermite polynomial. Other joint quadrature distributions can be calculated analogously, using \(\langle P_{k}|n_{k}\rangle=i^{n_{k}}H(n_{k},P_{k}/\sqrt{2})e^{-\frac{P_{k}^{2}}{4 }}/[(2\pi)^{1/4}\sqrt{2^{n_{k}}n_{k}!}]\).
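These expressions translate directly into code. The sketch below evaluates \(\langle X_k|n_k\rangle\) with SciPy's physicists' Hermite polynomials and assembles \(\mathcal{P}(X_1,X_2)\) from a Fock-basis density matrix stored as a four-index array `rho[n1, n2, n1p, n2p]`; the \(P\)-quadrature wave functions would carry the extra \(i^{n_k}\) factor and are omitted here.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

def fock_wavefunction(n, x):
    """<X|n> = H_n(X/sqrt(2)) exp(-X^2/4) / ((2 pi)^{1/4} sqrt(2^n n!))."""
    norm = (2 * np.pi) ** 0.25 * np.sqrt(2.0**n * factorial(n))
    return eval_hermite(n, x / np.sqrt(2)) * np.exp(-x**2 / 4) / norm

def joint_xx_distribution(rho, grid):
    """P(X1, X2) on a 1D quadrature grid, following Eq. (2)."""
    d = rho.shape[0]
    phi = np.array([fock_wavefunction(n, grid) for n in range(d)])  # phi[n, i]
    P = np.einsum('abcd,ai,bj,ci,dj->ij', rho, phi, phi, phi.conj(), phi.conj())
    return P.real
```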
_Training process._--The method described above to generate random states is repeated \(15,000\) times to obtain a set \(\varrho\) with a wide variety of density matrices. To evaluate the performance of the model on an independent data set, we use a cross-validation method to tune the hyperparameters of the model and split the entire data set into two parts: 70% for training and 30% for validation. For each density matrix \(\hat{\rho}\), four joint quadrature statistics \(\mathcal{P}(X_{1},X_{2})\), \(\mathcal{P}(X_{1},P_{2})\), \(\mathcal{P}(P_{1},X_{2})\), \(\mathcal{P}(P_{1},P_{2})\), see Eq. (2), and the corresponding output entanglement label \(\mathcal{E}_{\hat{\rho}}^{\text{True}}\) are calculated, see Fig. 1 for a scheme of the training data processing. Since the joint probability distributions are continuous over the whole phase space, they need to be discretized before we can feed the neural network with them. To that end, we restrict the region of phase space to values going from -6 to 6 and bin each distribution into a \(24\times 24\) correlation pattern. Every pixel is given by the median value of the joint probability distribution in the corresponding grid cell. For each state \(\hat{\rho}\), we thus obtain a \(24\times 24\times 4\)-dimensional tensor \(\mathcal{M}_{\hat{\rho}}\). Then for the full set of density matrices, \(\mathcal{M}_{\varrho}\) together with the entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) are used for training the neural network with the Adam optimization algorithm. As shown in Fig. 1, each node of
Figure 1: Scheme of the training data processing. The generation of the training data set begins with a series of random density matrices \(\varrho\). Then for each density matrix one generates 24\(\times\)24\(\times\)4-dimensional correlation patterns \(\mathcal{M}_{\varrho}\) as input data of the neural network. At the output, 3 entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) are computed from \(\varrho\) and fed into the neural network for training. The loss function is evaluated between the true entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) and the predicted labels \(\mathcal{E}^{\text{Pred}}\) output from the neural network.
the three hidden fully connected layers performs a unique linear transformation on the input vector from its previous layer, followed by a nonlinear ReLU activation function. The loss function is evaluated between the true entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}\) and the predicted labels \(\mathcal{E}^{\text{Pred}}\) output from the neural network with the binary cross-entropy. The backward processes are iterated until the loss function converges.
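A PyTorch sketch of this architecture is shown below. The text fixes three ReLU hidden layers, the Adam optimizer, the binary cross-entropy loss, and a final hidden layer of 64 neurons (used later for the t-SNE analysis); the remaining hidden widths are our assumptions, not the values used in our actual implementation.

```python
import torch
import torch.nn as nn

class EntanglementNet(nn.Module):
    """Maps a 24 x 24 x 4 correlation pattern to 3 entanglement labels
    (PPT, first- and second-order QFI)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(24 * 24 * 4, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),   # 64-unit last hidden layer
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = EntanglementNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # binary cross-entropy, as in the text

def train_step(patterns, labels):
    """patterns: (batch, 4, 24, 24) float tensor; labels: (batch, 3) floats in {0, 1}."""
    optimizer.zero_grad()
    loss = loss_fn(model(patterns), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```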
The binary entanglement labels \(\mathcal{E}_{\varrho}^{\text{True}}=\{E_{\text{PPT}},E_{\text{QFI}}^{(1)},E_{\text{QFI}}^{(2)}\}_{\varrho}\) are obtained for the classification task to detect whether the state in set \(\varrho\) is entangled or not via the PPT [16] and QFI [17] criteria. In the PPT criterion, the two-mode state is entangled if the partial transpose over one of its modes has negative eigenvalues, which leads to \(\|\hat{\varrho}^{\text{T}_{\text{B}}}\|_{1}>1\). Here, we label the states that satisfy this inequality as \(E_{\text{PPT}}=1\) and the rest as \(E_{\text{PPT}}=0\). The metrological-entanglement-based QFI criterion is based on estimating a parameter \(\theta\) which is implemented through \(\hat{\rho}_{\theta}=e^{-i\theta\hat{A}}\hat{\rho}e^{i\theta\hat{A}}\), where \(\hat{A}=\sum_{i=1}^{2}\hat{A}_{i}\) is a sum of arbitrary local observables \(\hat{A}_{i}\) for the \(i\)th mode. The intuitive idea behind the QFI criterion is to detect entanglement by showing that \(\hat{\rho}\) allows us to estimate \(\theta\) with a precision that is higher than what could have been achieved with a separable state with the same variances for the generators. More rigorously, the criterion is given by \(E[\hat{\rho},\hat{A}]=F_{Q}(\hat{\rho},\hat{A})-4\sum_{i=1}^{2}\text{Var}[\hat {\rho},\hat{A}_{i}]\), where \(F_{Q}\) is the quantum Fisher information of state \(\hat{\rho}\), and \(\text{Var}[\cdot]\) denotes the variance. The entanglement witness depends on the choice of the observable \(\hat{A}_{i}\), which can be constructed by optimizing over an arbitrary linear combination of operators (namely, \(\hat{A}_{i}=\mathbf{n}\cdot\mathbf{H}=\sum_{k}n_{k}\hat{H}_{k}\)) [17]. At the first order, \(\mathbf{H}\) takes the form \(\mathbf{H}^{(1)}=(\hat{x}_{i},\hat{p}_{i})\) with \(i=1,2\). To capture more correlations of non-Gaussian states, we can extend the set of operators by adding three second-order nonlinear operators: \(\mathbf{H}^{(2)}=(\hat{x}_{i},\hat{p}_{i},\hat{x}_{i}^{2},\hat{p}_{i}^{2},(\hat {x}_{i}\hat{p}_{i}+\hat{p}_{i}\hat{x}_{i})/2)\). If \(E[\hat{\rho},\hat{A}]>0\), the state is identified as QFI-type entangled and labeled as \(E_{\text{QFI}}^{(n)}=1\), otherwise it is labeled as \(E_{\text{QFI}}^{(n)}=0\), where \(n\) refers to the set \(\mathbf{H}^{(n)}\) which is used to compute \(E[\hat{\rho},\hat{A}]\).
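The PPT label is straightforward to compute numerically; a QuTiP sketch is given below. The QFI labels additionally require the spectral decomposition of \(\hat{\rho}\) to evaluate \(F_{Q}\) and an optimization over \(\mathbf{n}\), which we omit here.

```python
import qutip as qt

def ppt_label(rho):
    """E_PPT = 1 if ||rho^{T_B}||_1 > 1, i.e., if the partial transpose
    over the second mode has negative eigenvalues."""
    rho_pt = qt.partial_transpose(rho, [0, 1])   # transpose mode 2 only
    return int(rho_pt.norm('tr') > 1 + 1e-9)     # small tolerance for numerics
```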
After \(3,000\) epochs of training, the loss function has converged and the network has captured the features mapping the correlation patterns to the entanglement labels, without being provided the full density matrices \(\varrho\). This is a crucial element since experiments generally do not have access to the full density matrix of a produced state, and what can be acquired is partial correlation information from measurements.
_Testing process.--_To test the network with experimental-like data, we simulate the homodyne measurement outcomes via a Monte Carlo sampling method. The test data are obtained from previously unseen quantum states, denoted as \(\varrho_{\text{test}}\). For each pattern of the states in \(\varrho_{\text{test}}\), we perform \(N\) repetitions of sampling to simulate the joint measurement events for each mode, forming a \(2\times N\)-dimensional array of outcomes used to recover the joint probability distributions. However, directly feeding the raw sampling results into the neural network is infeasible, as the input layer of our trained network requires \(24\times 24\times 4\)-dimensional data. Thus, we also bin each set of \(2\times N\) sampling points into a \(24\times 24\)-dimensional matrix. Figure 2(a) shows the discretized correlation patterns with different numbers of sampling points \(N\). The plot with \(N=\infty\) is directly obtained from discretizing the theoretical joint probability distributions. As the number of samples \(N\) increases from \(10\) to \(100,000\), the Monte Carlo sampling result converges towards the \(N=\infty\) case.
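A sketch of this Monte Carlo step follows, under the simplifying assumption that samples are drawn from the distribution already discretized on a fine grid.

```python
import numpy as np

def sample_pattern(P, grid, n_samples, bins=24, lim=6.0):
    """Draw N joint homodyne outcomes (X1, X2) from the discretized
    distribution P and bin them into a bins x bins frequency pattern."""
    p = P.ravel() / P.sum()                      # normalize to probabilities
    flat = np.random.choice(p.size, size=n_samples, p=p)
    i, j = np.unravel_index(flat, P.shape)
    x1, x2 = grid[i], grid[j]                    # sampled quadrature pairs
    pattern, _, _ = np.histogram2d(
        x1, x2, bins=bins, range=[[-lim, lim], [-lim, lim]])
    return pattern / n_samples
```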
We compare the performance of the neural network with PPT and QFI entanglement predictions estimated from the MaxLik algorithm by quantifying the ratio of states the different algorithms correctly classify within the set \(\varrho_{\text{test}}\). The MaxLik algorithm is a statistical method that can process the same Monte Carlo sampling outcomes to conduct state tomography and reconstruct the corresponding density matrices \(\varrho_{\text{MaxLik}}\)[18]. After \(20\) iterations of the algorithm, the entanglement labels \(\mathcal{E}^{\text{MaxLik}}\) can be derived using the PPT and QFI criteria. A significant difference between deep learning and MaxLik is that the former excels at extracting features and insights from a large data set, enabling the neural network to extrapolate and make accurate predictions for unseen data, while the latter reconstructs each state separately without relying on prior experience.
In Figs. 2(b) and (c), the orange and blue lines show the accuracy of \(\mathcal{E}^{\text{Pred}}\) predicted by our neural network. The gray lines show the accuracy of \(\mathcal{E}^{\text{MaxLik}}\) based on MaxLik. For PPT-type entanglement \(E_{\text{PPT}}\), the accuracy of the neural network reaches \(0.993\) when the number of Monte Carlo sampling points is \(N=100,000\). On the contrary, even though the average fidelity between \(\varrho_{\text{test}}\) and the reconstructed density matrix \(\varrho_{\text{MaxLik}}\) increases from \(0.734\) to \(0.945\), the accuracy of MaxLik remains unchanged when the number of samples increases. For QFI-type entanglement \(E_{\text{QFI}}^{(n)}\), the accuracy of the neural network reaches \(0.913\) and \(0.923\) for the first and second order when \(N=100,000\), respectively. With the same amount of joint homodyne measurement data, the
Figure 2: (a) Binned correlation patterns of a state \(\hat{\rho}\in\varrho_{\text{test}}\) when the number of Monte Carlo sampling points \(N=10,100,1000,10,000\) and \(100,000\). The plot with \(N=\infty\) shows the correlation patterns directly discretized from the theoretical joint probability distributions. (b) Accuracy of PPT-type entanglement prediction from the neural network (orange line) and MaxLik algorithm (gray line) against the same value of \(N\) in (a). (c) Accuracy of QFI-type entanglement prediction from the neural network (blue lines) and MaxLik (gray lines) algorithm against \(N\). Solid and dashed lines represent the first and second-order QFI, respectively.
neural network shows better performance than the standard state reconstruction process through the MaxLik algorithm.
Furthermore, we test our network on lossy single-photon subtracted states [42]. This class of states is included in our training data set since it can be decomposed as
\[\ket{\psi}_{\text{sub}} \propto(\cos\gamma\,\hat{a}_{1}+\sin\gamma\,\hat{a}_{2})\hat{S}_{1}(\xi_{1})\hat{S}_{2}(\xi_{2})\ket{00} \tag{3}\] \[=\hat{S}_{1}(\xi_{1})\hat{S}_{2}(\xi_{2})(\cos\gamma\sinh r_{1}\mathrm{e}^{i\omega_{1}}\ket{10}+\sin\gamma\sinh r_{2}\mathrm{e}^{i\omega_{2}}\ket{01}).\]
where \(\hat{S}_{i}(\xi_{i})\) is the single-mode squeezing operator on mode \(i\) with squeezing parameter \(\xi_{i}=r_{i}\text{e}^{i\omega_{i}}\), and \(\gamma\) is the angle controlling the probability of the mode in which the single photon is subtracted. Hence we can also test how our network performs for this kind of state. The accuracy in predicting PPT-type entanglement reaches \(0.985\) when the number of sampling points is \(N=100,000\). We compare the robustness of our network with another existing experiment-friendly criterion based on Fisher information reported in Ref. [43] for a state \(\ket{\psi}_{\text{sub}}\) with squeezing parameters \(r_{1}=2\text{dB}\) and \(r_{2}=-3\text{dB}\), and \(\omega_{1}=\omega_{2}=0\). While the other criterion cannot detect the entanglement when the loss coefficient \(\eta\) is larger than \(0.06\), our neural network shows much stronger robustness, with a loss threshold of \(\eta=0.33\) for the first-order QFI, and beyond \(\eta=0.5\) for PPT and second-order QFI (see Fig. 6 of the Appendix).
_Clustering process._--To visualise how our neural network processes input correlation patterns, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) [44] as a dimension reduction technique to compare our data before and after the processing by the neural network. Different from the supervised classifier neural networks, the t-SNE algorithm is an unsupervised learning approach dealing with unlabeled data. This visualization tool has been widely used in many biology and marketing problems, like cell population identification [45; 46] and customer segmentation [47]. It can be seen that clusters of entangled states clearly emerge after neural network processing.
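The visualization itself reduces to a standard scikit-learn call. In the sketch below, the perplexity and seed are arbitrary choices, and the last-hidden-layer activations can be obtained by truncating the trained network before its output layer.

```python
from sklearn.manifold import TSNE

def embed_2d(features, perplexity=30, seed=0):
    """Project (n_states, d)-dimensional features to 2D with t-SNE."""
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed).fit_transform(features)

# Raw patterns: features = patterns.reshape(len(patterns), -1)   # (n, 2304)
# After the network (reusing the Sequential sketched earlier):
#   hidden = model.net[:-2](patterns_tensor).detach().numpy()    # (n, 64)
```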
Figure 3(a) shows two \(24\times 24\times 4\)-dimensional correlation patterns of a state with non-entangled labels \(\{0,0,0\}\) and a state with entangled labels \(\{1,1,1\}\), marked with red and yellow triangles, respectively. These high-dimensional data can be mapped into a two-dimensional plane through the t-SNE algorithm and the results are shown in Figs. 3(b)(c)(d). The left plot of Fig. 3(b) exhibits clusters formed from the discretized correlation patterns of the whole training data set (\(n=15,000\)). Each point represents a quantum state, colored by its PPT-type entanglement label \(E_{\text{PPT}}=0\) (dark green) or \(E_{\text{PPT}}=1\) (light green). The right plot of Fig. 3(b) reveals clusters formed from output vectors of the last hidden layer of the neural network with 64 neurons. After the neural network processing, the two overlapped dark green and light green clouds have largely detached from each other, forming disjoint clusters. We can clearly see that the two triangles significantly separate and now belong to their respective cluster.
Similarly, clusters in Figs. 3(c) and (d) are of the same shape as in (b), but colored with the first-order and second-order QFI-type entanglement labels \(E_{\text{QFI}}^{(1)}\) and \(E_{\text{QFI}}^{(2)}\). Again, the two clouds are better separated after the neural network processing [right plots of (c) and (d)]. Comparing (c) with (d), we can see that the light green cluster in (d) covers more area than in (c), which intuitively shows that, as expected, the second-order QFI-type criterion finds more entangled states than the first-order one. These results clearly show how different clusters of states have different metrological capabilities.
Even though there is no explicit boundary between the two classes (entangled or not) in the left cluster of Fig. 3(b), where the neural network has not been applied yet, the two classes of states already tend to cluster together. This implies that the correlation patterns inherently contain entanglement-related regularities for the studied states, which makes it more feasible for the deep learning algorithm to learn from the training data. Therefore, this visualization method provides us with a technique to select appropriate input data when detecting a specific quantum feature through the deep learning algorithm.
In conclusion, we develop a neural network to detect
Figure 3: Two-dimensional clusters of two-mode CV states. (a) Examples of \(24\times 24\times 4\)-dimensional correlation patterns for two input states. (b) Left: The 2-dimensional clustering of states before being fed into the network, where t-SNE preserves the pairwise similarities between data points. Right: The same dimension reduction process is conducted on the 64-dimensional array from the last hidden layer of the neural network. Points representing PPT-type entangled states (\(E_{\text{PPT}}=1\)) are colored in light green, while others are colored in deep green. (c) and (d) use the same method as (b) but are colored according to the first-order and second-order QFI-type entanglement labels \(E_{\text{QFI}}^{(1)}\) and \(E_{\text{QFI}}^{(2)}\), respectively.
CV entanglement for general two-mode states with finite stellar rank, only using correlation patterns obtained through homodyne detection. We test the performance of the network on patterns generated by Monte Carlo sampling with different numbers of sampling points. With the same limited patterns, our method provides higher accuracy compared to the entanglement information that can be extracted through a full state tomography. Meanwhile, the neural network shows strong robustness to losses, which we illustrate for the specific case of single-photon-subtracted states. Finally, we use the t-SNE algorithm to visualize the clusters formed from abundant correlation patterns before and after they are fed into the network. This helps us validate the suitability of the input for detecting target quantum features. This can be further used to identify and detect more refined kinds of quantum correlations, such as the lack of passive separability, i.e., the fact that the entanglement cannot be undone with passive optical transformations (beam splitters, phase shifters), a strong feature of non-Gaussian states necessary to reach a quantum advantage [48].
###### Acknowledgements.
We acknowledge enlightening discussions with Nicolas Treps. This work is supported by the National Natural Science Foundation of China (Grants No. 11975026, No. 12125402, No. 12004011, and No. 12147148), Beijing Natural Science Foundation (Grant No. Z190005), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301500). M.I., C.E.L., and M.W. received funding through the ANR JCJC project NoRdiC (ANR-21-CE47-0005). V.P. and M.W. acknowledge financial support from the European Research Council under the Consolidator Grant COQCOoN (Grant No. 820079).
|
2310.20216 | Does GPT-4 pass the Turing test? | We evaluated GPT-4 in a public online Turing test. The best-performing GPT-4
prompt passed in 49.7% of games, outperforming ELIZA (22%) and GPT-3.5 (20%),
but falling short of the baseline set by human participants (66%).
Participants' decisions were based mainly on linguistic style (35%) and
socioemotional traits (27%), supporting the idea that intelligence, narrowly
conceived, is not sufficient to pass the Turing test. Participant knowledge
about LLMs and number of games played positively correlated with accuracy in
detecting AI, suggesting learning and practice as possible strategies to
mitigate deception. Despite known limitations as a test of intelligence, we
argue that the Turing test continues to be relevant as an assessment of
naturalistic communication and deception. AI models with the ability to
masquerade as humans could have widespread societal consequences, and we
analyse the effectiveness of different strategies and criteria for judging
humanlikeness. | Cameron R. Jones, Benjamin K. Bergen | 2023-10-31T06:27:52Z | http://arxiv.org/abs/2310.20216v2 | # Does GPT-4 Pass the Turing Test?
###### Abstract
We evaluated GPT-4 in a public online Turing Test. The best-performing GPT-4 prompt passed in 41% of games, outperforming baselines set by ELIZA (27%) and GPT-3.5 (14%), but falling short of chance and the baseline set by human participants (63%). Participants' decisions were based mainly on linguistic style (35%) and socio-emotional traits (27%), supporting the idea that intelligence is not sufficient to pass the Turing Test. Participants' demographics, including education and familiarity with LLMs, did not predict detection rate, suggesting that even those who understand systems deeply and interact with them frequently may be susceptible to deception. Despite known limitations as a test of intelligence, we argue that the Turing Test continues to be relevant as an assessment of naturalistic communication and deception. AI models with the ability to masquerade as humans could have widespread societal consequences, and we analyse the effectiveness of different strategies and criteria for judging humanlikeness.
**Keywords:** Turing Test, Large Language Models, GPT-4, interactive evaluation
## 1 Introduction
Turing (1950) devised the _Imitation Game_ as an indirect way of asking the question: "Can machines think?". In the original formulation of the game, two witnesses--one human and one artificial--attempt to convince an interrogator that they are human via a text-only interface. Turing thought that the open-ended nature of the game--in which interrogators could ask about anything from romantic love to mathematics--constituted a broad and ambitious test of intelligence. The Turing Test, as it has come to be known, has since inspired a lively debate about what (if anything) it can be said to measure, and what kind of systems might be capable of passing (French, 2000).
Large Language Models (LLMs) such as GPT-4 (OpenAI, 2023) seem well designed for Turing's game. They produce fluent naturalistic text and are near parity with humans on a variety of language-based tasks (Chang and Bergen, 2023; Wang et al., 2019). Indeed, there has been widespread public speculation that GPT-4 would pass a Turing Test (Biever, 2023) or has implicitly done so already (James, 2023). Here we address this question empirically by comparing GPT-4 to humans and other language agents in an online public Turing Test.
Figure 1: Chat interface for the Turing Test experiment featuring an example conversation between a human interrogator (in green) and GPT-4.
Since its inception, the Turing Test has garnered a litany of criticisms, especially in its guise as a yardstick for intelligence. Some argue that it is too easy: human judges, prone to anthropomorphizing, might be fooled by a superficial system (Marcus et al., 2016; Gunderson, 1964). Others claim that it is too hard: the machine must deceive while humans need only be honest (Saygin et al., 2000). Moreover, other forms of intelligence surely exist that are very different from our own (French, 2000). Still others argue that the test is a distraction from the proper goal of artificial intelligence research, and that we ought to use well-defined benchmarks to measure specific capabilities instead (Srivastava et al., 2022); planes are tested by how well they fly, not by comparing them to birds (Hayes and Ford, 1995; Russell, 2010). Finally, some have argued that _no_ behavioral test is sufficient to evaluate intelligence: that intelligence requires the right sort of internal mechanisms or relations with the world (Searle, 1980; Block, 1981).
It seems unlikely that the Turing Test could provide either logically sufficient _or_ necessary evidence for intelligence. At best it offers probabilistic support for or against one kind of humanlike intelligence (Oppy and Dowe, 2021). At the same time, there may be value in this kind of evidence since it complements the kinds of inferences that can be drawn from more traditional NLP evaluations (Neufeld and Finnestad, 2020). Static benchmarks are necessarily limited in scope and cannot hope to capture the wide range of intelligent behaviors that humans display in natural language (Raji et al., 2021; Mitchell and Krakauer, 2023). Interactive evaluations like the Turing Test have the potential to overcome these limitations due to their open-endedness (any topic can be discussed) and adversarial nature (the interrogator can adapt to superficial solutions).
Regardless of its sensitivity to intelligence, there are reasons to be interested in the Turing Test that are orthogonal to this debate. First, the specific ability that the test measures--whether a system can deceive an interlocutor into thinking that it is human--is important to evaluate _per se_. There are potentially widespread societal implications of creating "counterfeit humans", including automation of client-facing roles (Frey and Osborne, 2017), cheap and effective misinformation (Zellers et al., 2019), deception by misaligned AI models (Ngo et al., 2023), and loss of trust in interaction with genuine humans (Dennett, 2023). The Turing Test provides a robust way to track this capability in models as it changes over time. Moreover, it allows us to understand what sorts of factors contribute to deception, including model size and performance, prompting techniques, auxiliary infrastructure such as access to real-time information, and the experience and skill of the interrogator.
Second, the Turing Test provides a framework for investigating popular conceptual understanding of human-likeness. The test not only evaluates machines; it also incidentally probes cultural, ethical, and psychological assumptions of its human participants (Hayes and Ford, 1995; Turkle, 2011). As interrogators devise and refine questions, they implicitly reveal their beliefs about the qualities that are constitutive of being human, and which of those qualities would be hardest to ape (Dreyfus, 1992). We conduct a qualitative analysis of participant strategies and justifications in order to provide an empirical description of these beliefs.
### Related Work
Since 1950, there have been many attempts to implement Turing Tests and produce systems that could interact like humans. Early systems such as ELIZA (Weizenbaum, 1966) and PARRY (Colby et al., 1972) used pattern matching and templated responses to mimic particular personas (such as a psychotherapist or a patient with schizophrenia). The Loebner Prize (Shieber, 1994)--an annual competition in which entrant systems attempted to fool a panel of human expert judges--attracted a diverse array of contestants ranging from simple chatbots to more complex AI systems. Although smaller prizes were awarded each year, the grand prize (earmarked for a system which could be said to have passed the test robustly) was never awarded and the competition was discontinued in 2020.
Most relevant to our current work, Jannai et al. (2023) conducted a large-scale public Turing Test on an online platform: humanornot.com. Their approach is similar to ours in that participants briefly conversed with an LLM or another human and had to decide which it was. They found that humans were 68% accurate overall: 73% when their partner was human, 60% when their partner was a bot. While these results suggest that current LLMs pass the test around 40% of the time, several features of their design potentially limit the generalizability of this finding. First, conversations were limited to
2 minutes, and individual turns to 20s, precluding complex questions and responses or deeper interrogation. Second, there was no clear delineation of roles between interrogator and witness, meaning that human judges were also motivated to spend time defending their own humanity. Finally, the task did not include a baseline against which to measure model performance, making it hard to contextualise results and raising the possibility that participants were merely inattentive or negligent. We address these limitations in the present work.
## 2 Methods
We adopt a two-player implementation of the Turing Test, where an interrogator asks questions of a single witness and must decide if they are human or an AI. This differs from Turing's original three-person formulation, but obviates the need to have multiple human participants online for each game, and has become a relatively standard interpretation of the game (Saygin et al., 2000).
### AI Witnesses
We created 25 LLM Witnesses that responded to user queries using the OpenAI API. Each LLM witness consisted of a model (GPT-3.5 or GPT-4), a temperature setting (0.2, 0.5, or 1.0) and a prompt. The prompt described the context of the interaction and instructions on how the agent ought to respond (see Figure 2). We varied several aspects of the prompts including whether the model should make spelling, grammar, and capitalization errors; the suggested personality the model should adopt; and whether the agent was told that it was a human or an AI pretending to be human. In addition, all prompts included generic information about the instructions of the game, and important events that happened after their training cutoff. A suggested name for the model's persona was generated from a list at the start of each game. The prompt also included timestamps for the game's creation and the most recent message. Language agent messages were sent with a delay of \(0.3s/char+\Gamma(1.5,2.5)\) to mimic human typing. The majority of agents used GPT-4 (OpenAI, 2023) as their LLM, as we expected this to be the best performing model. As baselines, we used one of the prompts published in (Jannai et al., 2023) with GPT-4, two of our own prompts with GPT-3.5 (Brown et al., 2020), and an implementation of ELIZA using the DOCTOR script (Brainerd, 2023; Weizenbaum, 1966).
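The message-delay model is simple enough to reproduce. Below is a minimal sketch of how these delays could be sampled, assuming \(\Gamma(1.5,2.5)\) is parameterised as shape-scale (the paper does not state which convention is used); the function names are ours, and the second function anticipates the typing-animation delay described in the Interface subsection below.

```python
import numpy as np

rng = np.random.default_rng()

def send_delay(message: str) -> float:
    # 0.3 s per character plus Gamma-distributed noise; shape-scale
    # parameterisation of Gamma(1.5, 2.5) is our assumption.
    return 0.3 * len(message) + rng.gamma(shape=1.5, scale=2.5)

def typing_indicator_delay() -> float:
    # Onset of the 'typing' animation: 2 + U(0, 3) seconds, shown for
    # human and AI witnesses alike (see the Interface subsection).
    return 2.0 + rng.uniform(0.0, 3.0)

print(send_delay("hey, not much - just bored at work. you?"))
print(typing_indicator_delay())
```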
### Interface
The chat interface was designed to look like a conventional messaging app (see Figure 1). The Interrogator sent the first message and each user could only send one message at a time. There was a 5 minute time limit on the entire conversation and a 300 character limit on individual messages. Users were prevented from pasting into the text input and a content filter was used to prevent abusive messages. The game countdown timer appears in the top right of the screen alongside buttons to open overlays containing instructions and a form to report abusive users. To improve user engagement, a 'typing' animation appeared after \(2+\mathcal{U}(0,3)\,\mathrm{s}\) regardless of whether the witness was a human or AI. At the end of the game, the interrogator used a decision interface to provide i) a decision (whether the witness is a human or an AI); ii) their confidence in the decision (0-100); and iii) a reason for their decision (free-text, optional). The interrogator could trigger this decision at any point during the game using the gavel button in the bottom left corner. After their first game, participants completed an optional demographic survey asking their year of birth, level of education, frequency of interaction with AI chatbots and familiarity with LLMs.

Figure 2: An example prompt, _Juliet_, used to provide situational context to the LLM and instructions on how to respond to user prompts.
### Procedure
The research was approved by UC San Diego's IRB. The experiment was hosted online at turingtest.live. Participants were recruited online via social media. They read instructions that explained the rules of the game, and consented to their data being used for scientific research (see Appendix A).
Participants then entered a waiting room where they were randomly assigned to play with either another human or an AI. The wait time for AI games was set to the mean wait time for the last 5 human matches to prevent wait time being an indication of match type. If no human match was found after a timeout of \(45+\mathcal{N}(1,7)\) s, players would be matched with an AI. The two participants in human matches were randomly assigned to the interrogator and witness roles. Witnesses were instructed to convince the interrogator that they were human. Players matched with AI models were always interrogators. AI models were selected using an explore/exploit strategy. For each model we draw a sample from a distribution
\[x_{i}\sim\text{Beta}(\alpha_{i},\beta_{i}),\quad\text{for }i=1,\dots,N,\]
where \(N\) is the number of models, and \(\alpha_{i}\) and \(\beta_{i}\) are the numbers of wins and losses in model \(i\)'s game history respectively, weighted by the interrogator's confidence in each verdict. The model that yielded the highest sample value was selected.
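This selection rule is a confidence-weighted variant of Thompson sampling. A minimal sketch follows; the pseudo-count of 1 added to each parameter, so that models with no game history still yield a valid Beta distribution, is our assumption rather than a detail stated in the paper.

```python
import numpy as np

rng = np.random.default_rng()

def select_witness(wins, losses):
    # Draw x_i ~ Beta(alpha_i, beta_i) per model, where alpha_i / beta_i
    # are its confidence-weighted win / loss tallies, then pick the argmax.
    # The +1 pseudo-count (our assumption) keeps Beta well-defined for
    # models that have not yet been played.
    a = np.asarray(wins, dtype=float) + 1.0
    b = np.asarray(losses, dtype=float) + 1.0
    return int(np.argmax(rng.beta(a, b)))

# Hypothetical confidence-weighted histories for three models:
wins = [12.4, 3.1, 0.0]
losses = [18.0, 25.2, 0.0]
print(select_witness(wins, losses))
```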
## 3 Results
652 participants completed 1810 games in total. We removed 389 games where the interrogator had played against an AI in their previous 3 games, which could happen when there was only a single human player online. User feedback suggested this led to an _a priori_ belief that the witness was not human. We removed a further 11 games where the interrogator indicated that they personally knew a human witness. We analyse the remaining 1405 games (260 with Human witnesses). 51% of participants played only one game and 91% played fewer than 4 games. Games had a mean length of \(187s\) (\(\sigma=95.3\)), 8 messages (\(\sigma=5.16\)), and 320 words (\(\sigma=210\)).
### Success Rate
We define success rate (SR) as the proportion of games in which the interrogator decided that the witness was human (i.e. the witness was successful). Human witnesses performed best, with an SR of 63% (see Figure 4). The best performing GPT-4 prompt (Sierra) achieved an SR of 41%, with several other GPT-4 prompts performing similarly (November, 40%; Papa, 36%). However, there was large variance in performance by prompt. AI21, a baseline prompt from Jannai et al. (2023), achieved 19%, and the worst GPT-4 prompt (Foxtrot) achieved only 6% SR.

Figure 3: Four example extracts from game conversations. The interrogator’s messages are on the right, in green. Footers contain the verdict given by the interrogator, their self-reported confidence and justification, and the true identity of the witness.
GPT-3.5 performance ranged from 5% (Victor) to 14% (Juliet). GPT-4 systematically outperformed GPT-3.5 on each of these prompts (Victor: 19%, Juliet: 26%). Finally, ELIZA--a rules-based baseline (Weizenbaum, 1966)--achieved 27% SR, outperforming all of the GPT-3.5 witnesses and several GPT-4 prompts.
Interrogator confidence was fairly well calibrated in AI games, but confidence was not predictive of accuracy for Human games (see Figure 5). Because the sampling algorithm preferred better performing models, the lowest SRs are based on the fewest games and therefore may be less reliable. Sierra and November SRs are based on 79 and 141 games respectively, while Victor (GPT-3.5) was sampled only 21 times (see Table 1).
We hypothesised that models would enjoy an advantage on the interrogator's first game, before the interrogator has had an opportunity to familiarize themselves. The best-performing witnesses did similarly on first games to their overall SR (see Figure 12). Human witnesses were successful in 64% of games, and the Sierra and November GPT-4 prompts achieved 40% and 41% respectively. Lower-performing GPT-4 witnesses saw a boost in SR for first games, including Foxtrot (6% to 11%) and AI21 (19% to 29%). Notably, Quebec achieved a much higher SR of 45%, and ELIZA's SR fell to 13%. However, the sample of first games was roughly half the size of overall games, making these estimates less robust (see Table 1).
### Interrogator Demographics
We tested whether interrogators' discriminative abilities were predicted by demographic factors known to influence human-computer interaction, including age (Xie, 2003), gender, and level of education (Cooper, 2006). We also tested whether frequency of interaction with chatbots, and level of knowledge about LLMs would give interrogators an edge via familiarity with LLMs' mechanisms and limitations. However, we observed no significant effects of any of these factors on accuracy: the proportion of the time the interrogator correctly guessed whether their interlocutor was a human or an AI (see Figure 6). Binomial linear regressions predicting accuracy on the basis of these factors did not show significant effects (all \(p>0.15\)). Re-analysing the data using a more sensitive signal detection measure (d') produced functionally similar results (see Appendix E).
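For concreteness, the following sketch shows how analyses of this kind could be run, covering both the binomial regression and the d' sensitivity measure mentioned above. The data file, column names, and the clipping rule used to keep the d' z-transform finite are all our assumptions rather than details taken from the paper's materials.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import norm

# One row per game; 'correct', 'witness', 'verdict_ai' and the survey
# columns are hypothetical names, not taken from the paper's materials.
df = pd.read_csv("games.csv")

# Binomial (logistic) regression of accuracy on demographic predictors.
fit = smf.glm(
    "correct ~ age + C(education) + C(chatbot_freq) + C(llm_knowledge)",
    data=df,
    family=sm.families.Binomial(),
).fit()
print(fit.summary())

def d_prime(hit_rate, fa_rate, eps=1e-3):
    # Sensitivity d' = z(hit rate) - z(false-alarm rate); rates are clipped
    # away from 0 and 1 so the inverse-normal transform stays finite.
    h = np.clip(hit_rate, eps, 1 - eps)
    f = np.clip(fa_rate, eps, 1 - eps)
    return norm.ppf(h) - norm.ppf(f)

# Treat an "AI" verdict as a hit when the witness was an AI, and as a
# false alarm when the witness was human.
hit = df.loc[df.witness == "ai", "verdict_ai"].mean()
fa = df.loc[df.witness == "human", "verdict_ai"].mean()
print(d_prime(hit, fa))
```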
Secondly, we hypothesized that experience with the task would improve participant performance. However, we found no evidence for a learning effect within the experiment (see Figure 6, bottom right). We fit a binomial mixed effects model predicting accuracy as a function of the number of games an interrogator had played, with random slopes by interrogator. The analysis showed no effect of number of games (\(z=0.186,p=0.853\)) for either human witnesses (\(z=0.544,p=0.587\)) or AI witnesses (\(z=0.430,p=0.667\)).

Figure 4: Overall Turing Test Success Rate (SR) for a subset of witnesses. Human witnesses perform best, with 63% SR. GPT-4 performance varies greatly by prompt from 41% (Sierra, best) to 6% (Foxtrot, worst). ELIZA achieves 27%, outperforming the best GPT-3.5 prompt (Juliet, 14%), GPT-4 performance with that prompt (26%), and a baseline prompt from Jannai et al. (2023), AI21 (19%).

Figure 5: Interrogator confidence vs accuracy and witness type. Against AI witnesses, interrogators were well calibrated—that is, their confidence was positively correlated with accuracy. However, there was no relationship between confidence and accuracy for guesses about human witnesses.
### Strategies
Using the categorization scheme in Appendix G, we classified the strategies that interrogators deployed in a sample of 300 games. Figure 7 shows the top 20 strategies across all games. The most common kinds of strategies were Small Talk (27%), asking the witness what they were doing that day and other personal details about their life. Other common strategies involved Knowledge & Reasoning (23%; including asking about general or specialised knowledge and current events), and Situational Awareness (17%; e.g. testing the witness's awareness of the game, the time, and their surroundings). Many participants also asked for witnesses' opinions on subjective questions, directly asked if the witness was a bot, or said something unusual to see how the witness would react.
In order to analyse the effectiveness of different strategies, we found the mean accuracy of games where the strategy was deployed (Figure 8). The strategies that were most likely to be used in successful games were speaking in a language other than English, asking about the time or current events, and directly asking or accusing the witness of being an AI. It is likely that the accusation strategy in particular is not causally related to the interrogator's success, but tends to precede successful identifications. The strategies least likely to be used in accurate games were asking about the witness's mood and opinions, or their awareness of the game, the weather, and their surroundings.
Finally, we asked whether interrogator strategies changed over time. As shown in Figure 9, interrogators tended to use socio-emotional strategies (like asking the witness to say something taboo) and situational awareness questions (about weather and time) in their earliest games. More experienced interrogators tended to ask more questions about the witness's knowledge and opinions.
### Reasons
Where participants provided reasons for their decision we manually classified these using the categorization scheme in Appendix F. Reasons were divided into 6 high-level categories: Linguistic Style, Social & Emotional, Knowledge & Content, Situational Awareness, Meta (aspects of the game outside the model's control), and Uninformative (vague or irrelevant reasons). The top 20 reasons for interrogators deciding that an AI was in fact an AI are shown in Figure 10. The most frequent reasons were linguistic (such as the model being too informal, overusing affectations that made the persona sound forced, or using a repetitive sentence structure) and socio-emotional (such as the model lacking a personality--being boring and generic--or producing 'unnatural responses' that a human would not produce in that situation). There were relatively few reasons provided on AI verdicts for human witnesses (31), so it was difficult to compare the relative frequency of reasons between witness types. Frequencies for all reasons by verdict and witness type are shown in Appendix H.

Figure 6: Interrogator accuracy in deciding whether the witness was human or an AI was not predicted by a variety of different demographic information, including their knowledge about LLMs, frequency of interaction with chatbots, and the number of games they had played on the platform.
Reasons for a human judgement followed a similar pattern. For both human and AI witnesses, interrogators often cited linguistic factors such as spelling and grammar (SPAG) errors and a 'human-like' tone, as well as socioemotional factors like humor, plausibility, and being uncooperative. Interrogators frequently mentioned the informal tone of AI witnesses (e.g. slang, abbreviations) as a reason for a human judgement, but rarely did so for real human witnesses. Conversely, interrogators often mentioned a plausible backstory for human but not AI witnesses. Interrogators thought that slow responses were indicative of a human witness, but did so with roughly equal frequency for human and AI witnesses, suggesting that the delay function was reasonably well calibrated.
## 4 Discussion
### Does GPT-4 pass the Turing Test?
_I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about \(10^{9}\), to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning._
-- (Turing, 1950, p.442)
The results presented here suggest that certain GPT-4 witnesses met Turing's criterion of being misidentified 30% of the time (even if they are 2 decades later and several orders of magnitude larger than Turing anticipated). However, it is not clear that 30% is an appropriate benchmark for success at the imitation game. A more conventional benchmark would be 50%. This could suggest that interrogators are, on average, incapable of distinguishing the model from a human. But this chance baseline suffers from the drawback that it could be achieved by random guessing, for instance if a witness provided no information whatsoever.
A more stringent test, at least insofar as humans outperform the chance baseline, would require an AI witness to be deemed human as frequently as human witnesses are. This would suggest that the AI witness is as good at the imitation game as a human. This comparison, however, may be unfair on AI witnesses, who must deceive the interrogator while humans need only be honest. Turing's original description of the game overcomes this problem by having a man and a machine both pretending to be women (Saygin et al., 2000). While this creates a balanced design, where both witnesses must deceive, it also conceals from the interrogator that some witnesses may not be human. If the interrogator thinks they are making a gender judgement, they will ask entirely different questions, which might lead to a weaker and less adversarial test.

| Witness Type | Witness | Overall SR (n) | First Games SR (n) |
| --- | --- | --- | --- |
| ELIZA | ELIZA | 0.27 (60) | 0.16 (32) |
| GPT-3.5 | Victor | 0.05 (21) | 0.00 (7) |
| GPT-3.5 | Juliet | 0.14 (21) | 0.17 (12) |
| GPT-4 | Foxtrot | 0.06 (17) | 0.11 (9) |
| GPT-4 | Victor | 0.19 (32) | 0.27 (15) |
| GPT-4 | AI21 | 0.19 (42) | 0.29 (21) |
| GPT-4 | Juliet | 0.25 (63) | 0.33 (30) |
| GPT-4 | Quebec | 0.31 (59) | **0.45** (20) |
| GPT-4 | Papa | 0.36 (87) | 0.33 (33) |
| GPT-4 | November | 0.40 (140) | 0.41 (69) |
| GPT-4 | Sierra | **0.41** (79) | 0.40 (35) |
| Human | Human | 0.63 (260) | 0.64 (108) |

Table 1: Success Rate (SR) and No. Games (n) for select witnesses, overall and in interrogators’ first games. GPT-4 Sierra performs best overall with an SR of 0.41 in 79 games. Quebec reaches 0.45 in first games, but with a small sample. Lower-performing GPT-4 models tend to perform better in first games. Of 25 models, only a subset are shown here.

Figure 7: Top 20 types of strategy that interrogators used to interrogate witnesses.
It is worth noting that in Turing's original 3-player formulation of the game, the distinction between the chance and human baselines is elided as each game is a zero-sum competition between a human and a machine. The 2-person format was adopted here for simplicity. The 3-player format might be more demanding in that it allows the interrogator to directly compare responses, and should be explored in future work.
A further problem for adjudicating success at the Turing Test is that it seems to require confirming the null hypothesis (i.e. providing evidence that there is no difference between AI performance and a chosen baseline; Hayes and Ford, 1995). This is a well-established problem in experimental design: any claim to have not found anything can be met with the rejoinder that one did not look hard enough, looked in the wrong way, or looked in the wrong place. One solution is to include additional baselines (such as ELIZA and GPT-3.5 used here). Showing that there is a significant difference between human witnesses' performance and ELIZA's, but not between humans and a target system, can serve as a "manipulation check": demonstrating that the design is sufficiently powerful in principle to detect differences. A more conservative solution is to require that the AI system perform significantly _above_ the chance or human baselines. In theory, this bar is unnecessarily high. In practice, it might be the simplest way to demonstrate Turing Test acumen with frequentist statistics.
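As an illustration of that conservative frequentist criterion, an exact binomial test against the 50% baseline could look as follows. The success count for Sierra is inferred approximately from Table 1 (0.41 SR over 79 games), and this is a sketch rather than the paper's actual analysis.

```python
from scipy.stats import binomtest

# Sierra: SR 0.41 over 79 games (Table 1), i.e. roughly 32 "human" verdicts.
k, n = 32, 79

# Conservative criterion: is the SR significantly ABOVE the 50% baseline?
print(binomtest(k, n, p=0.5, alternative="greater").pvalue)

# Two-sided test: is the SR distinguishable from chance at all?
print(binomtest(k, n, p=0.5, alternative="two-sided").pvalue)
```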
None of the AI witnesses tested here met either the 50% success or human parity criteria. Therefore, given the prompts used here, we do not find evidence that GPT-4 passes the Turing Test. Even if one of the prompts had surpassed these criteria, the design and analyses used here limit the strength of the inferences that could be drawn. Compelling support for the claim that a system can pass the Turing Test would require pre-registration of the systems and criteria to be used, random sampling of participants, and control for multiple comparisons.
### Could GPT-4 pass the Turing Test?
We found substantial variation in performance depending on the prompt that was used (see Figure 4). Given our relatively limited exploration of possible prompts, it seems _a priori_ likely that a prompt exists which would outperform the ones tested here, and perhaps also surpass the 50% and human baseline criteria. Moreover, several of the reasons that interrogators gave for deciding that a witness was an AI would be relatively easy to address, such as being too informal, using a repetitive sentence structure, and avoiding taboo topics. Equally, however, prompt engineering is challenging (Reynolds and McDonell, 2021). In our experience designing these prompts, making changes that solve one problem would often introduce new indicators that the system is not human. Several of the best-performing prompts achieved around 40% SR. This figure is similar to the one reported in Jannai et al. (2023), which could suggest that it is representative of current LLM capabilities.

Figure 8: Mean accuracy of games by the strategies that the interrogator employed: the strategies most and least likely to be deployed in games with accurate verdicts.

Figure 9: Mean interrogator game index (the number of games an interrogator has played) of the strategies used by the most and least experienced interrogators.
### Interrogator demographics
It is striking that none of the demographic data collected here--including frequency of interaction with chatbots like ChatGPT, level of knowledge about LLMs, and number of games played--had any significant impact on interrogator accuracy. This suggests that experience interacting with LLMs might not be helpful for distinguishing them from humans. These null effects could in part be explained by the fact that the prompts were designed to address common cues from services like ChatGPT, such as being verbose, formal, polite, or having perfect spelling and grammar. Participants who were very familiar with these aspects of typical LLM chatbots may have been most surprised by their absence in our witnesses, eliminating any other advantages that come from their experience. Alternatively, the null results could be due to selection bias. Our sample, recruited via social media posts from the authors, likely over-represented men in their 20s-30s with postgraduate education and an interest in LLMs (see Appendix D, Figure 15). This relatively homogeneous sample might lack sufficient variability to show significant effects.
### The ELIZA effect
Another striking feature of the results is the relative success of ELIZA, a rules-based chatbot from Weizenbaum (1966). ELIZA uses a combination of pattern matching and substitution to generate template responses interleaved with extracts from the user's input. Despite its simplicity, Weizenbaum reported that many users attributed human feelings to the program. The _ELIZA effect_, as this phenomenon has come to be known, is a common criticism of the claim that success at the Turing Test is indicative of intelligence (Marcus et al., 2016; Mitchell and Krakauer, 2023). However, to our knowledge, there have been no systematic attempts to evaluate ELIZA's performance at the test.
We found that ELIZA successfully fooled human interrogators in 27% of games, outperforming several GPT-4 witnesses and all of the GPT-3.5 witnesses. Qualitative analysis of dialogues and reasons suggests several potential explanations for why ELIZA may have performed so well (see Appendix C for example conversations and verdict reasons for ELIZA). First, ELIZA's responses tend to be conservative. While this generally leads to the impression of an uncooperative interlocutor, it prevents the system from providing explicit cues such as incorrect information or obscure knowledge. Second, ELIZA does not exhibit the kind of cues that interrogators have come to associate with assistant LLMs, such as being helpful, friendly, and verbose. Finally, some interrogators reported thinking that ELIZA was "too bad" to be a current AI model, and therefore was more likely to be a human intentionally being uncooperative.

Figure 10: Top 20 types of reason that interrogators gave for deciding that an AI witness was in fact an AI.

Figure 11: Top 20 types of reason that interrogators gave for deciding that the witness was a Human, broken down by ground truth witness type.
The results serve as support for the claim that the Turing Test is not a sensitive test of intelligence, and that the ELIZA effect continues to be powerful even among participants who are familiar with the capabilities of current AI systems. They are also an indication of the higher-order reasoning which goes into the interrogator's decision, and that preconceived notions about AI capabilities and human idiosyncrasies can skew judgments.
### Strategies
Interrogators deployed a wide variety of strategies to identify AI models, underlining the potential flexibility of the Turing Test as an interactive evaluation. A large number of interrogators merely engaged in small talk with the model, asking about emotional state, daily activities, and personal details. In principle, AI models might be very good at imitating humanlike conversation in these typical interactions, and so this seems to be a relatively unreflective strategy. Indeed, none of these strategies were particularly effective (see Figure 8).
The most effective strategy was to use a language other than English; interrogators presumably judged that a proficient response was more likely to come from a multilingual language model than a human speaker of the same language. Other effective strategies exploited the model's lack of real time connection to the world by asking about time or current events. Although the prompts were furnished with timestamps in UTC, the models often made errors when converting between timezones.
Many interrogators directly asked if the witness was an AI, to which it should have no trouble responding "no". However, this strategy was fairly effective. Models would often 'try too hard' to convince the interrogator that they were human, rather than answer this in the off-hand way that a human typically would.
### Reasons
Interrogators' reasons for making a correct AI verdict most often pertained to Linguistic Style. In spite of models being trained on an enormous amount of human text, producing content in a style that was appropriate to the context remained a challenge. Interrogators detected models that were too formal or too informal; whose grammar was too good or unconvincingly bad; that were too verbose or too concise. This likely suggests that i) the appropriate style in this quite unusual context is subtle and ii) that different interrogators had different expectations around style: no one style will be convincing to every interrogator.
The second most frequent reason category provided was Social & Emotional, especially comments that models' responses were generic or unnatural. LLMs learn to produce highly likely completions and are fine-tuned to avoid controversial opinions. These processes might encourage generic responses that are typical overall, but lack the idiosyncrasy typical of an individual: a sort of ecological fallacy.
The reasons that interrogators gave for human verdicts invite a similar picture. Interrogators did not expect AI to make spelling and grammar errors, use an informal tone, or be concise. Interrogators also focused on social and emotional factors such as sense of humor, or being uncooperative in the game. The distribution of reasons for human verdicts looks relatively similar for human and AI witnesses (see Figure 11), suggesting that models are capable of imitating these traits in many cases.
Notably, fairly few reasons pertained to witnesses' knowledge or reasoning abilities, providing further evidence that intelligence in the classical sense is not sufficient to pass the Turing Test. The distribution of verdict reasons could indicate that models are already sufficiently intelligent, and so socio-emotional cues and stylistic fluency are more salient to interrogators. Alternatively, these cues may be more salient in general, and so the test will not be sensitive to intelligence for models who have not mastered them.
## 5 Conclusion
The Turing Test has been widely criticised as an imperfect measure of intelligence: both for being too
easy and too hard. In our public implementation, we find some evidence to support these criticisms. ELIZA, a rules-based system with scant claim to intelligence, was successful in 27% of games, while human participants were judged to be human only 63% of the time.
Nevertheless, we argue that the test has ongoing relevance as a framework to measure fluent social interaction and deception, and for understanding human strategies to adapt to these devices. The most cited reasons for AI verdicts pertained to linguistic style and socio-emotional factors, suggesting that these may be larger obstacles for (current) AI systems than traditional notions of intelligence. Our demographic analyses suggest that interaction with LLMs, or familiarity with how they work, may not be sufficient for correctly identifying them.
The best performing GPT-4 prompt was successful in 41% of games, outperforming GPT-3.5 (14%), but falling short of chance. On the basis of the prompts used here, therefore, we do not find evidence that GPT-4 passes the Turing Test. Despite this, a success rate of 41% suggests that deception by AI models may already be likely, especially in contexts where human interlocutors are less alert to the possibility they are not speaking to a human. AI models that can robustly impersonate people could have widespread social and economic consequences. As model capabilities improve, it will become increasingly important to identify factors that lead to deception and strategies to mitigate it.
## Limitations
As a public online experiment, this work contains several limitations which could limit the reliability of the results. First, participants were recruited via social media, which likely led to a biased sample that is not representative of the general population (see Figure 15). Secondly, participants were not incentivised in any way, meaning that interrogators and witnesses may not have been motivated to competently perform their roles. Some human witnesses engaged in 'trolling' by pretending to be an AI, and some interrogators cited this behavior in reasons for human verdicts (see Figure 20). As a consequence, our results may underestimate human performance and overestimate AI performance. Third, some interrogators mentioned that they personally knew the witness (e.g. they were sitting in the same room). We excluded games where interrogators mentioned this in their reason, but to the extent that this occurred and interrogators did not mention it, we may have overestimated human performance. Fourth, sometimes only one participant was online at a time, meaning that they would be repeatedly matched up with AI witnesses. This led participants to have an _a priori_ belief that a given witness was likely to be AI, which may have led to lower SR for all witness types. We tried to mitigate this by excluding games where an interrogator had played against an AI \(\geq 3\) times in a row; however, this bias likely had an effect on the presented results. Finally, we used a relatively small sample of prompts, which were designed before we had data on how human participants would engage with the game. It seems very likely that much more effective prompts exist, and therefore that our results underestimate GPT-4's potential performance at the Turing Test.
## Ethics Statement
Our design created a risk that one participant could say something abusive to another. We mitigated this risk by using a content filter to prevent abusive messages from being sent. Secondly, we created a system to allow participants to report abuse. We hope the work will have a positive ethical impact by highlighting and measuring deception as a potentially harmful capability of AI, and producing a better understanding of how to mitigate this capability.
## Acknowledgements
We would like to thank Sean Trott, Pamela Riviere, Federico Rossano, Ollie D'Amico, Tania Delgado, and UC San Diego's _Ad Astra_ group for feedback on the design and results.
|
2309.13967 | No free lunch theorems for quantum state measurements as resources in
classical sampling and generative modelling | We prove that $\textit{almost all}$ quantum states, when sampled according to
the Haar measure over the unitary group, have the following property: if copies
of the state are measured to provide latent random variables which are taken as
an input in a classical generative model or sampling algorithm, then any
alternative state whose measurements can generate the same set of target
distributions will do so with the same overall cost. Here, we define the
overall cost as the aggregate computational complexity of sampling from all
possible distributions that can be prepared from the given input distribution.
Our result holds for any length of input and output bitstring and when a
uniformly random bitstring of any length is optionally provided as an
additional resource. As it is easy to construct scenarios where a pair of
alternative candidate states are such that classical simulation of the
preparation thereof is easy in one case and hard in the other, the result can
be viewed as decoupling how hard it is to obtain a latent random variable, and
how useful it is as a resource in classical sampling and generative modelling. | Steven Herbert | 2023-09-25T09:05:16Z | http://arxiv.org/abs/2309.13967v2 | No free lunch theorems for quantum state measurements as resources in classical sampling and generative modelling
###### Abstract
We prove that _almost all_ quantum states, when sampled according to the Haar measure over the unitary group, have the following property: if copies of the state are measured to provide latent random variables which are taken as an input in a classical generative model or sampling algorithm, then any alternative state whose measurements can generate the same set of target distributions will do so with the same overall cost. Here, we define the overall cost as the aggregate computational complexity of sampling from all possible distributions that can be prepared from the given input distribution. Our result holds for any length of input and output bitstring and when a uniformly random bitstring of any length is optionally provided as an additional resource. As it is easy to construct scenarios where a pair of alternative candidate states are such that classical simulation of the preparation thereof is easy in one case and hard in the other, the result can be viewed as decoupling how hard it is to obtain a latent random variable, and how useful it is as a resource in classical sampling and generative modelling.
+
Footnote †: Contact: [email protected]
## 1 Introduction
The occurrence of correlations that cannot be prepared (efficiently) by classical means is a characteristic feature of quantum mechanics. These arise in essentially two guises: firstly, using entangled systems it is possible to, for example, violate the Bell inequalities - a process often described as obtaining "stronger than classical correlations"; and secondly, it is known that certain polynomial-size quantum circuits, when measured, sample random processes which are expected to take exponential classical resources to prepare (e.g., Ref. [1]). In this paper, it is the latter of these with which we are primarily concerned and, moreover, sampling classically hard-to-prepare probability distributions is the only thing at which quantum computers can outperform classical computers at present [2, 3] (even that being hotly debated [4]).
In this paper, we are interested in the question of whether such classically hard-to-prepare samples can act as generically useful resources in classical sampling and generative modelling. For example, if we supply quantum state measurements as latent variables in a classical generative model, we ask whether it is the case that samples obtained by measuring a certain quantum state are more useful "on the whole". This question is aptly formalised through resource theory [5, 6]: the task of classical sampling / generative modelling may be summarised as the task of converting some resource randomness (e.g., latent random variables in generative modelling) into samples from the target distribution, and we thus ask whether certain latent distributions reduce the overall resources needed to prepare some distribution chosen uniformly from a family of target distributions. Note the importance of formalising this question in terms of a _family_ of target distributions - as the ideal resource distribution for a _single_ target distribution is, of course, always the target distribution itself.
Should such a generically beneficial resource distribution exist, then the advantages to classical generative modelling are immediate: a parameterised quantum circuit (PQC), executed on a present or near-term quantum computer could be trained to sample the generically beneficial resource distribution; samples from
which could then be fed into classical generative models as the latent variables. As a result, the classical generative models would be more expressive compared to those fed with standard "classical" randomness (we later elaborate upon why, should such a generically beneficial resource distribution exist, the result would be enhanced expressivity). The training of the PQC would be a one-off event, and the PQC would then provide an enduring supply of enhanced randomness, and so the cost of training the PQC could be amortised across an unbounded number of uses of the resource (so that the cost to any particular generative model could be neglected).
According to the literature there is reason to suppose directly contradictory things about what would make a good resource for this purpose. On the one hand, the fact that there exist quantum states that sample classically hard-to-prepare distributions when measured (as introduced above) suggests that complex highly-entangled states may have higher resource value; on the other hand, the fact that a resource theory of _uncomplexity_ has recently been proved for quantum states in general [7] gives us a hint that simple, unentangled states may have higher resource value. (Note that the resource theory of uncomplexity addresses a different question to ours: it is concerned with the resource value of quantum states that may then undergo further unitary evolution - or in fact slightly noisy evolution - whereas we are concerned with those states being measured to create classical randomness.)1
Footnote 1: Another interesting, and very recent resource theory is that of tensor networks [8], demonstrating the general importance of resource theories to our understanding of the role that quantum information can play in computation.
With these two incompatible precedents in place, it is perhaps unsurprising that neither is true - at least in the most general sense - and in fact in this paper we prove that almost all quantum states (when copies of the state are measured to provide latent variables for a classical generative model) have equal resource value to any suitable alternative - _when all of the target distributions that can be prepared with that latent distribution are treated as equally valuable._ We prove two results that we term _no free lunch theorems for quantum state measurements as resources in classical sampling and generative modelling_, following the lead of similar-in-spirit no free lunch theorems of optimisation and machine learning [9, 10]. The remainder of this paper is organised as follows: in Section 2 we define the set-up in which we prove the no free lunch theorems; to complete the set-up it is incumbent upon us to prove some preliminary results in Section 3; which leads into the main results (the no free lunch theorems themselves) in Section 4; finally in Section 5 we discuss our results and their practical implications.
## 2 No free lunch theorem set-up
In order to prove the no free lunch theorems, we first need to define:
1. what, exactly, is meant by measuring a quantum state to obtain resource samples (Section 2.1);
2. a suitable general framework that captures classical sampling and generative modelling (Section 2.2);
3. the cost of preparing the family of target distributions (Section 2.3).
### Measuring a quantum state to obtain resource samples
When performing a measurement on a quantum state, in general, one has a number of options: firstly, one may choose to completely or partially measure the state; and secondly, one is free to choose a measurement basis. On the former, we suppose that all qubits are measured and presented as the resource sample (the downstream algorithm that consumes that sample can, of course, not use certain bits obtained from the measurement, and hence there is no real loss in generality in assuming this). On the latter, the measurement basis can be chosen in one of two ways:
1. the measurement basis can be fixed;
2. the measurement basis can be selected at random according to some probability distribution.
(As we are concerned with the state preparation and measurement being a stand-alone, isolated procedure that is undertaken to prepare the resource samples, we can omit from consideration more elaborate possibilities such as the measurement basis being dependent on the result of some other algorithmic component.)
In the case where the measurement basis is fixed, we can always consider that unitary evolution is applied to the state such that the measurement is in the computational basis - and without loss of generality we can then absorb this evolution into the state preparation itself, such that the goal is to prepare a state that is to be measured in the computational basis.
In the case where the measurement basis is chosen at random, without loss of generality we can represent this as shown in the left-hand side of the following figure, which is equivalent to the right-hand side by the principle of deferred measurement [11, p. 186]:
(Here the control - classical and quantum - is intended to represent the fact that a different unitary can be applied for each computational basis state.) So we can see that even a random choice of measurement basis can be represented as a fixed quantum state that is then measured in the computational basis - this process is essentially akin to mixed state purification [11, Chapter 9]. We can thus see that a random choice of measurement basis amounts to nothing more than a special case of measuring a larger state in the computational basis, and so henceforth we consider the resource distribution to always be sampled by a (complete) computational basis measurement of the resource state.
The final question we must address to complete this part of the set-up is to specify the set of possible quantum states that are to be measured to yield the classical resource samples. Moreover, as quantum states vary continuously, in order to quantify the proportion of states that have some property, we must specify both the set and a distribution over this set. We take the conventional (and appropriate) approach of considering quantum states prepared by some Haar random unitary, \(U\), applied to the zero state: so we consider states of the form \(U\ket{0^{n}}\). Thus the set is the set of all complex unit vectors (of the appropriate size) and by this definition the Haar measure quantifies the occurrence of each.
### A framework for classical generative models and sampling algorithms
A generative model maps latent random variables to random variables sampled from the target distribution2. A _classical_ generative model must further be such that this map is realised by a classical model of computation. For our purposes we can work directly with logic circuits as a suitable model of classical computation3. Moreover, we can further specify that a reversible form of classical logic gates should be used. The gateset {NAND, FANOUT} is universal for classical computation - and these can be reversibly executed with the Toffoli gate along with a supply of ancillas, at least two of which must be initialised in the \(\ket{1}\) state. For our purposes it is convenient to assume that all of the ancillas are initialised in the \(\ket{0}\) state, and so we formulate the generative model as a circuit of gates from \(\{\text{Toffoli},X\}\) applied to the latent random variable. Both of these gates are permutations of the computational basis states, and furthermore we assume that one ancilla (initialised in the \(\ket{0}\) state) is available to guarantee that any computational basis state permutation can be achieved, with the ancilla uncomputed back to \(\ket{0}\) (see, for example, Ref. [13] for constructions to perform arbitrary computational basis state transpositions - which generate the full group of permutations). As well as a resource random variable, obtained by measuring a quantum state, \(\ket{\psi}\), we also allow the generative model access to a supply of uniform random bits. So it follows that our classical generative model is represented:
Footnote 2: Whilst this description is correct and apt for our purposes, proponents of generative modelling may deem such an abstract definition as missing some important subtleties. _Generative adversarial networks_ are one of the most widely-used forms of generative model, and the reader who is interested in understanding these in more detail is pointed to Ref. [12].
Footnote 3: Turing machines would provide an alternative, formal model of computation – and this would be crucial if we were to address instances of a family of problems of varying sizes – however here we are interested in what can be achieved with a finite set of computational resources of fixed size, and hence there is no loss in generality in working directly with circuits.
**Definition 1**.: _A generative model is a circuit of the form:_
_where \(P\) is any permutation. Here we have standardised the placement of the registers for: the supply of \(n_{0}\) ancillas; the supply of \(n_{+}\) uniform random bits; the resource latent sample (\(n_{q}\) bits); and the output random variable, \(|y\rangle\) (\(n_{y}\) bits).4_
Footnote 4: These standard forms may appear “upside down” compared to the usual convention (for example in quantum circuit oracles) of placing the input on the top-most qubits (followed by the pool of ancillas) and the output on the bottom-most qubits, however the placement is arbitrary (as long as it remains fixed) and this ordering slightly simplifies the following analysis.
Here (and throughout the remainder of the paper) we have used quantum circuit notation, even though after the measurements all of the states are computational basis states, i.e., classical information. A sampling problem differs in that, rather than just mapping a latent random variable to a target distribution, there is additionally a classical input bitstring that parameterises the sampling problem, such that (in general) a different distribution must be prepared for each input (see e.g., Ref. [14, Definition 11]).
**Definition 2**.: _A classical sampling algorithm is a circuit of the form:_
_where \(P\) is any permutation. Here we have standardised the placement of the registers for: the supply of \(n_{0}\) ancillas; the supply of \(n_{+}\) uniform random bits; the resource latent sample (\(n_{q}\) bits); the \(n_{x}\)-bit input; and the output random variable, \(|y\rangle\) (\(n_{y}\) bits)._
On a more general note, it is worth observing that sampling problems generalise decision problems (in a decision problem we sample a single bit, and interpret the outcome as the true / false decision), however it is a little misleading to infer that our no free lunch theorem applies also to decision problems. This is because the no free lunch theorem is expressed in terms of there being no resource distribution that is generically useful independently of some restriction on the family of target distributions and / or the acceptable approximation: in the case of decision problems, the acceptable approximation typically _is_ explicitly dictated _a priori_ (for instance the familiar \(2/3\) rule for defining complexity classes such as BPP). For this reason, any connection to decision problems should be downplayed, and it is a fascinating open question whether a no free lunch theorem could be proven for quantum state measurements as resources in (classical) decision problems - and one that is unlikely to be resolved without a radically different approach to that which we take here.
Finally, we can handle the analysis a little more concisely if we defer the measurements:
**Lemma 3**.: _Circuits of the form:_
_sample the same distribution, for any given input quantum state \(|\psi\rangle\) and permutational circuit, \(P\)._
Proof.: Consider the probability of measuring some output bitstring \(x\). First addressing circuit (b), in which case the probability is \(|\left\langle x\right|P\left|\psi\right\rangle|^{2}\). Now consider the circuit (a), in which case the probability of sampling the bitstring \(x\) is \(|\left\langle y\right|\psi\rangle|^{2}\) where \(\left|y\right\rangle=P^{\dagger}\left|x\right\rangle\). That is, because \(P\) is a permutation, \(x\) will be sampled if and only if the measurement outcome is \(y\) such that \(P\left|y\right\rangle=PP^{\dagger}\left|x\right\rangle=\left|x\right\rangle\). The probability of this happening is:
\[|\left\langle y\right|\psi\rangle|^{2}=|(\left|y\right\rangle)^{\dagger}\left| \psi\right\rangle|^{2}=|(P^{\dagger}\left|x\right\rangle)^{\dagger}\left| \psi\right\rangle|^{2}=|\left\langle x\right|P\left|\psi\right\rangle|^{2} \tag{1}\]
and thus the probability of measuring any bitstring, \(x\), is identical for the two circuits (a) and (b).
**Remark 4**.: _This result could be obtained by multiple application of the principle of deferred measurement, as the computational (\(Z\)) basis measurements need only be commuted past control and \(X\) operations - and if we are careful with the order, the latter can always be uncontrolled \(X\) gates (that is, devoid of quantum control)._
**Remark 5**.: _This result can actually trivially be generalised to the case where \(P\) is any circuit that implements a computational basis state permutation up to relative phase. Whilst this is not important for our purposes, this feature may be of interest more generally._
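Lemma 3 is easy to check numerically for small systems. The following sketch compares the outcome distributions of the two circuit orderings for a random 3-qubit state and a random basis-state permutation, representing \(P\) directly as a relabelling of basis states rather than as an explicit Toffoli/X circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
N = 2**n

# A random (normalised complex Gaussian) state and a random permutation
# of the computational basis states: P|y> = |perm[y]>.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
perm = rng.permutation(N)

# Circuit (b): apply P, then measure. Outcome probabilities |<x|P|psi>|^2.
p_b = np.zeros(N)
p_b[perm] = np.abs(psi) ** 2

# Circuit (a): measure, then apply P to the classical outcome. Outcome y
# occurs with probability |<y|psi>|^2 and is relabelled to x = perm[y].
p_a = np.zeros(N)
for y in range(N):
    p_a[perm[y]] += np.abs(psi[y]) ** 2

assert np.allclose(p_a, p_b)  # the two orderings sample the same distribution
```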
Noting that the supply of \(n_{0}\) ancillas is already a computational basis state - so that we could add a measurement of these ancillas before \(P\) in the diagrams in Definitions 1 and 2 without changing the operation at all - we can equally think of the classical generative model as:
and a classical sampling algorithm as:
### The cost of preparing the family of target distributions
From the preceding subsection, finding a generative model to sample a certain target distribution amounts to selecting a suitable permutation matrix, \(P\). So it follows that for the no free lunch theorem there must be a defined notion of cost associated therewith. For instance, if the generative model is actually executed as a quantum circuit, then the gate count may provide a suitable cost; additionally, the cost may incorporate memory concerns, for example by incentivising not exhausting all of the available ancillas when possible (even at the cost of an increased gate count). The cost may alternatively be some quantification of the resources needed to execute the corresponding logical circuit in a classical computer: for instance by identifying the size of an artificial neural network that could encapsulate \(P\). The no free lunch theorems we prove are general, in the sense that no specific notion of cost is required: all that we need is that there is a well-defined cost associated with every \(P\). For this, we define the function \(\mathrm{Cost}_{\mathrm{single}}(P)\) as a function that takes a permutation as the input and returns the cost - in fact, even the term "cost" is suggestive of a single value, whereas the cost could be multi-argument, for example the returned cost could be [gate count, number of ancillas used].
As the generative model is a permutation (i.e., Definition 1), for a fixed input there is a finite number of preparable distributions. Furthermore, in general, a set of different permutations may prepare the same distribution.
**Definition 6**.: _Suppose there are some \(M\) distinct distributions that can be prepared given the latent distribution (and supply of uniform random bits and ancillas), then we define some \(M\) sets - which we term "distribution equivalence classes" - in each case whose members are permutations that prepare the same distribution, given the input. These we denote, \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{M}\)._
From this we can define the _aggregate_ cost:
\[c_{\text{gen-model}}=\min_{P_{1}\in\mathcal{P}_{1},P_{2}\in\mathcal{P}_{2}, \ldots,P_{M}\in\mathcal{P}_{M}}\left[\text{Cost}_{\text{aggregate}}\big{(} \text{Cost}_{\text{single}}(P_{1}),\text{Cost}_{\text{single}}(P_{2}),\ldots, \text{Cost}_{\text{single}}(P_{M})\big{)}\right] \tag{2}\]
which is to say that a member of each equivalence class is chosen such that the overall cost is minimised.
The form of the overall cost is deliberately very general - as the no free lunch theorem is not restricted to any particular way of evaluating the cost - however one should note a couple of crucial features. Firstly, \(\text{Cost}_{\text{aggregate}}\) takes as an input a tuple that includes the cost for preparing each distribution (i.e., by selecting one permutation from each equivalence class). In defining the aggregate cost in this way, we have implicitly recognised that the family of target distributions is the entire set of preparable distributions, and this is appropriate for a no free lunch theorem. Secondly, in order for the aggregate cost to be appropriate for a no free lunch theorem, it should uniformly weight all of the preparable distributions (that is to say, that every distribution that can be prepared is treated as equally valuable). This is achieved when \(\text{Cost}_{\text{aggregate}}\) returns the same value when the order of the input tuple elements undergoes any permutation. Examples of appropriate aggregate cost functions include:
1. the average cost;
2. the maximum cost (for any of the preparable distributions);
3. the negation of the number of distributions that can be prepared within a certain cost.
The third case applies, for example, if we want to maximise the number of distributions that can be prepared by training a certain machine learning model in which case the "certain cost" that we want to be within is the cost associated with this model. Specifically, we mean that if we fix the architecture of a machine learning model (the number of layers and neurons, the activation function etc.) then we want to assess whether certain latent distributions enable a greater range (number) of target distributions to be sampled with the fixed architecture (and the neural network trained accordingly).
In machine learning parlance, this latter cost function may be more naturally thought of as rewarding latent distributions that increase the expressivity of the model at hand. It is worth discussing this matter a little further. In particular, our set-up _only_ associates cost with sample preparation hardness - that is, the computational cost required to map the latent distribution to the target distribution (which, as above, we can connect to expressivity). In machine learning, we typically care not just about expressivity, but also trainability and generalisation capability. Of these, we can immediately see that the setting we consider renders the latter two moot: by taking the set of permutations needed to prepare the family of target distributions as our starting point, we bypass any consideration of how hard it is to determine which permutation best models a certain training set (trainability) and how well that choice fits unseen data from the same target random process (generalisation). Nevertheless, the (in a sense) simpler setting we consider allows us to focus directly on the question that motivated this work in the first place, namely whether selecting a certain latent random process could make it easier to prepare distributions (noting that certain latent random processes are hard to prepare). This rationale is perhaps even clearer when we move onto classical sampling algorithms in general.
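To make the construction concrete, the following toy sketch enumerates the distribution equivalence classes of Definition 6 and evaluates the aggregate cost (2) with \(\text{Cost}_{\text{aggregate}}\) taken as the average, for \(n_{q}=2\), \(n_{0}=n_{+}=0\), \(n_{y}=1\). The latent distribution, the per-permutation cost function (number of basis states moved), and the choice of the first bit as the output register are illustrative assumptions of ours.

```python
import itertools
import numpy as np

# Toy setting: n = 2 bits in total (n_0 = n_+ = 0, n_q = 2), n_y = 1, so a
# generative model is one of the 4! = 24 permutations of the basis states
# {00, 01, 10, 11}, and the output is taken as the marginal on the first bit.
p_latent = np.array([0.5, 0.25, 0.15, 0.10])   # hypothetical latent distribution

def output_dist(perm, p):
    # Distribution of the output bit after relabelling basis state i -> perm[i].
    q = np.zeros(4)
    q[list(perm)] = p
    return (q[0] + q[1], q[2] + q[3])          # P(first bit = 0), P(first bit = 1)

def cost_single(perm):
    # Toy per-circuit cost: number of basis states the permutation moves.
    return sum(i != j for i, j in enumerate(perm))

# Distribution equivalence classes (Definition 6): permutations grouped by
# the output distribution they prepare.
classes = {}
for perm in itertools.permutations(range(4)):
    key = tuple(round(x, 12) for x in output_dist(perm, p_latent))
    classes.setdefault(key, []).append(perm)

# Aggregate cost (2) with Cost_aggregate = average: minimising over one
# representative per class then reduces to averaging the per-class minima.
c_gen_model = np.mean([min(cost_single(p) for p in ps) for ps in classes.values()])
print(len(classes), "preparable distributions; aggregate cost =", c_gen_model)
```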
With regards to sampling algorithms (as distinct from generative models, according to Definition 2), if the input has some \(n_{x}\) bits, then there are some \(2^{n_{x}}\) different inputs and hence \(2^{n_{x}}M\) possibilities that must be prepared (different inputs can be used to prepare the same output distribution). As such, (permutation) matrices of the following form must be prepared:
\[\tilde{P}=\sum_{x\in\{0,1\}^{n_{x}}}P_{x}\otimes\left|x\right\rangle\left\langle x\right| \tag{3}\]
where \(P_{x}\) are \(N\times N\) permutation matrices. We now define further "secondary" distribution equivalence classes, \(\tilde{\mathcal{P}}_{1},\tilde{\mathcal{P}}_{2},\ldots,\tilde{\mathcal{P}}_{2 ^{n_{x}}M}\) where two matrices of the form \(\tilde{P}\) in (3) are in the same secondary distribution
equivalence class if they prepare the same distribution for every input, \(\left|x\right\rangle\). From this, we observe the following:
**Remark 7**.: _If \(n_{0}\), \(n_{+}\), \(n_{y}\) and \(\left|\psi\right\rangle\) are fixed for some generative model, then the distribution equivalence classes determine the secondary distribution equivalence classes for a sampling algorithm with the same \(n_{0}\), \(n_{+}\), \(n_{y}\) and \(\left|\psi\right\rangle\) and any \(n_{x}\)._
From this we can obtain the aggregate cost for sampling algorithms:
\[c_{\text{samp-alg}}=\min_{\tilde{P}_{1}\in\tilde{\mathcal{P}}_{1},\tilde{P}_{ 2}\in\tilde{\mathcal{P}}_{2},\ldots,\tilde{P}_{2^{n_{x}M}\in\tilde{\mathcal{P} }_{2^{n_{x}M}}}}\left[\text{Cost}_{\text{aggregate}}\big{(}\text{Cost}_{\text{ single}}(\tilde{P}_{1}),\text{Cost}_{\text{single}}(\tilde{P}_{2}),\ldots,\text{Cost}_{ \text{single}}(\tilde{P}_{2^{n_{x}M}})\big{)}\right] \tag{4}\]
## 3 Preliminary results
Before giving the main results, we first prove some necessary preliminaries. These include some simple, but important properties of Haar random states5. We take one way of constructing Haar random matrices, from Ref. [16], as a suitable definition for our purposes:
Footnote 5: In this section we use some elementary results from _random matrix theory_, the interested reader is pointed to Ref. [15] for a more comprehensive look at the field.
**Definition 8**.: _A Haar random unitary, \(U\), can be constructed by first filling an appropriately sized matrix, \(A\), with elements of the form \(a_{i,j}+b_{i,j}\mathrm{i}\) where \(a_{i,j}\) and \(b_{i,j}\) are sampled independently and identically distributed (iid) from a unit Gaussian6 (and \(\mathrm{i}=\sqrt{-1}\)); then performing a QR decomposition on \(A\), that is such that \(A=QR\) where \(Q\) is unitary and \(R\) is upper-triangular; and finally setting \(U=Q\Lambda\) where \(\Lambda=\mathrm{diag}(R_{ii}/|R_{ii}|)\)._
Footnote 6: That is, sampling from the Ginibre ensemble [16, Section 3]
**Definition 9**.: _An \(n\)-qubit Haar random state may be obtained by applying a \(2^{n}\times 2^{n}\) Haar random unitary to \(\left|0\right\rangle^{\otimes n}\). That is, by taking the first column of a Haar random unitary._
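Definitions 8 and 9 translate directly into code. The following is a minimal NumPy sketch (the function names are ours, for illustration only):

```python
import numpy as np

def haar_random_unitary(dim: int, rng: np.random.Generator) -> np.ndarray:
    # Fill A with iid standard complex Gaussian entries (the Ginibre ensemble).
    A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    # QR decomposition: A = QR with Q unitary and R upper-triangular.
    Q, R = np.linalg.qr(A)
    # Fix the phase ambiguity: U = Q @ diag(R_ii / |R_ii|).
    diag = np.diagonal(R)
    return Q * (diag / np.abs(diag))

def haar_random_state(n_qubits: int, rng: np.random.Generator) -> np.ndarray:
    # Definition 9: the first column of a Haar random unitary, i.e. U|0...0>.
    return haar_random_unitary(2 ** n_qubits, rng)[:, 0]
```

Unitarity of the output can be checked with `np.allclose(U.conj().T @ U, np.eye(dim))`.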
For our purposes, it is convenient to obtain Haar random samples by sampling the Rayleigh distribution:
\[\text{Rayleigh}(x;\sigma^{2})=\frac{x}{\sigma^{2}}\exp\left(-\frac{x^{2}}{2 \sigma^{2}}\right) \tag{5}\]
From which we get:
**Lemma 10**.: _The magnitudes of the \(2^{n}\) elements of an \(n\)-qubit Haar random state, \(\left|\psi\right\rangle\), are iid \(\sigma^{2}=1\) Rayleigh random variables, with the state then normalised._
Proof.: Following Definition 8, each element of \(A\) is distributed iid as a standard complex normal random variable, and therefore (by a well-known result), the first column of \(A\) can be constructed by sampling \(2^{n}\) iid random variables, represented as the elements of the vector \(\boldsymbol{\alpha}\), such that for \(i\in\{1,\ldots,2^{n}\}\), \(\alpha_{i}\sim\text{Rayleigh}(\alpha_{i};1)\), and then setting:
\[A_{i,1}=\alpha_{i}\exp(2\pi\mathrm{i}\phi_{i}) \tag{6}\]
for some \(\boldsymbol{\phi}\) that does not concern us here.
Next, we note that the first column of \(Q\) for the canonical \(QR\) decomposition of some matrix, \(A\), is obtained simply by normalising the first column of the original matrix, \(A\). According to Definition 8, the first column of the Haar random unitary (prior to normalisation) is then obtained by multiplying by \(\Lambda_{1,1}\). However, \(\Lambda_{1,1}\) is a unit-magnitude complex number, and so the multiplication does not affect the magnitude. Hence, up to relative phase differences between the computational basis state vectors, the Haar random state can be obtained by normalising \(A_{i,1}\).
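Lemma 10 also suggests a cheaper route when only the magnitude profile of a Haar random state is needed: sample the Rayleigh distribution directly rather than constructing a full unitary. A sketch (again with a hypothetical function name):

```python
def haar_state_magnitudes(n_qubits: int, rng: np.random.Generator) -> np.ndarray:
    # Lemma 10: 2^n iid Rayleigh(sigma^2 = 1) draws, then normalise to unit norm.
    mags = rng.rayleigh(scale=1.0, size=2 ** n_qubits)
    return mags / np.linalg.norm(mags)
```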
We now use this result about Haar random states in the specific context of resource states for generative models. Specifically, in the following, we are concerned with some family of generative models with arbitrary but fixed \(n_{0}\), \(n_{+}\), \(n_{q}\) and \(n_{y}\), and we let \(n=n_{0}+n_{+}+n_{q}\); \(N=2^{n}\). In the following it is important to note that \(1\leq n_{y}\leq n\), that is, at least one qubit is measured, and at most all of the qubits are measured.
**Definition 11**.: _Let \(\left|\psi\right\rangle\) be an \(n_{q}\)-qubit quantum state. If \(\left|\psi\right\rangle\) is such that each element (when represented as a \(2^{n_{q}}\) element vector of computational basis state coefficients) has a distinct magnitude, then we call \(\left|\psi\right\rangle\) distinct._
**Definition 12**.: _Let \(\left|\psi\right\rangle\) be an \(n_{q}\)-qubit quantum state. We construct an \(N\)-element set containing \(2^{n_{+}}\) copies of each element7 of \(\left|\psi\right\rangle\), each divided by \(\sqrt{2^{n_{+}}}\), and \(N-2^{n_{+}+n_{q}}\) additional elements equal to zero. We further partition this set into \(2^{n_{y}}\) subsets each with \(2^{n-n_{y}}\) elements. For each subset we sum the squares of the magnitudes of its elements to obtain a value; doing this for every subset, we obtain \(2^{n_{y}}\) positive numbers that sum to one. If the same set of \(2^{n_{y}}\) positive numbers cannot be generated in this way from \(\left|\psi\right\rangle\) in two or more different ways (i.e., by performing two or more different partitions), then we say that \(\left|\psi\right\rangle\) is strongly distinct._
Footnote 7: Here and throughout the paper, we interchangeably use the \(n_{q}\)-qubit state \(\left|\psi\right\rangle\) and the corresponding \(2^{n_{q}}\) element vector: where each element is the coefficient of the corresponding computational basis state. Therefore the subscript \(i\) in \(\left|\psi\right\rangle_{i}\) is to be read as referring to the \(i\)th element of this vector. (Similarly, the later equation (7) defines the \(N\) elements of \(\left|\tilde{\psi}\right\rangle\) when represented as a vector.)
**Lemma 13**.: _Let \(\left|\psi\right\rangle\) be an \(n_{q}\)-qubit Haar random state, then almost surely (a.s.) the state will be distinct and strongly distinct._
Proof.: We begin with the distinct case. As we are concerned only with the magnitudes of the computational basis state coefficients, we need concern ourselves just with the statistical properties of \(\{\alpha_{i}\}_{i\in\{1,\ldots,2^{n_{q}}\}}\). In this case (prior to normalisation), we sample a value supported continuously in \((\mathbb{R}^{+})^{2^{n_{q}}}\), which is further such that no single point is supported with finite probability. If we now consider (for instance) the probability that the first two elements have equal magnitude, we note that they necessarily have equal magnitude prior to normalisation. This means that we are concerned with a subset of the distribution whose support can be represented over \((\mathbb{R}^{+})^{2^{n_{q}}-1}\). Hence the support of the subset in question is measure-zero in the original set \((\mathbb{R}^{+})^{2^{n_{q}}}\). There are a finite number of ways in which at least two elements can be equal in magnitude. Following the above argument, each of these is supported on a measure-zero subset of \((\mathbb{R}^{+})^{2^{n_{q}}}\), and by the union bound the total probability is upper-bounded by the sum thereof, and hence a.s. the Haar random state will be such that all \(2^{n_{q}}\) computational basis state magnitudes are distinct.
We now move on to the strongly distinct case. It follows from the construction that if strong-distinctness fails to hold, there will be two non-equal \(2^{n-n_{y}}\)-element subsets containing elements of \(\left|\psi\right\rangle\) which have the property that if for each set the squares of the magnitudes of the elements are summed, the same value is obtained in each case. As these two subsets are not equal, there must be at least one element of \(\left|\psi\right\rangle\) that appears a different number of times in each of the two sets; we call this element \(\left|\psi\right\rangle_{*}\). We now consider the possibility of strong distinctness failing to hold because this particular pair of unequal sets is such that the magnitude of the elements squared and summed yields the same value in each case. In particular, for any particular realisation of the elements of \(\left|\psi\right\rangle\) except \(\left|\psi\right\rangle_{*}\), this can hold for at most one value of the magnitude of \(\left|\psi\right\rangle_{*}\) (and again this fact holds both before and after normalisation). Therefore the states for which strong-distinctness fails to hold by this particular failure mode are supported on a subset of the total support that may be represented over \((\mathbb{R}^{+})^{2^{n_{q}}-1}\) which is therefore measure-zero in the original set \((\mathbb{R}^{+})^{2^{n_{q}}}\). As there is a finite (albeit large) number of such pairs of sets for which strong-distinctness can fail to hold, again by the union bound we have that strong-distinctness fails to hold for a measure-zero subset of the support, and thus a.s. a Haar random state is strongly distinct.
With these properties of Haar random states proven, we progress to give some more necessary preliminary results. First, we note that according to the formulation in Section 2.2, if some \(\left|\psi\right\rangle\) is measured to form the _resource state_, then the _input state_ to the classical generative model (with the measurement treated as deferred) is of the form:
\[\left|\tilde{\psi}\right\rangle=\frac{1}{\sqrt{2^{n_{+}}}}\Bigg{[}\ \underbrace{\left|\psi\right\rangle_{1},\left|\psi\right\rangle_{1},\ldots,\left|\psi\right\rangle_{1}}_{2^{n_{+}}\text{ times}},\ \underbrace{\left|\psi\right\rangle_{2},\left|\psi\right\rangle_{2},\ldots,\left|\psi\right\rangle_{2}}_{2^{n_{+}}\text{ times}},\ \ldots,\ \underbrace{\left|\psi\right\rangle_{2^{n_{q}}},\left|\psi\right\rangle_{2^{n_{q}}},\ldots,\left|\psi\right\rangle_{2^{n_{q}}}}_{2^{n_{+}}\text{ times}},\ \underbrace{0,0,\ldots,0}_{N-2^{n_{+}+n_{q}}\text{ times}}\ \Bigg{]}^{T} \tag{7}\]
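Assuming the layout in (7), building the input state from a resource state is a one-liner in NumPy; the sketch below (with our hypothetical `input_state` helper) mirrors the equation term by term:

```python
def input_state(psi: np.ndarray, n_plus: int, N: int) -> np.ndarray:
    # Equation (7): each coefficient of |psi> appears 2^{n_+} times in a row,
    # scaled by 1/sqrt(2^{n_+}); the remaining N - 2^{n_+ + n_q} entries are zero.
    reps = np.repeat(psi, 2 ** n_plus) / np.sqrt(2.0 ** n_plus)
    return np.concatenate([reps, np.zeros(N - reps.size, dtype=reps.dtype)])
```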
We are now in a position to elaborate on the distribution equivalence classes and their significance. To do so, we first define two sets of block matrices, \(\mathcal{V}\) and \(\mathcal{W}\).
**Definition 14**.: \(\mathcal{V}\) _is the set of \(N\times N\) matrices, where \(V\in\mathcal{V}\) if and only if:_
\[V=\begin{bmatrix}\tilde{V}_{1}&0&\dots&0&0\\ 0&\tilde{V}_{2}&\dots&0&0\\ \vdots&\vdots&\ddots&0&0\\ 0&0&0&\tilde{V}_{2^{n_{q}}}&0\\ 0&0&0&0&\tilde{V}_{2^{n_{q}}+1}\end{bmatrix} \tag{8}\]
_where \(\tilde{V}_{1}\dots\tilde{V}_{2^{n_{q}}}\) are each any \(2^{n_{+}}\times 2^{n_{+}}\) permutation matrix and \(\tilde{V}_{2^{n_{q}}+1}\) is any \((N-2^{n_{+}+n_{q}})\times(N-2^{n_{+}+n_{q}})\) permutation matrix._
**Definition 15**.: \(\mathcal{W}\) _is the set of \(N\times N\) block diagonal matrices where \(W\in\mathcal{W}\) if and only if:_
\[W=\begin{bmatrix}\tilde{W}_{1}&0&\dots&0\\ 0&\tilde{W}_{2}&\dots&0\\ \vdots&\vdots&\ddots&0\\ 0&0&0&\tilde{W}_{2^{n_{y}}}\end{bmatrix} \tag{9}\]
_where \(\tilde{W}_{1}\dots\tilde{W}_{2^{n_{y}}}\) are each any \(2^{n-n_{y}}\times 2^{n-n_{y}}\) permutation matrix._
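Random members of \(\mathcal{V}\) and \(\mathcal{W}\) are straightforward to construct, which is useful for numerically spot-checking the results that follow. A sketch using SciPy (helper names are ours; the tail block of \(V\) is skipped when \(N=2^{n_{+}+n_{q}}\)):

```python
import numpy as np
from scipy.linalg import block_diag

def random_permutation_matrix(dim: int, rng: np.random.Generator) -> np.ndarray:
    return np.eye(dim)[rng.permutation(dim)]

def random_V(n_plus: int, n_q: int, N: int, rng) -> np.ndarray:
    # Definition 14: 2^{n_q} blocks of size 2^{n_+}, then one block covering
    # the zero-padded tail of length N - 2^{n_+ + n_q} (if it is non-empty).
    blocks = [random_permutation_matrix(2 ** n_plus, rng) for _ in range(2 ** n_q)]
    tail = N - 2 ** (n_plus + n_q)
    if tail > 0:
        blocks.append(random_permutation_matrix(tail, rng))
    return block_diag(*blocks)

def random_W(n: int, n_y: int, rng) -> np.ndarray:
    # Definition 15: 2^{n_y} blocks, each of size 2^{n - n_y}.
    return block_diag(*[random_permutation_matrix(2 ** (n - n_y), rng)
                        for _ in range(2 ** n_y)])
```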
**Remark 16**.: \(V\in\mathcal{V}\implies V^{\dagger}\in\mathcal{V}\) _and \(W\in\mathcal{W}\implies W^{\dagger}\in\mathcal{W}\)._
We now use these forms of matrices to define a second notion of equivalence:
**Definition 17**.: _Let \(P\) and \(S\) be \(N\times N\) permutation matrices. We say \(P\) and \(S\) belong to the same "multiplicative equivalence" class if \(P\) can be expressed as \(P=WSV\) for some \(W\in\mathcal{W}\) and \(V\in\mathcal{V}\)._
**Remark 18**.: _Owing to the fact that \(V\in\mathcal{V}\implies V^{\dagger}\in\mathcal{V}\) and \(W\in\mathcal{W}\implies W^{\dagger}\in\mathcal{W}\), if \(P\) can be expressed \(P=WSV\) for \(V\in\mathcal{V}\) and \(W\in\mathcal{W}\) then \(S=W^{\prime}PV^{\prime}\) for some \(V^{\prime}\in\mathcal{V}\) and \(W^{\prime}\in\mathcal{W}\)._
**Remark 19**.: _Specifying \(n_{0}\), \(n_{+}\), \(n_{q}\), and \(n_{y}\) fixes the multiplicative equivalence classes, and their members._
**Lemma 20**.: _Let \(P\) and \(S\) be \(N\times N\) permutation matrices_
1. _for any resource state,_ \(|\psi\rangle\)_, if_ \(P\) _can be expressed as_ \(P=WSV\) _for some_ \(W\in\mathcal{W}\) _and_ \(V\in\mathcal{V}\) _then_ \(P\) _and_ \(S\) _prepare the same distribution;_
2. _for any resource state,_ \(|\psi\rangle\)_, which is distinct and strongly distinct, if_ \(P\) _cannot be expressed as_ \(WSV\) _for any_ \(W\in\mathcal{W}\) _and_ \(V\in\mathcal{V}\)_, then_ \(P\) _and_ \(S\) _prepare different distributions._
_Therefore for any resource state which is distinct and strongly distinct, it follows that \(P\) and \(S\) are in the same distribution equivalence class (Definition 6) if and only if they are in the same multiplicative equivalence class (Definition 17)._
Proof.: Starting with claim (i): for any resource state, \(|\psi\rangle\), we obtain the input state, \(|\tilde{\psi}\rangle\), from (7). Using the decomposition \(P=WSV\) for \(W\in\mathcal{W}\) and \(V\in\mathcal{V}\), we can easily see \(V\,|\tilde{\psi}\rangle=|\tilde{\psi}\rangle\) as the operation of \(V\) is simply to permute elements that are, by construction, equal. We also have \(V\,|\tilde{\psi}\rangle=|\tilde{\psi}\rangle\implies SV\,|\tilde{\psi}\rangle=S\,|\tilde{\psi}\rangle\). Owing to the fact that only the first \(n_{y}\) qubits are measured to form the sample, any permutation of the elements \(\{c\cdot 2^{n-n_{y}}+1,\ldots,(c+1)\cdot 2^{n-n_{y}}\}\) of the prepared state for any \(c\in\{0,\ldots,2^{n_{y}}-1\}\) does not change the distribution being sampled. It follows that the application of any matrix \(W\in\mathcal{W}\) does not affect the distribution sampled. Therefore we have that \(WS|\tilde{\psi}\rangle\) samples the same distribution as \(S|\tilde{\psi}\rangle\); but from the above \(WS|\tilde{\psi}\rangle=WSV|\tilde{\psi}\rangle=P\,|\tilde{\psi}\rangle\) and so for any \(|\tilde{\psi}\rangle\), \(P|\tilde{\psi}\rangle\) and \(S|\tilde{\psi}\rangle\) sample the same distribution, as claimed.
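The measurement model used in this proof is easy to make concrete: the distribution over the first \(n_{y}\) qubits is obtained by summing squared magnitudes over contiguous blocks of \(2^{n-n_{y}}\) amplitudes. The sketch below (our function name) can be used to verify claim (i) numerically, e.g. that `sampled_distribution(W @ S @ psi_t, n_y)` matches `sampled_distribution(S @ psi_t, n_y)`:

```python
def sampled_distribution(state: np.ndarray, n_y: int) -> np.ndarray:
    # Outcome c of measuring the first n_y qubits has probability equal to the
    # summed squared magnitudes of the c-th block of 2^{n - n_y} amplitudes.
    probs = np.abs(state) ** 2
    return probs.reshape(2 ** n_y, -1).sum(axis=1)
```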
For claim (ii), we consider two ways in which \(P\) and \(S\) may sample the same distribution when applied to the input, \(|\tilde{\psi}\rangle\):
1. The same distributions will be prepared if \(P\left|\tilde{\psi}\right\rangle\) and \(S\left|\tilde{\psi}\right\rangle\) differ only by elements in \(2^{n-n_{y}}\) sized blocks on the leading diagonal being permuted. In this case, we can write \(P\left|\tilde{\psi}\right\rangle=WS\left|\tilde{\psi}\right\rangle\) for some \(W\in\mathcal{W}\). Therefore we can easily rearrange to give \(P^{\dagger}WS\left|\tilde{\psi}\right\rangle=\left|\tilde{\psi}\right\rangle\). However, we have restricted to the case where the resource state is distinct, and so this holds if and only if \(P^{\dagger}WS=V^{\dagger}\implies P^{\dagger}WSV=I\implies WSV=P\), for some \(V^{\dagger}\in\mathcal{V}\). Finally, we recall that if \(V^{\dagger}\) is a member of \(\mathcal{V}\) then so is \((V^{\dagger})^{\dagger}=V\), and so in this case we have that \(P=WSV\) for \(W\in\mathcal{W},V\in\mathcal{V}\). However, this contradicts the premise that they are not in the same multiplicative equivalence class.
2. \(P\left|\tilde{\psi}\right\rangle\) and \(S\left|\tilde{\psi}\right\rangle\) may differ more generally. However, the possibility that these then prepare the same distribution is exactly that which is precluded by the definition of strong distinctness (Definition 12) of \(\left|\psi\right\rangle\).
From these two cases, as per the second claim, for any resource state which is distinct and strongly distinct, \(P\) and \(S\) prepare different distributions if they cannot be written as \(P=WSV\); that is, if they are in different multiplicative equivalence classes.
We need one final lemma to prove the no free lunch theorems in the next section.
**Lemma 21**.: _Let \(\left|\omega\right\rangle\) be an \(n_{q}\)-qubit resource state. Further let \(M^{*}\) be the number of multiplicative equivalence classes for an \(n_{q}\) qubit resource state, given some arbitrary (but fixed) \(n_{0}\), \(n_{+}\) and \(n_{y}\). If either of the following fail to hold:_
* \(\left|\omega\right\rangle\) _is distinct;_
* \(\left|\omega\right\rangle\) _is strongly distinct;_
_then the number of distribution equivalence classes will be strictly less than \(M^{*}\)._
Proof.: Addressing the first case: from the first claim in Lemma 20, permutation matrices in the same multiplicative equivalence class prepare the same distribution, for _any_ resource state, and so the number of distributions that can be prepared is at most the number of multiplicative equivalence classes. In the case of resource states that are distinct and strongly distinct, permutations in different multiplicative equivalence classes prepare distinct distributions (the second claim in Lemma 20) and so in this case the maximum possible number of distributions can be prepared. We will now show that, for a resource state, \(\left|\omega\right\rangle\), with at least two of its computational basis state magnitudes equal (i.e., distinctness fails to hold), at least two distinct multiplicative equivalence classes will map to the same distribution equivalence class, and hence this suffices to show that strictly fewer than \(M^{*}\) distributions will be preparable.
Consider \(\left|\omega\right\rangle\), which is such that (say) the \(i\)th and \(j\)th elements have equal magnitude, and further define \(i^{*}=2^{n_{+}}i\), \(j^{*}=2^{n_{+}}j\). (That is, \(i^{*}\) indexes an element of the input state that is equal to the \(i\)th element of the corresponding resource state, renormalised; and \(j^{*}\) indexes an element of the input state that is equal to the \(j\)th element of the corresponding resource state, renormalised.) We now consider two permutations:
\[P=T(i^{*},1)\,T(j^{*},N),\qquad S=P\,T(i^{*},j^{*})\]
where \(T(.,.)\) denotes the transposition of elements at locations indexed by the two arguments. The action of \(P\) is to transpose the first element and the \(i^{*}\)th, such that the \(i^{*}\)th element contributes probability mass to the smallest measurement outcome \(\left|00\ldots 0\right\rangle\), and also to transpose the final (\(N\)th) element and the \(j^{*}\)th, such that the \(j^{*}\)th element contributes probability mass to the largest measurement outcome \(\left|11\ldots 1\right\rangle\). The action of \(S\) is the same, but with the \(i^{*}\)th and \(j^{*}\)th elements swapped first, such that the \(j^{*}\)th element contributes probability mass to the smallest measurement outcome, \(\left|00\ldots 0\right\rangle\), and the \(i^{*}\)th element contributes probability mass to the greatest measurement outcome \(\left|11\ldots 1\right\rangle\). We further let \(\left|\psi\right\rangle\) be an \(n_{q}\) qubit resource state which is distinct and strongly distinct, and consider the following rationale:
* As the \(i\)th and \(j\)th computational basis states of \(\left|\psi\right\rangle\) have distinct magnitudes, \(P\left|\tilde{\psi}\right\rangle\) and \(S\left|\tilde{\psi}\right\rangle\) prepare different distributions: therefore \(P\) and \(S\) must be in different distribution equivalence classes;
* in the case of resource states which are distinct and strongly distinct, the multiplicative equivalence classes are equal to the distribution equivalence classes, and so as \(P\) and \(S\) are in different distribution equivalence classes, they are also in different multiplicative equivalence classes (recall also that the multiplicative equivalence classes depend only on \(n_{0}\), \(n_{+}\), \(n_{q}\), and \(n_{y}\) and not the resource state, i.e., Remark 19);
* however, as the \(i\)th and \(j\)th computational basis states of \(\ket{\omega}\) have the same magnitude, \(P\ket{\tilde{\omega}}\) and \(S\ket{\tilde{\omega}}\) prepare the same distribution;
* so it is the case that for the input \(\ket{\tilde{\omega}}\) two different multiplicative equivalence classes (i.e., those to which \(P\) and \(S\) each belong) prepare the same distribution, and so the total number of distribution equivalence classes is smaller than the number of multiplicative equivalence classes.
Thus in the case of distinctness failing to hold, we indeed have that the number of distribution equivalence classes is strictly smaller than \(M^{*}\).
For the case of a resource state, \(\ket{\omega}\), for which strong distinctness fails to hold, the construction has already done a lot of work for us. In particular, notice that the set partitioned in Definition 12 is exactly the elements of \(\ket{\tilde{\omega}}\) (constructed from \(\ket{\omega}\) according to (7)). We can further think of the partition as the process of constructing a vector where the first \(2^{n-n_{y}}\) elements are the elements of one of the subsets, the next \(2^{n-n_{y}}\) elements are the elements of another subset and so on. Using the fact that strong distinctness fails to hold for \(\ket{\omega}\), we can therefore define a pair of permutation matrices \(P\) and \(S\) which: (i) when applied to \(\ket{\tilde{\omega}}\) prepare the same distribution; (ii) when applied to a state, \(\ket{\tilde{\psi}}\) (obtained from a resource state \(\ket{\psi}\) that is distinct and strongly distinct), prepare different distributions. From this we directly obtain the fact that \(P\) and \(S\) must be in different multiplicative equivalence classes, and so the fact that the number of preparable distributions is less than \(M^{*}\) follows directly from the fact that matrices from two different multiplicative equivalence classes prepare the same distribution in the case of resource state \(\ket{\omega}\).
## 4 Main results: no free lunch theorems
**Theorem 22**.: _Almost all quantum states (obtained by sampling uniformly according to the Haar measure) have the following property: if copies of the state are measured, with the outcomes used as latent random variables in a classical generative model that also has access to any number of (or no) uniformly random bits and some specified number of ancillas, then any alternative quantum state, used in place of the first state, and whose measurements can generate the same set of target distributions will do so with the same overall cost._
Proof.: Let \(\ket{\psi}\) be an \(n_{q}\)-qubit Haar random state; from Lemma 13 it will a.s. be distinct and strongly distinct. From Lemma 21, any candidate alternative resource state which is not both distinct and strongly distinct will prepare strictly fewer distributions than in the case of using \(\ket{\psi}\), and so only alternative resource states which are distinct and strongly distinct are viable (as the stated requirement is that any alternative resource state prepares the same set of target distributions, and clearly this cannot be possible if the total number of preparable distributions is less than for \(\ket{\psi}\)).
According to Lemma 20, for resource states which are distinct and strongly distinct, the distribution equivalence classes are the same as the multiplicative equivalence classes. However, from Remark 19, the multiplicative equivalence classes depend only on the parameters \(n_{0}\), \(n_{+}\), \(n_{q}\) and \(n_{y}\) and not on the resource state. It is therefore the case that, for resource states which are distinct and strongly distinct, the distribution equivalence classes are independent of the resource state. Notably this setting applies to the sampled resource state if it is distinct and strongly distinct (which a.s. it will be) _and_ therefore to any viable alternative resource states.
Finally, according to (2) the aggregate cost depends only on the distribution equivalence classes, and hence as these are a.s. identical for the sampled resource state and any viable alternative, it follows that the aggregate cost is identical.
**Corollary 23**.: _Almost all quantum states (obtained by sampling uniformly according to the Haar measure) have the following property: if copies of the state are measured, with the outcomes used as resources in a
classical sampling algorithm that also has access to any number of (or no) uniformly random bits and some specified number of ancillas, then any alternative quantum state, used in place of the first state, and whose measurements can generate the same set of target distributions will do so with the same overall cost._
Proof.: From Remark 7, for generative models with some fixed \(n_{0}\), \(n_{+}\), \(n_{y}\) and \(|\psi\rangle\) the distribution equivalence classes determine the secondary distribution equivalence classes for sampling algorithms with the same \(n_{0}\), \(n_{+}\), \(n_{y}\) and \(|\psi\rangle\) and any \(n_{x}\). According to Theorem 22, the distribution equivalence classes may be determined by \(n_{0}\), \(n_{+}\), \(n_{y}\) and \(n_{q}\) when \(|\psi\rangle\) is distinct and strongly distinct (that is, the final quantity, \(n_{q}\), sufficing in place of \(|\psi\rangle\) when it is distinct and strongly distinct), and hence the secondary distribution equivalence classes are determined by \(n_{0}\), \(n_{+}\), \(n_{y}\), \(n_{q}\) and \(n_{x}\) when the resource state is distinct and strongly distinct. From (4), the secondary distribution equivalence classes suffice to determine the cost of the sampling algorithm, and following the same rationale as in Theorem 22, a resource state will a.s. be distinct and strongly distinct, and therefore any viable alternative resource state (i.e., that can prepare the same set of target distributions) will also be distinct and strongly distinct; thus in each case the aggregate cost of classical sampling will be the same.
## 5 Discussion
The results that we have derived in this paper arise from a simple starting principle, namely that classical sampling algorithms / generative models are nothing more than computational basis state permutations, and so the cost of preparing every distribution that can be sampled is the aggregate cost of performing a set of permutations. We show that, in almost all cases, this set is independent of the specific resource state and so it follows that there is no free lunch: any resource state that is well-suited to prepare certain distributions, will be such that it is expensive to prepare other distributions.
As a consequence of this result the classical hardness-to-sample is not indicative of a probability distribution being more useful in a general sense. For instance, one can easily see that a sufficient condition for two \(n_{q}\)-qubit candidate resource states to prepare the same family of target distributions is that they differ by a computational basis state permutation (so when written as \(2^{n_{q}}\) element vectors, the same values appear in each case, but with the elements they reside in permuted). However, it is easy to construct examples of a pair of states that differ by a computational basis state permutation where (i) one state is a tensor-product state and the other is (potentially highly-) entangled; (ii) one state is a stabiliser state and the other is not; (iii) the preparation of one state can be efficiently simulated by tensor-network contraction methods, and the other cannot. In each of these three cases, we have a pair of states of equal resource value (in the general sense considered herein), even though the preparation of one of the pair is hard to classically simulate, whilst the other is easy.
Turning now to another aspect of our result, the general formulation we use is perhaps a little artificial in three ways:
1. assuming the resource state can be prepared with infinite precision is a little unrealistic - any finite universal gateset can only reach a finite number of states within a fixed number of operations, and even when using gatesets that theoretically include continuously varying parameters (i.e., continuous rotation angles), in practice it is realistic to assume that the parameter has some finite resolution [17];
2. on a similar note, the no free lunch theorems concern _exact_ sampling, but in practice preparing the desired distribution up to a (defined) level of acceptable approximation is likely to suffice (for example, the definition of a sampling problem in Ref. [14, Definition 11] is such that there is automatically some level of acceptable approximation - in a sense our result emphasises the importance of the exact value of this approximation);
3. assuming that the family of target distributions includes all possible distributions that can be prepared from the input is probably unnecessarily general for real-world applications.
These three items, in turn, reveal the essential nature of the no free lunch theorem:
No quantum state when measured to prepare samples taken as latent random variables in classical generative modelling (or general resource samples in classical sampling algorithms) is generically more useful as
a resource, _independently of further information being specified: namely, the level of approximation at which slightly different distributions are taken as the same and / or some restriction on the family of preparable distributions that are of interest_.
Consider now how this further information would impact upon our analysis: (i) allowing some level of approximation would mean that we could merge certain distribution equivalence classes, as they prepare the same distribution up to the given level of approximation; and (ii) considering a restricted family of target distributions would mean that we could omit from the cost function those distribution equivalence classes that prepare the distributions we do not care about. Crucially, the identity of the equivalence classes to merge / omit would _not_ be independent of the resource state. Different resource states would lead to different merges / omissions, and hence the analysis used to prove the no free lunch theorems would break down. In such a scenario, it may be the case that certain latent distributions are more valuable than others.
One way to view the no free lunch theorems that we present in this paper is that they complement existing results about the expected absence of quantum advantage in training a PQC to sample a single target distribution. Results concerning barren plateaus [18] and the general hardness of trainability [19] have thrown serious doubt on the possibility of obtaining (useful) quantum advantage in sampling, in many cases when the target distribution is too specific (i.e., a single target); and our results show that there is no quantum advantage (in the sense that classical hardness of preparation of the resource state has no bearing on its overall utility) when the target state is too general (i.e., is itself to be a latent distribution used to prepare any one of the possible target distributions in a downstream generative model). For example, the hope offered in Ref. [20] that "since the distributions from which near-term quantum computers can efficiently sample are particularly rich and complex, they are promising candidates for representing latent spaces within generative models" will need further clarification and justification to be realised - and cannot hold in the most general sense.
But between these two extremes (complete specificity and complete generality), we may ask if there is a "happy medium" to be found? That is, are there natural restrictions on the family of target distributions and relaxations in acceptable approximation that would mean that some quantum state can be measured to supply generically beneficial resources / latent variables? Or, to put it another way, that it is possible to "match" (in some sense) some family of relevant target distributions with a latent distribution that is easy to prepare quantumly but hard to prepare classically. Certainly such a possibility is an enticing one - for the reasons outlined in Section 1 - however, if nothing else our result has shown that such an outcome will require careful deliberation and further analysis.
## Acknowledgements
Thanks to Alexandre Krajenbrink and Julien Sorci for reviewing an earlier version of this article and providing many valuable suggestions.
|
2304.00059 | Resolving power: A general approach to compare the distinguishing
ability of threshold-free evaluation metrics | Selecting an evaluation metric is fundamental to model development, but
uncertainty remains about when certain metrics are preferable and why. This
paper introduces the concept of resolving power to describe the ability of an
evaluation metric to distinguish between binary classifiers of similar quality.
This ability depends on two attributes: 1. The metric's response to
improvements in classifier quality (its signal), and 2. The metric's sampling
variability (its noise). The paper defines resolving power generically as a
metric's sampling uncertainty scaled by its signal. The primary application of
resolving power is to assess threshold-free evaluation metrics, such as the
area under the receiver operating characteristic curve (AUROC) and the area
under the precision-recall curve (AUPRC). A simulation study compares the AUROC
and the AUPRC in a variety of contexts. It finds that the AUROC generally has
greater resolving power, but that the AUPRC is better when searching among
high-quality classifiers applied to low prevalence outcomes. The paper
concludes by proposing an empirical method to estimate resolving power that can
be applied to any dataset and any initial classification model. | Colin S. Beam | 2023-03-31T18:21:14Z | http://arxiv.org/abs/2304.00059v2 | Resolving power: A general approach to compare the discriminating capacity of threshold-free evaluation metrics
###### Abstract
This paper introduces the concept of _resolving power_ to describe the capacity of an evaluation metric to discriminate between models of similar quality. This capacity depends on two attributes: 1. The metric's response to improvements in model quality (its signal), and 2. The metric's sampling variability (its noise). The paper defines resolving power as a metric's sampling uncertainty scaled by its signal. Resolving power's primary application is to compare the discriminating capacity of threshold-free evaluation metrics, such as the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). A simulation study compares the AUROC and the AUPRC in a variety of contexts. The analysis suggests that the AUROC generally has greater resolving power, but that the AUPRC is superior in some conditions, such as those where high-quality models are applied to low prevalence outcomes. The paper concludes by proposing an empirical method to estimate resolving power that can be applied to any dataset and any initial classification model.
## 1 Introduction
A large and growing collection of metrics are used to evaluate binary classification models. Model evaluation serves a variety of functions (Raschka 2018). One is to provide a good description, meaning that the metric is sensitive to aspects of quality that are relevant to the user. Simple classification accuracy, for example, can be misleading if there is a large skew in the class distribution where one class occurs much more frequently than the other. Evaluation is also used to select the best model from a collection of competitors. This includes selection between different model classes, such as between a simple baseline model and more complex machine learning models. And it includes selection within a model class, as occurs with hyperparameter search during model tuning. Another function of evaluation is to estimate how well a given model will perform on future, unseen cases (Saito and Rehmsmeier 2015). Metric sampling uncertainty impacts all aspects of evaluation, though is especially important for model selection.
The Receiver Operating Characteristic (ROC) curve has become a favored evaluation method for binary classification models, in part due to the shortcomings of simple classification accuracy (Fawcett 2006). More recently, many have argued that the precision-recall curve (PRC) is preferable when there is a strong class imbalance and where there is low value in true-negative predictions (Boyd, Eng, and Page 2013; Saito and Rehmsmeier 2015; Davis and Goadrich 2006). Relative to the ROC curve, the PRC gives more weight to the highest-ranked cases located in the "early retrieval area" of ROC space. These cases are especially important when capacity to act is limited, such as when a health system has resources to intervene on only their sickest patients.
Sampling uncertainty is a neglected aspect of model evaluation within the field of machine learning (Vabalas et al. 2019), but it is essential to account for when data is limited (Boyd, Eng, and Page 2013; Dietterich 1998). There has been scant research that compares the sampling precision of PRC and ROC curves. One exception is found in Zhou (2023), who used a link prediction task on a toy network model with a tunable noise parameter. For this network model problem Zhou concludes that the AUROC and AUPRC are much
more discriminating than balanced precision, and that the AUROC is slightly more discriminating than AUPRC.
This paper seeks a general approach for comparing the discriminating capacity of evaluation metrics. At the same time, it seeks specific conclusions about when and by how much some metrics are better than others. This project is conceptually difficult since evaluation metrics themselves are used to measure quality, each encoding different assumptions about what makes a model better or worse. The paper's strategy is to use a collection of sampling models to construct a quality dimension that serves as the common standard by which to compare evaluation metrics. The sampling models are used to assess how an evaluation metric responds to changes in model quality (its signal) and how much variability it has at a given level of quality (its noise). These two quantities are combined to form an evaluation metric's _resolving power_, which is a type of signal-to-noise ratio. The resolving power of a microscope is its capacity to distinguish between two close objects. By analogy, the resolving power of an evaluation metric describes how well it discriminates between models of similar quality. More specifically, resolving power is defined as a metric's sampling uncertainty mapped to a common scale.
The resolving power approach draws inspiration from Mazzanti (2020), who compares how the AUROC and the AUPRC respond to improvements in classifier quality. Mazzanti's analysis is concerned with each metric's adequacy as a description of performance, so does not consider the issue of sampling variability. It is also limited to just one specific mechanism for model improvement. This paper provides a more general account and considers how conclusions may change under various sequences of improving classifiers. The remainder of the paper presents the resolving power methodology and then applies it to compare the AUROC with the AUPRC.
## 2 ROC and PR curves
Our interest is in classification models that map cases to predicted classes. A _discrete_ classifier is one that only outputs a class label. Applying a discrete classifier to test data produces a 2x2 confusion matrix, with rows corresponding to the predicted class and columns giving the true class. A _scoring_ classifier outputs a number on a continuous scale, such as an estimated probability, that represents the degree to which a case belongs to a class (Fawcett 2006). A scoring classifier can be used to construct a discrete classifier by adopting a decision threshold.
A variety of familiar evaluation metrics may be calculated for discrete classifiers, such as accuracy, recall (hit rate, sensitivity, true positive rate), precision (positive predicted value), specificity, and the F1-score. These are known as single-threshold (or threshold-dependent) metrics since they depend on choosing a specific decision threshold. In contrast, threshold-free metrics use the full range of the original scores. Examples of threshold-free metrics include the AUROC, the AUPRC, the area under the precision-recall-gain curve (AUPRG), and the Brier score, among others. Threshold-free metrics are advantageous for multi-objective optimization since they allow users to adapt the model to a specific context (Flach and Kull 2015). The AUROC and AUPRC are preferred metrics when the primary goal is to achieve good discrimination so that cases are efficiently sorted into the positive and negative classes.
| | **actual +** | **actual -** |
| --- | --- | --- |
| **predicted +** | TP | FP |
| **predicted -** | FN | TN |
| **total** | P | N |

Table 1: An example confusion matrix

The ROC curve depicts the tradeoff between the true positive rate (tpr) on the y-axis and the false positive rate (fpr) on the x-axis. A discrete classifier only gives a single point in ROC space, corresponding to its one confusion matrix. A scoring classifier gives points for every possible confusion matrix that can be formed by varying the decision threshold. The empirical ROC curve interpolates between these points to create a step function. And as the number of points approaches infinity, the empirical curve approaches the population ROC curve.
If a decision threshold is selected to flag 50 percent of all cases and the classifier is no better than random guessing then we expect it to identify half of the positives and half of the negatives, yielding the point \((0.5,0.5)\) in ROC space. Similarly, a random classifier flagging 20 percent of cases is expected to have a recall of 20 percent and a false positive rate of 20 percent. The random guessing classifier, then, is given by the \(y=x\) line in ROC space. A perfect classifier ranks all positive cases above all negative cases, so it corresponds to the step function from \((0,0)\) to \((0,1)\) for all the positives, and then from \((0,1)\) to \((1,1)\) for all the negatives. Classifiers that lie above the identity line but below the perfect step function represent intermediate performance with better classifiers containing points closer to the \((0,1)\) northwest corner of ROC space.
The AUROC summarizes a classifier's performance across all decision thresholds and is found by integrating the ROC curve over the \([0,1]\) range of false positive rates. Larger AUROC values are better, with the random classifier giving an \(\text{AUROC}=0.5\) and the perfect classifier giving an \(\text{AUROC}=1\). The AUROC, as a scalar value, is especially relevant for model tuning and selection. A disadvantage of the AUROC is that it can conceal local differences in performance (Fawcett 2006). For instance, one classifier may be better for highly ranked cases while another is better for those in the intermediate or lower ranks. An important statistical property of the AUROC is that it is equal to the probability that a classifier will rank a randomly chosen positive case higher than a randomly chosen negative case (Green and Swets 1966; Hanley and McNeil 1982).
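This pairwise-ranking interpretation is easy to verify numerically; the following sketch uses scikit-learn, which is an assumption on our part (the paper does not specify its tooling):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 2000)  # scores for the positive class
neg = rng.normal(0.0, 1.0, 2000)  # scores for the negative class

# P(randomly chosen positive scores higher than randomly chosen negative):
pairwise = (pos[:, None] > neg[None, :]).mean()
auroc = roc_auc_score(np.r_[np.ones(2000), np.zeros(2000)], np.r_[pos, neg])
# With continuous scores (no ties), pairwise == auroc up to floating point.
```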
Precision-recall (PR) graphs plot precision on the y-axis and recall on the x-axis. On the PR graph a random classifier corresponds to the horizontal line \(y=\frac{P}{P+N}=\text{prevalence}\) where \(P\) is the number of positive cases and \(N\) is the number of negative cases. PR curves are sensitive to class skew while ROC curves are not. This is because inputs to the ROC curve, the true and false positive rates, only depend on the column sums of the confusion matrix. Precision depends on the row sum of true and false positives so, all else equal, it will decrease with decreasing prevalence. Insensitivity to skew has been described as both an advantage (Fawcett 2006) and disadvantage (Saito and Rehmsmeier 2015) of the ROC curve.
Just like the AUROC, the AUPRC reduces a scoring classifier's performance to a single value, with larger values indicating better performance. The AUPRC may also be interpreted as a probability, namely as the classifier's "average precision" over the \([0,1]\) range of recall values. There is a one-to-one correspondence between empirical ROC and PR curves since they both chart a unique mapping from confusion matrices to points in ROC or PR space (Davis and Goadrich 2006). Davis and Goadrich (2006) show that the AUROC and AUPRC give the same model rankings when one curve "dominates" another. Informally, one curve dominates another if it lies above or equal to it across their domains. A dominating ROC curve will be northwest of the dominated curve, where its tpr is higher, its fpr is lower, or both. And a dominating PR curve will be northeast of a dominated curve, with higher precision, recall, or both across the entire domain. When there is no domination (when two curves cross) the AUROC and AUPRC can give different rankings. In cases of disagreement the AUPRC favors classifiers with better performance in the early retrieval area, which is the region of low false positive rates in ROC space.
Because it gives more weight to the early retrieval area, the precision-recall curve is often recommended for highly-skewed datasets. Yet the empirical PRC is an imprecise estimate of the true curve, especially for data with small sample sizes and strong class imbalance (Brodersen et al. 2010). This raises the question of whether the advantages of the PRC are worth its cost in precision. To answer this question we must be able to compare metrics measured on different scales.
## 3 Mapping between metrics
ROC analysis was initially developed to evaluate electronic sensors, such as radar, during World War II. In the 1950s research psychologists elaborated ROC analysis under the rubric of signal detection theory (SDT), which soon became influential within experimental psychology, psychophysics, and cognitive neuroscience (Wixted 2020). Fundamental to SDT is the specification of two probability distributions: A noise distribution for trials when the signal is absent and a signal distribution for trials when the signal is present (Green and Swets 1966). The binormal model (two Gaussians) is the most common choice for the signal and noise
distributions. The SDT framework can be described in the language of binary classification with signal present and signal absent trials considered members of the positive and negative classes, respectively.
A classification model applied to feature measurements generates class score distributions. For a simple example, suppose the two classes are adult women and men and that there is one feature (predictor) of height. The classification model will just be the identity mapping applied to the height measurements. The binormal model should then be a good approximation for the score distributions.1 Figure 1 shows the binormal model for this example, using height distribution parameters from Our World in Data (Roser, Appel, and Ritchie, 2013). Women have an average height of 164.7 cm with a standard deviation of 7.1 while men have an average height and standard deviation of 178.4 cm and 7.6 cm, respectively. The vertical dashed line is an example of a decision threshold, where any person above 171 cm is classified as a man and any below as a woman (this type of rule might be used in low visibility contexts where height is the most salient feature). Hit rates and false alarm rates can be calculated for that decision threshold, giving one point in ROC space.
Footnote 1: Height is believed to result from the sum of a large number of independent genetic and environmental effects. So by The Central Limit Theorem, the height distribution should be approximately normal.
The amount of overlap in the class score distributions indicates the quality of a classification model. Perfect classifiers have no overlap between class scores while totally overlapping distributions indicate a random classifier. A classification model hypothetically applied to an entire population of feature measurements will give population class score distributions. These distributions, in turn, imply a population ROC curve and its AUROC value. For the binormal model there is an analytic expression for the AUROC as a function of the means and variances of the score distributions (Marzban, 2004). The binormal model parameters shown in Figure 1 imply an AUROC = 0.906. Recall that this means that there is about a 90 percent chance that a randomly selected man will be taller than a randomly selected woman. In contrast to the AUROC, the AUPRC is a function of both the class score distributions and the outcome prevalence. Brodersen et al. (2010) show how to approximate the AUPRC for a given binormal model using numerical integration.
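As a quick check, the standard binormal formula \(\text{AUROC}=\Phi\big((\mu_{+}-\mu_{-})/\sqrt{\sigma_{+}^{2}+\sigma_{-}^{2}}\big)\) reproduces the height example in a few lines of Python (our sketch):

```python
import numpy as np
from scipy.stats import norm

# AUROC = Phi((mu_men - mu_women) / sqrt(sd_men^2 + sd_women^2))
auroc = norm.cdf((178.4 - 164.7) / np.hypot(7.6, 7.1))
print(round(auroc, 3))  # 0.906
```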
Classifier improvement, such as occurs during model tuning, can be represented as a process that diminishes the overlap in the class score distributions. An ordered sequence of increasingly separated distributions constructs a quality dimension that can be used to unify disparate metrics. For each set of score distributions in the sequence we can find the associated pairs of AUROC and AUPRC values, tracing out a curve in the \(\text{AUROC}\times\text{AUPRC}\) plane. This curve forms a mapping that we will use to compare the AUROC and the AUPRC.
Figure 1: A binormal classifier example. Men's and women's heights each approximately follow a normal distribution. The model implies an AUROC of .906. The vertical dashed line at 171 cm is an example decision threshold.

The primary challenge of this strategy is that the mapping between metrics depends on how the score distributions are ordered. There are myriad ways to increase separation between distributions, with each way indicative of different types of improvement. Beginning with a binormal model, we can manipulate separation by adjusting the means or variances and still retain a binormal model. One simple approach, used below, is to add fixed increments to the positive class scores. But a concern is that this may poorly approximate how improvement happens in practice. This paper solves this ambiguity by fiat: It assumes that simple manipulations of score distributions are a reasonable description of model improvement. Nonetheless, one can still test the importance of this assumption by specifying alternative distribution sequences and then assessing their impact.
We have identified the quality dimension as an ordered sequence of distributions, but how should we measure location on this dimension? One option is to just use the model rankings themselves, which will form an ordinal scale (Stevens 1946). Another option is to measure distribution overlap directly using the Bhattacharyya coefficient. Or we can use a measure that relates overlap to model quality, the AUROC and AUPRC being two examples among many. The AUROC has several properties that make it a good choice. Since the AUROC is an area (and a probability), equal differences across the scale represent equal differences in amount. Another advantage, mentioned above, is that it is unaffected by outcome prevalence. The AUROC is also the most popular threshold-free evaluation metric for binary classifiers, making it a natural choice as the standard reference metric.
For our purposes, the AUROC's biggest advantage is that it is agnostic with respect to where changes occur in the score distributions. This fact is easiest to demonstrate with an empirical score distribution, defined as a finite set of risk scores and associated outcomes. Briefly, suppose there are \(n^{+}\) positive cases, \(n^{-}\) negative cases, and that all risk scores are unique. Further suppose that the classifier is not perfect, so \(0.5\leq\text{AUROC}<1\), and we want to improve this by perturbing the risk scores. If we sort all the cases together into a single list ranked by score, then the smallest improvements occur by finding pairs of adjacent risk scores that are "out-of-order," such that the negative case has a higher score than the positive case, and re-ordering these pairs. Re-ordering a single pair will improve the AUROC by \(\frac{1}{n^{+}}\times\frac{1}{n^{-}}\) regardless of where the improvement occurs. This follows from the Riemann sum method of AUROC estimation: resolving any pair of adjacent scores will add a rectangle with base \(\frac{1}{n^{-}}\) and height \(\frac{1}{n^{+}}\) to the area under the ROC curve. In contrast, the AUPRC will improve more for resolving out-of-order pairs that are among the highest-ranked risk scores.
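The constant-increment property is simple to demonstrate empirically. A hedged sketch (scikit-learn assumed; the seed and sample size are arbitrary):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)      # class labels; scores below are unique a.s.
s = rng.normal(size=200)
base = roc_auc_score(y, s)

order = np.argsort(s)            # cases ranked by score, ascending
for lo, hi in zip(order[:-1], order[1:]):
    if y[lo] == 1 and y[hi] == 0:        # adjacent out-of-order pair
        s[lo], s[hi] = s[hi], s[lo]      # re-order it
        break

gain = roc_auc_score(y, s) - base
assert np.isclose(gain, 1 / (y.sum() * (len(y) - y.sum())))
```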
To summarize this section's key points: Classifier quality is gauged by its outputs, the class score distributions. A sequence of increasingly separated class distributions forms a common quality dimension that charts the relationship between different evaluation metrics. A key caveat is that the mapping between metrics is contingent on how the score distributions are separated. Characteristics of the AUROC make it a good choice as the reference measure of model quality. In particular, the AUROC always improves by a constant amount when resolving a pair of out-of-order risk scores.
## 4 Resolving power
The resolving power method can be summarized in four steps:
1. **Sampling model**: Specify the class score distributions, prevalence, and sample size.
2. **Response curves**: Use the sampling model to create a fine grid of improving classifiers. Find each metric's values across the grid.
3. **Random sampling**: Estimate metric sampling uncertainty by drawing random samples from points of interest within the quality grid.
4. **Comparison**: Use the results of steps 2 and 3 to estimate and compare resolving power.
This section illustrates the core mechanics of the approach using an idealized example while later sections move to the applications. Suppose we are interested in comparing the resolving power of the AUROC with that of the "area under the super-great curve" (the AUSGC). We construct a sampling model by specifying the class score distributions, prevalence, and sample size. Next, we create a grid of 1000 models. The first model has totally overlapping distributions for the random classifier and then we gradually shift the distributions apart so that the 1000th model has almost no overlap. Finally, we want to assess sampling
models that give AUROC values of 0.7 and 0.9. We draw many replicates from these two models to estimate the sampling variability of the two metrics. Results of the analysis are summarized in Figure 2 below.
Following the previous section's recommendation, the AUROC on the x-axis serves as the reference scale for quality. The left panel, then, just gives the identity mapping. The right panel shows how the AUSGC changes relative to the AUROC, giving the relative signal of the two metrics across the quality continuum. Unit slope indicates equal signal, a slope less than one favors the AUROC, and a slope greater than one favors the AUSGC. For a given point on the curve repeated draws from the sampling model will estimate each metric's noise distribution. Response curves then allow us to map each metric's uncertainty interval to a model quality interval, which forms the common basis for comparison.
Previously we defined resolving power generically as a metric's scaled sampling uncertainty, but now we need to make this specific. Define a metric's _resolution_ as the width of its 95 percent confidence interval mapped to the quality scale. We will denote metric resolution with the Greek letter \(\kappa\). A microscope's resolution limit is the smallest distance between two points that can still be distinguished as separate entities. Analogously, \(\kappa\) is the minimum distance for statistical discrimination using the \(\alpha=.05\) convention from statistical hypothesis testing. Resolving power is \(1/\kappa\), or just the reciprocal of the resolution distance. With AUROC as the reference scale we can form the following heuristic assessments: A resolving power of 10 is rather poor, a resolving power of 100 is decent, and a resolving power of 1000 is good. Of course, these assessments will depend on the context. A resolving power of 100 is much less impressive for a sample size of one million than for a sample size of ten thousand.
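Given a response curve stored as paired grids of metric and quality (AUROC) values, the mapping from a metric's confidence interval to \(\kappa\) and then to resolving power might look as follows. This is a sketch assuming a monotone response curve; the function name is ours:

```python
import numpy as np

def resolving_power(ci: tuple, metric_grid: np.ndarray,
                    quality_grid: np.ndarray) -> float:
    # Map the 95% CI endpoints onto the AUROC quality scale via the response
    # curve, take the interval width as the resolution kappa, and invert it.
    lo, hi = np.interp(ci, metric_grid, quality_grid)
    return 1.0 / (hi - lo)

# Example from the text: the AUSGC interval [.013, .114] maps to the quality
# interval [.53, .76], giving kappa = .23 and resolving power of about 4.3.
```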
A disadvantage of the resolving power definition is that it requires choosing an arbitrary \(\alpha\) level. Appendix A describes an alternative approach that expresses resolving power as a scaled standard error, thus eliminating this requirement. This comes at a cost of stronger assumptions: The alternate approach assumes that the response curve is well-approximated by a straight line and that the evaluation metric's sampling distribution is roughly symmetric.
Returning to the example, for the AUROC 0.7 model shown in blue, we have:
* An AUROC of .7 with a 95% confidence interval of [.65, .75].
* An AUSGC of .063 with a 95% confidence interval of [.013, .114]. This maps to an AUROC quality interval of [.53, .76].
Figure 2: Response curve example. The two panels are united by the same sequence of models used to construct the quality grid. The AUROC serves as the common reference scale on the x-axis.

The vertical dashed lines in Figure 2 show how the response curves map the confidence limits to a common quality scale. This is trivial for the AUROC since it is the identity mapping, but is included to make clear the mechanics of the method. For the AUROC = 0.7 sampling model we can conclude that the AUSGC is much less precise with \(\kappa_{\rm SGC}=.23\) compared to \(\kappa_{\rm ROC}=.1\). Turning to the 0.9 AUROC model shown in orange, we have:
* An AUROC of.9 with a 95% confidence interval of [.85,.95]
* An AUSGC of.4 with a 95% confidence interval of [.35,.45]. This maps to an AUROC quality interval of [.89,.91].
Note that the confidence intervals in the original metrics have stayed the same width at.1 for both the AUROC and the AUSGC. However, the AUSGC is now in a steeper region of the curve, so its signal-to-noise ratio has improved. As a result, we obtain \(\kappa_{\rm SGC}=.02\), giving the AUSGC much better model resolution than the AUROC. From this analysis we can conclude that the AUSGC is only "super-great" when the search space spans a region of high-quality models.
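To make the mechanics concrete before moving on, the following Python sketch maps a metric's confidence limits through a response curve by interpolation; the curve and interval endpoints are toy stand-ins rather than the values plotted in Figure 2:

```python
import numpy as np

def resolution_on_quality_scale(metric_grid, auroc_grid, ci_low, ci_high):
    """Map a metric's CI through the response curve; return kappa."""
    # np.interp inverts the curve, assuming it is monotone increasing.
    q_low = np.interp(ci_low, metric_grid, auroc_grid)
    q_high = np.interp(ci_high, metric_grid, auroc_grid)
    return q_high - q_low  # kappa: CI width in AUROC units

# Toy stand-in for the right panel of Figure 2 (not the actual curve).
auroc_grid = np.linspace(0.5, 1.0, 1000)
ausgc_grid = 4.0 * (auroc_grid - 0.5) ** 2   # an illustrative convex response
kappa = resolution_on_quality_scale(ausgc_grid, auroc_grid, 0.013, 0.114)
print(kappa, 1.0 / kappa)                     # resolution and resolving power
```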
## 5 Binormal model
The binormal model, as the most commonly used model in ROC analysis, serves as a good initial application of the resolving power approach. All code and simulation data used in the paper are available on GitHub.2 We now apply the four steps of resolving power.
Footnote 2: [https://github.com/colinbeam/resolving_power](https://github.com/colinbeam/resolving_power)
**Step 1: The sampling model.** Assume a binormal model where negative class scores have a standard normal \(\mathcal{N}(0,1)\) distribution and positive class scores have \(\mathcal{N}(\delta_{i},1)\) distributions. The analysis explores a range of prevalence comprising the values [.01,.05,.10,.20,.30,.40,.50], which is the same set used by Mazzanti (2020). We explore a moderately sized classification task of 10,000 instances, so the lowest prevalence condition has 100 instances in the positive class.
**Step 2: Response curves.** We create a fine grid of improving models by increasing the distance \(\delta_{i}\) between distributions. The grid begins with the random classifier AUROC =.5 and ranges to a max AUROC =.99995. Each \(\delta_{i}\) is chosen to create.00005 AUROC increments between grid points. Since we have fixed three of the four binormal model parameters, we can find the shift parameter as a function of the target AUROC value (see Marzban (2004) for details). For our model it is:
\[\delta_{i}=\sqrt{2}\times\Phi^{-1}(\rm{AUROC}) \tag{1}\]
where \(\Phi^{-1}\) is the inverse cumulative standard normal distribution and \(\delta_{i}\) is the shift parameter for each grid point. Note that an evenly spaced AUROC grid will require progressively larger shifts between class distributions as model quality increases. Next, we need to find the AUPRC values associated with each AUROC grid point. The AUPRC can be found from a binormal model via numerical approximation. Let \(\alpha\) represent the outcome prevalence and \(\Phi_{+}\) and \(\Phi_{-}\) represent the cumulative Gaussian distributions for the positive and negative classes, respectively. Brodersen et al. (2010) derive the PR curve by finding precision (PPV) as a function of recall (TPR):
\[\rm{PPV}=\frac{\alpha TPR}{\alpha TPR+(1-\alpha)\left(1-\Phi_{-}\left(\Phi_{ +}^{-1}(1-TPR)\right)\right)} \tag{2}\]
And to find the AUPRC they numerically approximate the integral:
\[{\rm AUPRC}=\int_{0}^{1}{\rm PPV(TPR)}\,d{\rm TPR} \tag{3}\]
To summarize the steps: First we create an evenly spaced grid of AUROC values using the implied shift parameter values from equation (1). We then use the shift values in equation (2), specifically for the \(\Phi_{+}\) parameterization. This gives us the PR curve so that we may use equation (3) to find the associated AUPRC value.
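To make these steps concrete, here is a minimal Python sketch of the grid construction under the binormal assumptions. It uses SciPy for \(\Phi^{-1}\) and the numerical integral, a coarser grid spacing than the paper's .00005 increments to keep the sketch fast, and a guard at the TPR = 0 endpoint; the function names are ours, not from the paper's repository:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def binormal_auprc(delta, prevalence):
    """AUPRC for N(0,1) negatives and N(delta,1) positives, eqs. (2)-(3)."""
    def ppv(tpr):
        c = norm.ppf(1.0 - tpr, loc=delta)   # threshold giving recall = TPR
        fpr = 1.0 - norm.cdf(c)              # 1 - Phi_-(c)
        denom = prevalence * tpr + (1.0 - prevalence) * fpr
        return prevalence * tpr / denom if denom > 0 else 1.0
    return quad(ppv, 0.0, 1.0)[0]            # eq. (3) by numerical integration

# Eq. (1): shift parameters for an evenly spaced AUROC grid.
auroc_grid = np.arange(0.5, 1.0, 0.0005)
deltas = np.sqrt(2.0) * norm.ppf(auroc_grid)
auprc_grid = np.array([binormal_auprc(d, prevalence=0.10) for d in deltas])
```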
**Step 3: Random sampling.** We wish to assess a range of quality values by evaluating models with AUROCs of \([.65,.75,.85,.95]\). In the figures below these models are respectively labeled "Poor," "Fair," "Good," and "Excellent."
Figure 3 shows the binormal response curves for each condition. The relationship between metrics becomes more curvilinear as prevalence decreases. This implies that, all else equal, the AUPRC will be relatively more discriminating among higher quality models applied within low prevalence contexts. The response curve for a prevalence of.5 is approximately a straight line with an intercept of zero and a slope of one--the identity mapping. Thus, the AUPRC and AUROC are estimating the same quantity but by using different formulas. In this condition, then, differences in resolving power will be due to differences from sampling error alone.
For the four points of model quality we take 10,000 random samples from each implied binormal model and estimate AUROC and AUPRC values with the PRROC R package (Grau, Grosse, and Keilwagen, 2015). The AUPRC is estimated using the Davis and Goadrich method (Davis and Goadrich, 2006). Ninety-five percent confidence intervals are found from the.025 and.975 quantile values of the simulation samples.
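A rough Python analogue of this sampling step is sketched below; note that scikit-learn's `average_precision_score` is a step-wise AUPRC estimator rather than the Davis and Goadrich interpolation used by PRROC, so the estimates will differ slightly:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def sample_metrics(delta, prevalence, n, reps=10_000):
    """Draw replicates from the binormal model; return AUROC/AUPRC samples."""
    n_pos = int(round(n * prevalence))
    n_neg = n - n_pos
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    aurocs, auprcs = np.empty(reps), np.empty(reps)
    for r in range(reps):
        scores = np.concatenate([rng.normal(delta, 1.0, n_pos),
                                 rng.normal(0.0, 1.0, n_neg)])
        aurocs[r] = roc_auc_score(y, scores)
        auprcs[r] = average_precision_score(y, scores)
    return aurocs, auprcs

# delta of about .74 corresponds to AUROC = .7 via eq. (1).
aurocs, auprcs = sample_metrics(delta=0.74, prevalence=0.1, n=10_000, reps=1000)
ci = np.quantile(aurocs, [0.025, 0.975])  # percentile CI, as in the paper
```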
**Step 4: Comparison.** For the final step we use the curves in Figure 3 to map the AUPRC 95 percent confidence interval values to the AUROC scale. We then find the relative difference in metric resolution with the AUROC as the baseline (which is equal to the relative difference in resolving power with the AUPRC as baseline):
\[\Delta=\frac{\kappa_{\text{PRC}}-\kappa_{\text{ROC}}}{\kappa_{\text{ROC}}}= \frac{1/\kappa_{\text{ROC}}-1/\kappa_{\text{PRC}}}{1/\kappa_{\text{PRC}}}\]
To smooth out variability in the estimates, the simulation was repeated three times and estimates were averaged across runs. Simulation results are shown in Figure 4.
Beginning with the prevalence =.5 "identity mapping" condition, we see that the AUPRC is usually around 10 percent more variable than the AUROC, though the disadvantage is smaller in the "Excellent" model condition. Since the grid was constructed using equal AUROC increments, an equivalent statement is that AUPRC confidence intervals contain 10 percent more models. In the remaining conditions the AUPRC suffers a greater disadvantage in the flatter portions of the response curves, corresponding to contexts of low prevalence and poor model quality. Specifically, the AUPRC is at a disadvantage for all poor (AUROC =.65), fair (AUROC =.75), and good (AUROC =.85) models across all levels of prevalence. AUPRC resolution
Figure 3: Mapping between AUROC and AUPRC for the binormal model.
widths are typically about 10 percent larger, though in the flattest portion of the response curve--the low prevalence and low model quality condition--the disadvantage is around 30 percent. The AUPRC has better resolving power only for excellent models (AUROC =.95) applied to moderately to strongly imbalanced datasets (prevalence of.2 and below).
Figure 4 shows relative differences, but it is also important to consider how absolute uncertainty varies across conditions. Figure 5 explores these relationships using the approximate standard error formula for the AUROC from Hanley and McNeil (1982). The standard error is strongly decreasing in prevalence, shown by the separation between lines, and is especially large in the.01 condition, making relative imprecision even more costly in absolute terms. The standard error is mostly decreasing in model quality, though, interestingly, it increases slightly from an AUROC of 0.5 before reaching a maximum around 0.6. As an aside, the normal-approximation confidence intervals using the Figure 5 standard errors are typically close to the simulation confidence intervals. For a prevalence of.01 and an AUROC of.65, the simulation 95 percent confidence interval is [0.596, 0.702] while the normal approximation is [0.591, 0.709]. The approximation does become worse as the AUROC increases: in the prevalence =.01 and AUROC =.95 condition, the simulation confidence interval is [0.929, 0.967] while the normal approximation is [0.92, 0.98]. The adequacy of the normal approximation bears on the utility of the alternative method for estimating resolving power, described in Appendix A.
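For reference, the Hanley and McNeil (1982) approximation is simple to compute. The sketch below reproduces the normal-approximation interval quoted above for the prevalence =.01 and AUROC =.65 condition:

```python
import numpy as np

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Approximate AUROC standard error (Hanley & McNeil, 1982)."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc**2 / (1.0 + auc)
    var = (auc * (1.0 - auc)
           + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    return np.sqrt(var)

# Prevalence .01 at N = 10,000: 100 positives, 9,900 negatives.
se = hanley_mcneil_se(0.65, n_pos=100, n_neg=9_900)
print(0.65 - 1.96 * se, 0.65 + 1.96 * se)  # approx. [0.591, 0.709]
```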
Thus, for moderately sized (N = 10,000) classification tasks the AUROC will typically provide better resolution. Importantly, these results are essentially unchanged across sample size magnitudes. For both a magnitude smaller (N = 1,000) and larger (N = 100,000), the AUROC is generally better, with the AUPRC showing an advantage only among excellent models with an outcome prevalence of 20 percent or less. Appendix B presents results for these additional scenarios.
If risk scores follow a binormal distribution and model improvement results in additive shifts of these scores, then it will be best to use the metric with the greatest resolving power for model search. Yet since these assumptions will never fully be met in practice, this section's results should be taken as general guidance to be weighed against other criteria, not as prescriptive rules. For instance, since the AUPRC suffers only a modest disadvantage for "good" models with moderate prevalence, it may still be preferable because of the greater weight it assigns to the early retrieval area.
It is uncertain how robust this section's guidance is to deviations from the binormal model. One way to address this concern is to replace the binormal model with a domain-specific data-generating process. An example of this approach from network analysis was discussed above, where Zhou (2023) constructs networks with fixed link connection probabilities that are perturbed by additive, independent noise. This framework is amenable to a resolving power analysis. In fact, Zhou's Figure 1 essentially displays response curves by showing how the AUROC, AUPRC, and balanced precision respond to changes in the noise parameter values
Figure 4: Relative metric resolution by outcome prevalence and model quality for a binormal model with a sample size of N = 10,000. At each level of model quality 10,000 simulations are taken from the sampling model. Confidence limits are found from the.025 and.975 quantile values of the simulation samples.
that determine model quality.
Most applications will not be able to draw upon a quantitative framework to build domain-specific sampling models. An alternative strategy is to start with a dataset and a baseline model and then build the sampling model from an initial set of risk scores. This empirically driven approach is explored in the next section.
## 6 Empirical sampling models
This section shows how specific problem information can be incorporated into the resolving power approach. We will explore an example task where the aim is to predict 30-day hospital readmissions among diabetes patients using features such as patient demographics, prior utilization, diagnoses, lab tests, and medications. The data for this example can be found at the UCI Machine Learning Repository.3 After applying recommended restrictions, the data includes 69,973 total records with 6,277 readmissions for an outcome prevalence of about 9 percent. So this is an example of an imbalanced class problem for which the AUPRC is often recommended.
Footnote 3: For access and a description of the original dataset go to [https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008). The post-processed data can be found at the GitHub address listed above.
Now how might we use the data to guide our choice of a sampling model? A seemingly sensible approach is to fit an initial classifier and then use its risk scores to inform the choice. The initial model might be the simplest algorithm among a set of candidates, or it could be a preferred algorithm using its default hyperparameters. We may then construct a sampling model from the empirical distribution (the set of outcomes and risk scores) in a couple of different ways. One is to find a parametric model that gives a good approximation to the empirical distribution. Another is to treat the empirical distribution as the population, as is done in resampling methods such as bootstrapping.
For the readmissions data, we use a simple logistic regression as the initial model, estimating risk scores using 5-fold stratified cross-validation. The estimated AUROC is.646 for this initial model. By the previous section's taxonomy, this is a "poor" model with about a 10 percent outcome prevalence. So the binormal model results suggest that we should prefer the AUROC to the AUPRC. The distributions of patient effects are shown in Figure 6, with a rug plot and density estimates on top and Q-Q norm plots below.
A normal approximation does not appear appropriate as both the positive and negative class have large clusters of scores in the lower tail. We could hunt for a better parametric approximation--perhaps some form
Figure 5: The relationship between AUROC and its standard error for different levels of outcome prevalence. The standard error is found using the approximation formula from Hanley and McNeil (1982).
of mixture distribution--but instead we will use the empirical distribution as the population sampling model. Moving to the second step, how can we use an empirical sampling model to construct a response curve? A simple option is to add small increments to all positive class scores, which is analogous to the approach we used with the binormal model. Incremental improvement will then be concentrated among the negative and positive cases with the closest risk scores, wherever they may reside in the distribution. This seems reasonable as it assumes that marginally better models will first amend the ranking of cases that need the smallest adjustments.
The previous section created an evenly-spaced grid of AUROC values using binormal model analytic results. An evenly-spaced grid is also possible for an empirical distribution: To begin, suppose there are \(n^{+}\) positive cases with risk scores \(r_{i}^{+}\) for \(i\in\{1,...,n^{+}\}\) and \(n^{-}\) negative cases with risk scores \(r_{j}^{-}\) for \(j\in\{1,...,n^{-}\}\). Further suppose all risk scores are unique and that the classifier is not perfect, so \(r_{i}^{+}<r_{j}^{-}\) for at least one \((i,j)\) pair. This implies that \(0.5\leq\text{AUROC}<1\) for the initial AUROC value. We will build the grid in the direction of improving AUROC, though it is straightforward to adapt the process for decreasing AUROC. First, find \(\delta_{1}=\min\left(r_{j}^{-}-r_{i}^{+}|r_{j}^{-}>r_{i}^{+}\right)\), so \(\delta_{1}\) is the smallest positive difference in risk scores between two cases that are out-of-order such that the negative case is assigned higher risk than the positive case. Similarly, we can find \(\delta_{2}\) as the second smallest difference, \(\delta_{3}\) as the third smallest, etc. Now if we add \(\delta_{1}+\epsilon\) to all positive class risk scores where \(\delta_{1}<\delta_{1}+\epsilon<\delta_{2}\) we will shift the positive distribution just enough to resolve one pair of out-of-order risk scores, but no more. Then by the Riemann sum method we know that the AUROC will improve by \(\frac{1}{n^{+}}\times\frac{1}{n^{-}}\). The result can also be deduced from the probabilistic interpretation of the AUROC since there are \(n^{+}\times n^{-}\) unique ordered pairs of positive and negative scores, which forms the number of events in the sample space, and so resolving one pair increases the probability by \(\frac{1}{n^{+}\times n^{-}}\). Now if instead we had added \(\delta_{2}+\epsilon\) with \(\delta_{2}<\delta_{2}+\epsilon<\delta_{3}\) then it would have fixed two pairs of scores and the improvement would have been \(\frac{2}{n^{+}\times n^{-}}\). Thus, we can precisely increase or decrease the AUROC in \(\frac{1}{n^{+}\times n^{-}}\) increments. The result is useful for determining an initial increment to shift the class scores. Achieving a fixed increment across the grid requires updating the score distance calculations after each step, but this is computationally costly and will typically be unnecessary. Instead, it is most important to choose an initial
Figure 6: Top panel: Estimated logit effects with kernel density estimates for the positive and negative class. Bottom panel: Q-Q norm plots of sample versus theoretical quantiles. Straight lines pass through the 1st and 3rd quartiles.
increment that creates a high density of points across the grid range so that the response curve may be reliably estimated.
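The increment calculation can be sketched directly from the definitions above. This is a brute-force version that is quadratic in memory, so a sort-based variant would be preferable at the scale of the readmissions data; the scores here are synthetic:

```python
import numpy as np

def initial_shift(pos_scores, neg_scores):
    """Shift resolving exactly one out-of-order pair (brute force).

    delta_1 is the smallest positive gap r_j^- - r_i^+; adding any amount
    strictly between delta_1 and delta_2 to all positive scores raises the
    AUROC by exactly 1 / (n_pos * n_neg).
    """
    gaps = neg_scores[:, None] - pos_scores[None, :]   # all r_j^- - r_i^+
    gaps = np.sort(gaps[gaps > 0])
    delta_1, delta_2 = gaps[0], gaps[1]
    return delta_1 + 0.5 * (delta_2 - delta_1)

# Synthetic scores standing in for the logistic-regression logits.
rng = np.random.default_rng(1)
pos = rng.normal(0.6, 1.0, 200)
neg = rng.normal(0.0, 1.0, 1800)
shift = initial_shift(pos, neg)   # AUROC gain of 1/(200*1800) once applied
```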
Figure 7 shows the response curve constructed from shifting the readmissions class score distributions. The starting model AUROC and AUPRC values of.646 and.166 are shown by the dot. The curve is built by shifting the positive class distribution above and below the starting point, using an initial increment that produces a change of.001 AUROC units. Each empirical distribution along the grid is considered the population, so the associated population AUROC and AUPRC are just the sample values. There are a total of 1000 grid points, which range from.54 to.92 in AUROC and from.12 to.50 in AUPRC. In practice, it is rare to see substantial improvement from initial performance, so these ranges cover a larger space than is expected to be observed during model search. The shape of the curve in Figure 7 is similar to the binormal response curves: for lower AUROC values the slope is relatively flat but then it increases with improving model quality.
Moving to the third step of random sampling, the initial model is the natural choice to evaluate since improvements will be made from this starting point. We generate 10,000 samples from the empirical distribution, fixing the prevalence with stratified sampling, and then find 95 percent confidence intervals. The fourth and final step uses the response curve to estimate and compare metric resolution. Results of the sampling experiment are displayed in Table 2.
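For concreteness, the stratified resampling step might look like the following sketch (names are illustrative, not taken from the paper's repository):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def stratified_percentile_ci(y, scores, reps=10_000, level=0.95, seed=2):
    """Resample within each class (fixing prevalence); percentile CIs."""
    rng = np.random.default_rng(seed)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    stats = np.empty((reps, 2))
    for r in range(reps):
        idx = np.concatenate([rng.choice(pos, pos.size, replace=True),
                              rng.choice(neg, neg.size, replace=True)])
        stats[r] = (roc_auc_score(y[idx], scores[idx]),
                    average_precision_score(y[idx], scores[idx]))
    a = (1.0 - level) / 2.0
    return np.quantile(stats, [a, 1.0 - a], axis=0)  # cols: AUROC, AUPRC
```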
The last row uses the response curve to map AUPRC to the AUROC scale. Recall that metric resolution, \(\kappa\), is just the width of the 95 percent confidence interval in AUROC units. The AUROC has considerably better discrimination with a resolving power that is over 70 percent greater than the AUPRC. The absolute difference in confidence interval widths is about 0.011 AUROC units, which could be substantial relative to the often small improvements achieved during hyperparameter tuning.
The robustness of these results with respect to the additive improvement assumption could be checked with a sensitivity analysis that explores other paths towards class score separation. An extreme alternative would be
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline
**metric** & **Lower CI** & **Upper CI** & \(\kappa\) & **resolving power** \\ \hline
**AUROC** & 0.6391 & 0.6535 & 0.0144 & 69.4 \\ \hline
**AUPRC** & 0.1601 & 0.1732 & NA & NA \\ \hline
**AUPRC to AUROC** & 0.6341 & 0.6591 & 0.0250 & 40.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulation results summary. Lower and upper CI bounds are for 95 percent confidence intervals.
Figure 7: An empirical response curve from the readmissions data logistic regression model. The black point shows baseline performance. The curve is constructed by incrementing positive scores above and below the baseline distribution.
to resolve errors starting with the highest risk scores first and then moving down the rankings. This would yield an immediate strong response for the AUPRC while the AUROC would be unaffected. An opposite extreme would be to improve all the lowest ranked cases first. There are also myriad intermediate strategies. One could use a mechanism similar to Mazzanti (2020), with improvements initially evenly distributed across the score distribution, but gradually becoming concentrated in the early retrieval area.
## 7 Conclusion
Resolving power allows for systematic comparison of threshold-free evaluation metrics. Central to the method is the specification of a risk score sampling model that is used to both manipulate model quality and probe sampling variability. The quality dimension, which serves as the standard for comparison, is formed as an ordered sequence of class score distributions with decreasing overlap. A response curve shows how an evaluation metric responds to changes in classifier quality (its signal). A metric's sampling variability is assessed at a point on the curve by random draws from the sampling model (its noise). The sequence of score distributions forming the quality dimension is necessarily arbitrary, so this paper elects to use simple additive shifts to increase separation. One may also conduct sensitivity analyses that test robustness to alternate sequences of distributions. Resolving power is classifier-agnostic since it operates on risk scores that are downstream of the classification model. Binormal model results provide general rules-of-thumb for when the AUROC versus the AUPRC will have relatively stronger resolving power. The empirical method allows researchers to use their data and an initial classifier of their choice to construct the sampling model used for resolving power estimation. |
2302.14349 | Advantages of Asynchronous Measurement-Device-Independent Quantum Key
Distribution in Intercity Networks | The new variant of measurement-device-independent quantum key distribution
(MDI-QKD), called asynchronous MDI-QKD or mode-pairing MDI-QKD, offers similar
repeater-like rate-loss scaling but has the advantage of simple technology
implementation by exploiting an innovative post-measurement pairing technique.
We herein present an evaluation of the practical aspects of decoy-state
asynchronous MDI-QKD. To determine its effectiveness, we analyze the optimal
method of decoy-state calculation and examine the impact of asymmetrical
channels and multi-user networks. Our simulations show that, under realistic
conditions, asynchronous MDI-QKD can furnish the highest key rate with MDI
security as compared to other QKD protocols over distances ranging from 50 km
to 480 km. At fiber distances of 50 km and 100 km, the key rates attain 6.02
Mbps and 2.29 Mbps respectively, which are sufficient to facilitate real-time
one-time-pad video encryption. Our findings indicate that experimental
implementation of asynchronous MDI-QKD in intercity networks can be both
practical and efficient. | Yuan-Mei Xie, Jun-Lin Bai, Yu-Shuo Lu, Chen-Xun Weng, Hua-Lei Yin, Zeng-Bing Chen | 2023-02-28T06:51:34Z | http://arxiv.org/abs/2302.14349v3 | Advantages of Asynchronous Measurement-Device-Independent Quantum Key Distribution in Intercity Networks
###### Abstract
The new variant of measurement-device-independent quantum key distribution (MDI-QKD), asynchronous MDI-QKD, offers similar repeater-like rate-loss scaling but has the advantage of simple technology implementation by exploiting an innovative post-measurement pairing technique. We herein present an evaluation of the practical aspects of decoy-state asynchronous MDI-QKD. To determine its effectiveness, we analyze the optimal method of decoy-state calculation and examine the impact of asymmetrical channels and multi-user networks. Our simulations show that, under realistic conditions, asynchronous MDI-QKD can furnish the highest key rate with MDI security as compared to other QKD protocols over distances ranging from 50 km to 480 km. At fiber distances of 50 km and 100 km, the key rates attain 6.02 Mbps and 2.29 Mbps respectively, which are sufficient to facilitate real-time one-time-pad video encryption. Our findings indicate that experimental implementation of asynchronous MDI-QKD in intercity networks can be both practical and efficient.
## I Introduction
Quantum key distribution (QKD) [1; 2] enables remote two parties to share secret keys protected from eavesdropping by the laws of physics. In the past forty years, QKD has achieved rapid development in terms of secret key rates [3; 4; 5; 6], transmission distance [7; 8; 9] and network deployment [10; 11; 12; 13]. Although the security of QKD has been proven in theory, the imperfections of realistic devices lead to various security loopholes [14; 15; 16], especially in detection [15].
Fortunately, measurement-device-independent (MDI) QKD was proposed [17]; it entrusts two-photon Bell state measurements to an untrusted intermediate node, thus solving all security issues on the detection side [18]. Extensive work demonstrates the potential of MDI-QKD, including experimental breakthroughs [19; 20; 21; 22; 23; 24; 25], on-chip implementations [26; 27; 28], and continuous theoretical developments [29; 30; 31; 32; 33; 34]. Moreover, users in an MDI-QKD network can share expensive detectors, and the topology of MDI-QKD is naturally suitable for deployment in star-type networks. Additionally, side-channel-secure QKD has recently been experimentally realized, which is not only MDI but also immune to potential source imperfections [35; 36]. However, the key rates of most forms of QKD are fundamentally bounded by the secret key capacity of repeaterless QKD [37; 38; 39; 40] due to photon loss in the channel. A rigorous theorem expresses this limit, the absolute repeaterless secret key capacity (SKC\({}_{0}\)), as \(R=-\log_{2}(1-\eta)\)[39], i.e., the key rate \(R\) scales linearly with the channel transmittance \(\eta\). Despite some progress in overcoming this bound [41; 42; 43; 44], such devices remain elusive.
Twin-field (TF) QKD [45] and its variants [46; 47; 48; 49; 50; 51] were proposed to break this bound. In these protocols, the untrusted intermediate node performs Bell state measurements based on single-photon interference rather than two-photon interference. Numerous works have advanced the theory with finite-key analysis [52; 53; 54; 55]. Ref. [56] applies entangled coherent state sources as untrusted relays to further increase the transmission distance of TF-QKD by reducing the signal-to-noise ratio at the measurement nodes. Several experimental achievements have shown the performance of twin-field QKD over large channel loss [57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70], and the maximum distance of TF-QKD has been experimentally extended to 830 kilometers [68]. The idea of single-photon interference has also been implemented in device-independent QKD [71]. Nonetheless, as TF-QKD requires stable long-distance single-photon interference, phase-tracking and phase-locking techniques are indispensable [45]. These techniques are complicated and expensive, and usually degrade system performance. For example, phase-tracking technology requires sending strong reference light, which reduces the effective clock frequency of the quantum signal and increases background noise [61; 62; 67; 68].
Recently, a new variant [72; 73] of MDI-QKD, called asynchronous MDI-QKD [72] (also called mode-pairing MDI-QKD [73]), was proposed. It asynchronously pairs two successful clicks within a long pairing time to establish a two-photon Bell state, thereby breaking SKC\({}_{0}\). Asynchronous MDI-QKD is highly practical and has a noteworthy advantage over TF-QKD in intercity-distance quantum communications, owing to its implementation simplicity and performance. Several exciting experiments have successfully verified the superior performance of asynchronous MDI-QKD with accessible technology. Ref. [74] realizes the experiment with a maximal distance of 407 km without global phase locking. Ref. [75] demonstrates the first asynchronous MDI-QKD that overcomes SKC\({}_{0}\) without global phase tracking and extends the maximal distance to 508 km. However, before asynchronous MDI-QKD can be applied in real life, many issues of practicality necessitate resolution, such
as identifying the optimal number of decoy states, determining the optimal calculation method of decoy states, and assessing the performance in asymmetric channels and networks.
In this work, we address these issues by introducing the joint-constraints technique [76] and new methods for phase error rate estimation to enable higher-rate asynchronous MDI-QKD. By employing the three-intensity protocol alongside an additional _click filtering_ operation--which our results identify as the best-performing choice--we simulate the key rate of asynchronous MDI-QKD in multi-user networks. For a network of five users, asynchronous MDI-QKD results in the key rates of all links surpassing the secret key capacity. Furthermore, using a 4 GHz repetition rate system [68], secret key rates of 6.02 Mbps, 2.29 Mbps, and 0.31 Mbps can be achieved at fiber distances of 50 km, 100 km, and 200 km, respectively. Asynchronous MDI-QKD can achieve the highest key rate in the range of 170 to 480 km, compared with decoy-state QKD [77; 78; 79] and TF-QKD [45]. More importantly, our work clarifies conceptual differences between asynchronous MDI-QKD and its synchronous version (original time-bin MDI-QKD) [80]. Asynchronous MDI-QKD holds the most promising potential as a solution for short-distance quantum communication in the future, owing to its minimal detector requirements and absence of strong light feedback.
## II Protocol Description
Here, we consider an asymmetric asynchronous MDI-QKD protocol using three-intensity settings. The intensity of each laser pulse is randomly set to one of the three intensities \(\mu_{a(b)}\) (signal), \(\nu_{a(b)}\) (decoy) and \(o_{a(b)}\) (vacuum), and the intensities satisfy \(\mu_{a(b)}>\nu_{a(b)}>o_{a(b)}=0\). A successful click is obtained when one and only one detector clicks in a time bin, and we refer to \((k_{a}|k_{b})\) as a successful click when Alice sends intensity \(k_{a}\) and Bob sends \(k_{b}\). The notation \([k_{a}^{\rm tot},k_{b}^{\rm tot}]\) indicates an asynchronous coincidence where the combined intensity in the two time-bins Alice (Bob) sent is \(k_{a}^{\rm tot}\) (\(k_{b}^{\rm tot}\)). The details of the protocol are presented as follows.
_1. Preparation._ For each time bin, Alice chooses a phase value \(\theta_{a}=2\pi M_{a}/M\) with \(M_{a}\in\{0,1,...,M-1\}\) at random. Then, she selects an intensity choice \(k_{a}\in\{\mu_{a},\nu_{a},o_{a}\}\) with probabilities \(p_{\mu_{a}}\), \(p_{\nu_{a}}\), and \(p_{o_{a}}=1-p_{\mu_{a}}-p_{\nu_{a}}\), respectively. Alice prepares a weak laser pulse \(|e^{{\rm i}\theta_{a}}\sqrt{k_{a}}\rangle\) based on the chosen values. Similarly, Bob prepares a weak coherent pulse \(|e^{{\rm i}\theta_{b}}\sqrt{k_{b}}\rangle\) (\(k_{b}\in\{\mu_{b},\nu_{b},o_{b}\}\)). Finally, Alice and Bob send their optical pulses to Charlie via the quantum channel.
_2. Measurement._ For each time bin, Charlie performs a first-order interference measurement on the two received pulses, and he publicly announces whether a successful click is obtained and which detector (\(D_{L}\) or \(D_{R}\)) clicked. The first two steps will be repeated \(N\) times.
_3. Coincidence pairing._ The clicks that Alice and Bob retained for further processing depend on whether _click filtering_ is applied. If they perform _click filtering_, Alice (Bob) announces whether she (he) applied the decoy intensity \(\nu_{a}\) (\(\nu_{b}\)) to the pulse sent for each event. Then they discard clicks (\(\mu_{a}|\nu_{b}\)) and (\(\nu_{a}|\mu_{b}\)), and keep all other clicks. Otherwise, they keep all clicks.
For all kept clicks, Alice and Bob always pair a click with the nearest one within a time interval \(T_{c}\) to form a successful coincidence. They discard any lone click that fails to find a partner within \(T_{c}\). For each coincidence, Alice (Bob) computes the total intensity sent over the two time bins, \(k_{a}^{\rm tot}\) (\(k_{b}^{\rm tot}\)), and the phase difference between the early (\(e\)) and late (\(l\)) time bins, \(\varphi_{a(b)}=\theta_{a(b)}^{l}-\theta_{a(b)}^{e}\).
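To illustrate, the pairing step can be sketched as a greedy pass over the kept clicks in time order (a simplified Python version of the nearest-click rule; the time-bin values are made up):

```python
def pair_clicks(click_bins, t_c):
    """Greedy sketch of the pairing step: walk the kept clicks in time
    order, pair neighbours separated by at most t_c time bins, and drop
    lone clicks that find no partner."""
    pairs, i = [], 0
    while i + 1 < len(click_bins):
        if click_bins[i + 1] - click_bins[i] <= t_c:
            pairs.append((click_bins[i], click_bins[i + 1]))  # (early, late)
            i += 2  # both clicks are consumed by the coincidence
        else:
            i += 1  # no partner within t_c: discard the lone click
    return pairs

# With t_c = 5, the click in bin 40 finds no partner and is discarded.
print(pair_clicks([1, 3, 12, 14, 40], t_c=5))  # -> [(1, 3), (12, 14)]
```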
_4. Sifting._ Alice and Bob announce their computational results and then discard the data if \(k_{a}^{\rm tot}\geq\mu_{a}+\nu_{a}\) or \(k_{b}^{\rm tot}\geq\mu_{b}+\nu_{b}\). When there is a _click filtering_ operation, we define \(\tilde{k}_{a(b)}=\mu_{a(b)}\); otherwise, we define \(\tilde{k}_{a(b)}\in\{\mu_{a(b)},\nu_{a(b)}\}\). For \([\tilde{k}_{a},\tilde{k}_{b}]\) coincidence, Alice (Bob) extracts a \(\mathbf{Z}\)-basis bit 0 (1) if she (he) sends \(\tilde{k}_{a(b)}\) in the early time bin and \(o_{a(b)}\) in the late time bin. Otherwise, Alice (Bob) extracts an opposite bit. Note that we use four intensity groups (\([\mu_{a},\mu_{b}],[\mu_{a},\nu_{b}],[\nu_{a},\nu_{b}],[\nu_{a},\mu_{b}]\)) for the key generation when _click filtering_ is not applied, while existing MDI-QKD protocols typically use only one intensity group. For \([2\nu_{a},2\nu_{b}]\) coincidence, Alice and Bob calculate the relative phase difference \(\varphi_{ab}=(\varphi_{a}-\varphi_{b})\mod 2\pi\). They extract an \(\mathbf{X}\)-basis bit 0 if \(\varphi_{ab}=0\) or \(\pi\). Afterwards, Bob flips his bit value, if \(\varphi_{ab}=0\) and both detectors clicked, or \(\varphi_{ab}=\pi\) and the same detector clicked twice. The coincidence with other phase differences is discarded.
_5. Parameter estimation._ Alice and Bob group their data into different sets \(\mathcal{S}_{[k_{a}^{\rm tot},k_{b}^{\rm tot}]}\) and count the corresponding number \(n_{[k_{a}^{\rm tot},k_{b}^{\rm tot}]}\). By using all the raw data they have obtained, Alice and Bob estimate the necessary parameters to calculate the key rate. They estimate the number of vacuum events, \(s_{0}^{z}\), the number of single-photon pair events in the \(\mathbf{Z}\) basis, \(s_{11}^{z}\), the bit error rate of the single-photon pairs in the \(\mathbf{X}\) basis, \(e_{11}^{x}\), and the phase error rate associated with the single-photon pair events in the \(\mathbf{Z}\) basis, \(\phi_{11}^{z}\).
_6. Key distillation._ Alice and Bob perform an error correction step that reveals at most \(\lambda_{\rm EC}\) bits of information. Under the condition of passing the checks in the error correction and privacy amplification steps, a \(\varepsilon_{\rm tot}\)-secret key of length
\[\begin{split}\ell=&\ \underline{s}_{0}^{z}+\underline{s}_{11}^{z}\left[1-H_{2}\left(\overline{\phi}_{11}^{z}\right)\right]-\lambda_{\rm EC}\\ &-\log_{2}\frac{2}{\varepsilon_{\rm cor}}-2\log_{2}\frac{2}{\varepsilon^{\prime}\hat{\varepsilon}}-2\log_{2}\frac{1}{2\varepsilon_{\rm PA}},\end{split} \tag{1}\]
can be extracted, where \(\underline{x}\) and \(\overline{x}\) are the lower and upper bounds of the observed value \(x\), respectively; \(H_{2}(x)=-x\log_{2}x-(1-x)\log_{2}(1-x)\) is the binary Shannon entropy function. Using the entropic uncertainty relation [75], the total secure coefficient
is \(\varepsilon_{\rm tot}=2(\varepsilon^{\prime}+2\varepsilon_{e}+\hat{\varepsilon})+\varepsilon_{0}+\varepsilon_{1}+\varepsilon_{\beta}+\varepsilon_{\rm PA}+\varepsilon_{\rm cor}\), where \(\varepsilon_{\rm cor}\) is the failure probability of error correction; \(\varepsilon_{\rm PA}\) is the failure probability of privacy amplification; \(\hat{\varepsilon}\) and \(\varepsilon^{\prime}\) are the coefficients used when applying a chain rule for smooth entropies; and \(\varepsilon_{0}\), \(\varepsilon_{1}\), and \(\varepsilon_{\beta}\) are the failure probabilities for estimating \(s_{0}^{z}\), \(s_{11}^{z}\), and \(e_{11}^{x}\), respectively.
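As an illustration of how the pieces of Eq. (1) combine, here is a minimal Python sketch; the inputs stand for already-estimated bounds, and all numbers in the example call are placeholders rather than values from our simulations:

```python
import numpy as np

def h2(x):
    """Binary Shannon entropy H_2(x), clipped for numerical stability."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def key_length(s0_z, s11_z, phi11_z, lambda_ec,
               eps_cor, eps_prime, eps_hat, eps_pa):
    """Secret key length l of Eq. (1); inputs are pre-estimated bounds."""
    return (s0_z + s11_z * (1 - h2(phi11_z)) - lambda_ec
            - np.log2(2 / eps_cor)
            - 2 * np.log2(2 / (eps_prime * eps_hat))
            - 2 * np.log2(1 / (2 * eps_pa)))

# Illustrative placeholder values only.
ell = key_length(s0_z=1.2e5, s11_z=8.0e5, phi11_z=0.04, lambda_ec=6.5e5,
                 eps_cor=1e-15, eps_prime=1e-10, eps_hat=1e-10, eps_pa=1e-10)
```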
## III The key rate formula
In the following description, let \(x^{*}\) be the expected value of \(x\). In the asynchronous MDI-QKD protocol, \([\tilde{k}_{a},\tilde{k}_{b}]\) coincidence can be used to generate keys. Since the binary Shannon entropy function is concave, we can correct errors for each group \([\tilde{k}_{a},\tilde{k}_{b}]\) separately to reduce the consumption of information, which does not affect the security of the protocol. Hence the amount of information consumed in error correction can be written as
\[\lambda_{\text{EC}}=\sum_{\tilde{k}_{a},\tilde{k}_{b}}\left[n_{[\tilde{k}_{a}, \tilde{k}_{b}]}fH_{2}\left(E_{[\tilde{k}_{a},\tilde{k}_{b}]}\right)\right], \tag{2}\]
where \(f\) is the error correction efficiency and \(E_{[\tilde{k}_{a},\tilde{k}_{b}]}\) is the bit error rate of \([\tilde{k}_{a},\tilde{k}_{b}]\) coincidence. Because vacuum states contain no information about their bit values, in the asymmetric case we can separately extract higher-valued vacuum components in each group \([\tilde{k}_{a},\tilde{k}_{b}]\) to obtain higher key rates. The total number of vacuum components in the \(\mathbf{Z}\) basis can be given by
\[s_{0}^{z*}=\sum_{\tilde{k}_{a},\tilde{k}_{b}}\max\left\{\frac{e^{-\tilde{k}_{a}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}}{p_{[o_{a},\tilde{k}_{b}]}}\underline{n}_{[o_{a},\tilde{k}_{b}]}^{*},\ \frac{e^{-\tilde{k}_{b}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}}{p_{[\tilde{k}_{a},o_{b}]}}\underline{n}_{[\tilde{k}_{a},o_{b}]}^{*}\right\}. \tag{3}\]
Here \(p_{[k_{a}^{\text{tot}},k_{b}^{\text{tot}}]}\) is the probability that \([k_{a}^{\text{tot}},k_{b}^{\text{tot}}]\) coincidence occurs given the coincidence event, which is
\[p_{[k_{a}^{\rm tot},k_{b}^{\rm tot}]}=\sum_{k_{a}^{e}+k_{a}^{l}=k_{a}^{\rm tot}}\ \sum_{k_{b}^{e}+k_{b}^{l}=k_{b}^{\rm tot}}\frac{p_{k_{a}^{e}}p_{k_{b}^{e}}}{p_{s}}\frac{p_{k_{a}^{l}}p_{k_{b}^{l}}}{p_{s}}. \tag{4}\]
When _click filtering_ is not applied, \(p_{s}=1\), otherwise \(p_{s}=1-p_{\mu_{a}}p_{\nu_{b}}-p_{\nu_{a}}p_{\mu_{b}}\).
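As a small illustration of Eq. (4), the sketch below enumerates the early/late intensity decompositions directly, following the formula as written (i.e., with \(p_{s}=1\), without click filtering); the intensities and send probabilities are illustrative placeholders, not the optimized settings used later:

```python
from itertools import product
from math import isclose

def pairing_probability(ka_tot, kb_tot, p_a, p_b, p_s=1.0):
    """p_[ka_tot, kb_tot] of Eq. (4): sum over early/late intensity splits.

    p_a and p_b map each intensity value (mu, nu, 0) to its send probability.
    """
    total = 0.0
    for ka_e, ka_l in product(p_a, repeat=2):          # early/late for Alice
        for kb_e, kb_l in product(p_b, repeat=2):      # early/late for Bob
            if isclose(ka_e + ka_l, ka_tot) and isclose(kb_e + kb_l, kb_tot):
                total += (p_a[ka_e] * p_b[kb_e] / p_s) \
                         * (p_a[ka_l] * p_b[kb_l] / p_s)
    return total

# Illustrative three-intensity settings (not optimized values).
p_a = {0.4: 0.5, 0.1: 0.3, 0.0: 0.2}  # mu_a, nu_a, o_a
p_b = {0.4: 0.5, 0.1: 0.3, 0.0: 0.2}  # mu_b, nu_b, o_b
print(pairing_probability(0.4, 0.4, p_a, p_b))  # a [mu_a, mu_b] coincidence
```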
Next, we need to estimate the number and phase error rate of the single-photon pairs in the \(\mathbf{Z}\) basis, \(s_{11}^{z}\) and \(\phi_{11}^{z}\). Because the density matrices of single-photon pairs in the \(\mathbf{Z}\) basis are equal to those in the \(\mathbf{X}\) basis, the single-photon pair yields in the two bases are equal. For the same reason, we can estimate the single-photon pair phase error rate in the \(\mathbf{Z}\) basis according to the single photon-pair bit error rate in the \(\mathbf{X}\) basis. Therefore, in all single-photon pairs, the expected ratio of different intensity settings is the same as that of the emitted states,
\[\frac{s_{11}^{z*}}{s_{11}^{x*}}=\frac{t_{11}^{z*}}{t_{11}^{x*}}=\frac{\sum_{\tilde{k}_{a},\tilde{k}_{b}}\left(\tilde{k}_{a}\tilde{k}_{b}e^{-\tilde{k}_{a}-\tilde{k}_{b}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}\right)}{4\nu_{a}\nu_{b}e^{-2\nu_{a}-2\nu_{b}}p_{[2\nu_{a},2\nu_{b}]}}. \tag{5}\]
Then we estimate the lower bound of \(s_{11}^{z*}\) using the decoy-state method [77, 78, 79], which can be given by
\[\begin{split}\underline{s}_{11}^{z*}=&\ \frac{\sum_{\tilde{k}_{a},\tilde{k}_{b}}\left(\tilde{k}_{a}\tilde{k}_{b}e^{-\tilde{k}_{a}-\tilde{k}_{b}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}\right)}{\nu_{a}\nu_{b}\mu_{a}\mu_{b}(\mu^{\prime}-\nu^{\prime})}\\ &\times\left[\mu_{a}\mu_{b}\mu^{\prime}\left(e^{\nu_{a}+\nu_{b}}\frac{\underline{n}_{[\nu_{a},\nu_{b}]}^{*}}{p_{[\nu_{a},\nu_{b}]}}-e^{\nu_{b}}\frac{\overline{n}_{[o_{a},\nu_{b}]}^{*}}{p_{[o_{a},\nu_{b}]}}\right.\right.\\ &\quad\left.\left.-\,e^{\nu_{a}}\frac{\overline{n}_{[\nu_{a},o_{b}]}^{*}}{p_{[\nu_{a},o_{b}]}}+\frac{\underline{n}_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}}\right)\right.\\ &\left.-\,\nu_{a}\nu_{b}\nu^{\prime}\left(e^{\mu_{a}+\mu_{b}}\frac{\overline{n}_{[\mu_{a},\mu_{b}]}^{*}}{p_{[\mu_{a},\mu_{b}]}}-e^{\mu_{b}}\frac{\underline{n}_{[o_{a},\mu_{b}]}^{*}}{p_{[o_{a},\mu_{b}]}}\right.\right.\\ &\quad\left.\left.-\,e^{\mu_{a}}\frac{\underline{n}_{[\mu_{a},o_{b}]}^{*}}{p_{[\mu_{a},o_{b}]}}+\frac{\overline{n}_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}}\right)\right],\end{split} \tag{6}\]
where
\[\begin{cases}\mu^{\prime}=\mu_{a},&\nu^{\prime}=\nu_{a},\quad\text{if}\quad\frac{\mu_{a}}{\mu_{b}}\leq\frac{\nu_{a}}{\nu_{b}},\\ \mu^{\prime}=\mu_{b},&\nu^{\prime}=\nu_{b},\quad\text{if}\quad\frac{\mu_{a}}{\mu_{b}}>\frac{\nu_{a}}{\nu_{b}}.\end{cases} \tag{7}\]
We can use the technique of joint constraints [76] to obtain a tighter estimate of \(s_{11}^{z*}\). The details of the analytic results of the joint constraints are shown in Appendix A. We can then obtain the lower bound of \(s_{11}^{x*}\) with Eq. (5).
The upper bound of the single-photon pair errors of the \(\mathbf{X}\) basis is
\[\overline{t}_{11}^{x}=m_{[2\nu_{a},2\nu_{b}]}-\underline{m}_{[2\nu_{a},2\nu_{b}]}^ {0}, \tag{8}\]
where \(m_{[2\nu_{a},2\nu_{b}]}\) is the observed number of bit errors in the \(\mathbf{X}\) basis, and \(m_{[2\nu_{a},2\nu_{b}]}^{0}\) is the number of bit errors in the \(\mathbf{X}\) basis given that at least one of Alice and Bob sends a vacuum component. The lower bound of the expected value \(m_{[2\nu_{a},2\nu_{b}]}^{0*}\) can be given by
\[\begin{split}\underline{m}_{[2\nu_{a},2\nu_{b}]}^{0*}=&\ \frac{e^{-2\nu_{a}}p_{[2\nu_{a},2\nu_{b}]}}{2p_{[o_{a},2\nu_{b}]}}\underline{n}_{[o_{a},2\nu_{b}]}^{*}+\frac{e^{-2\nu_{b}}p_{[2\nu_{a},2\nu_{b}]}}{2p_{[2\nu_{a},o_{b}]}}\underline{n}_{[2\nu_{a},o_{b}]}^{*}\\ &-\frac{e^{-2\nu_{a}-2\nu_{b}}p_{[2\nu_{a},2\nu_{b}]}}{2p_{[o_{a},o_{b}]}}\overline{n}_{[o_{a},o_{b}]}^{*}.\end{split} \tag{9}\]
Similarly, we obtain the tighter value of \(\underline{m}_{[2\nu_{a},2\nu_{b}]}^{0*}\) under the joint constraints [76]. According to the formula of
random sampling without replacement in Ref. [81], we can obtain the estimated value of the phase error rate of the single-photon pair events in the \(\mathbf{Z}\) basis
\[\overline{\phi}_{11}^{z}=\frac{\overline{t}_{11}^{x}}{\underline{s}_{11}^{x}}+\gamma^{U}\left(\underline{s}_{11}^{z},\ \underline{s}_{11}^{x},\ \frac{\overline{t}_{11}^{x}}{\underline{s}_{11}^{x}},\ \varepsilon_{e}\right), \tag{10}\]
where
\[\gamma^{U}(n,k,\lambda,\epsilon)=\frac{\frac{(1-2\lambda)AG}{n+k}+\sqrt{\frac{A^{2}G^{2}}{(n+k)^{2}}+4\lambda(1-\lambda)G}}{2+2\frac{A^{2}G}{(n+k)^{2}}}, \tag{11}\]
with \(A=\max\{n,k\}\) and \(G=\frac{n+k}{nk}\ln\frac{n+k}{2\pi nk\lambda(1-\lambda)\epsilon^{2}}\).
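Eq. (11) transcribes directly into code; the sketch below is a straightforward implementation, with illustrative counts in the example call:

```python
import numpy as np

def gamma_u(n, k, lam, eps):
    """Random-sampling-without-replacement bound, Eq. (11)."""
    a = max(n, k)
    g = (n + k) / (n * k) * np.log((n + k) / (2 * np.pi * n * k
                                              * lam * (1 - lam) * eps**2))
    num = (1 - 2 * lam) * a * g / (n + k) + np.sqrt(
        a**2 * g**2 / (n + k) ** 2 + 4 * lam * (1 - lam) * g)
    return num / (2 + 2 * a**2 * g / (n + k) ** 2)

# Illustrative call: the correction added to the error ratio in Eq. (10).
print(gamma_u(n=5_000, k=5_000, lam=0.03, eps=1e-10))
```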
From another viewpoint, we can calculate \(t_{11}^{x*}\) by
\[\overline{t}_{11}^{x*}=m_{[2\nu_{a},2\nu_{b}]}^{*}-\underline{m}_{[2\nu_{a},2 \nu_{b}]}^{0*}, \tag{12}\]
and obtain \(\overline{t}_{11}^{z*}\) by Eq. (5). Then we convert these expected values into the observed values \(\overline{t}_{11}^{z}\) and \(\underline{s}_{11}^{z}\) with the Chernoff bound (see Eqs. (11) and (12) in Appendix E). The upper bound of the single-photon pair phase error rate in the \(\mathbf{Z}\) basis can be written as
\[\overline{\phi}_{11}^{z}=\frac{\overline{t}_{11}^{z}}{\underline{s}_{11}^{z}}. \tag{13}\]
## IV Performance
### Optimal decoy-state method
For the evaluation, we numerically optimize the secret key rate \(R:=\ell F/N\) of asynchronous MDI-QKD with Eq. (10) (original method [75]) and Eq. (13) (new method); the results are shown in Fig. 1. Here \(F\) is the system clock frequency. In this work, we set the failure parameters \(\varepsilon_{\text{cor}}\), \(\varepsilon^{\prime}\), \(\varepsilon_{e}\), \(\hat{\varepsilon}\), \(\varepsilon_{\beta}\), and \(\varepsilon_{\text{PA}}\) to the same value \(\epsilon\). The experimental parameters are set to the values used in the state-of-the-art system, as shown in Table 1. In Fig. 1, we set \(F=1\) GHz and \(l_{a}=l_{b}\), and the source parameters of Alice and Bob are all the same. A genetic algorithm is exploited to globally search for the optimal light intensities and their corresponding probabilities. The gray line shows SKC\({}_{0}\). The results show that as the distance increases, the influence of statistical fluctuations becomes increasingly significant, and the key rate advantage of the new phase error rate estimation method also grows. For example, at a fiber length of 600 km with \(N=10^{14}\), the secret key rate obtained by the new phase error rate estimation method is approximately 1.49 times that of the original method. In the following key rate calculations, we use the new phase error rate estimation method by default.
### Optimal protocol
Figure 2 shows a comparison of the secret key rates of asynchronous MDI-QKD with and without _click filtering_ under symmetrical channels (\(l_{a}=l_{b}\)) and asymmetrical channels (\(l_{a}-l_{b}=100\) km). The parameters are listed in Table 1; \(F=1\) GHz and \(N=10^{13}\) are used. The green dotted line shows the results of using only \([\mu_{a},\mu_{b}]\) coincidences to form the secret key without _click filtering_. In the symmetric channel, Fig. 2(a), we can see that the key rate of asynchronous MDI-QKD with _click filtering_ is always higher than that of asynchronous MDI-QKD without _click filtering_ based on the \([\mu_{a},\mu_{b}]\) group. This is expected since the filtering operation yields a higher number of valid pairs and smaller statistical fluctuations in the estimation process. Moreover, the key rate of asynchronous MDI-QKD with _click filtering_ is higher than that of asynchronous MDI-QKD without _click filtering_ based on four intensity groups at short and medium distances. At a fiber length of 300 km, the secret key rate obtained with _click filtering_ is approximately 1.11 times the one without _click filtering_ based on four intensity groups, and 1.29 times the one based on the \([\mu_{a},\mu_{b}]\) group. The same trend is observed for the asymmetric channel (Fig. 2(b)).
### Asynchronous MDI-QKD Networks
Figure 3 depicts a scalable QKD network setup consisting of numerous users who may freely join or leave the network. Each user node has an asymmetric channel connected to an untrusted relay, through which it can establish a QKD link to the others. The users adjust the sending intensities and corresponding probability values so that each link can obtain the optimal key rate. The experimental parameters used
Figure 1: Secret key rates of the three-intensity asynchronous MDIQKD protocol with _click filtering_ using different phase error rate estimation methods. The numerical results here show that the new phase error rate estimation method has a notable advantage.
here are listed in Table 4. We assume a 4 GHz clock rate [68] and 22-hour transmission time (about \(3.2\times 10^{14}\) quantum pulses for asynchronous MDI-QKD).
Table 2 shows simulated secret key rates for asynchronous MDI-QKD, sending-or-not-sending QKD (SNS-QKD) with actively odd-parity pairing (AOPP) [82], and phase-matching QKD (PM-QKD) [83] in the QKD intercity network. We assume that the quantum transmission duty ratio of the SNS-QKD and PM-QKD systems is 50% [57; 67; 70]. Note that duty cycle ratios are lower in many important TF-QKD experiments, for example, the duty ratio at 402 km is 22.4% in Ref. [61], 45% in Ref. [62], and 40% in Ref. [68]. We can see that asynchronous MDI-QKD enables the key rates of all links to exceed \(\text{SKC}_{0}\). Additionally, asynchronous MDI-QKD always enjoys higher secret key rates per clock than SNS-QKD (AOPP) and PM-QKD.
### Practical advantages of asynchronous MDI-QKD
We simulate the performance of our protocol assuming a 4 GHz clock rate and a 22-hour transmission time. Figure 4 presents the key rate per second versus fiber distance for asynchronous MDI-QKD, together with four-intensity time-bin MDI-QKD [76], SNS-QKD (AOPP) [82], PM-QKD [83], four-phase TF-QKD [68], and four-intensity decoy-state QKD. For SNS-QKD (AOPP), PM-QKD, and four-phase TF-QKD, we set the duty cycle to 50%, Charlie's transmission loss at Alice's (Bob's) side to 2 dB, and the loss-independent misalignment error to 4.8% [70]. We assume an insertion loss on Bob's side of 2 dB and a loss-independent misalignment error of \(e_{m}=0.02\) for decoy-state QKD. The interference misalignment error rate of decoy-state MDI-QKD is set to 0.04, which corresponds to a 26% error rate in the \(\mathbf{X}\) basis. Device parameters are shown in Table 4. The simulation formulas of MDI-QKD and decoy-state QKD are detailed in Appendix D.2 and D.3, respectively. We also include SKC\({}_{0}\) to demonstrate the repeater-like behavior of asynchronous MDI-QKD. The simulation shows that the key rate of our protocol surpasses that of the decoy-state QKD protocol when \(l>170\) km, and it exceeds SKC\({}_{0}\) when \(l>330\) km. In the 170-483 km range, the performance of our protocol is better than that of the other five protocols, especially in the range of 200-300 km. We observe that, in the simulations, the key
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Link & A-B (A-E) & B-C (C-E) & B-D (D-E) & B-E & A-C & A-D & C-D \\ \hline \(\text{SKC}_{0}\) & 5.77 \(\times 10^{3}\) & 4.80 \(\times 10^{3}\) & 1.45 \(\times 10^{4}\) & 2.30 \(\times 10^{3}\) & 1.21 \(\times 10^{4}\) & 3.64 \(\times 10^{4}\) & 3.03 \(\times 10^{4}\) \\ Asynchronous MDI-QKD & 1.47 \(\times 10^{4}\) & 1.36 \(\times 10^{4}\) & 2.05 \(\times 10^{4}\) & 9.46 \(\times 10^{3}\) & 2.36 \(\times 10^{4}\) & 4.04 \(\times 10^{4}\) & 3.56 \(\times 10^{4}\) \\ SNS-QKD (AOPP) & 1.18 \(\times 10^{4}\) & 1.09 \(\times 10^{4}\) & 1.64 \(\times 10^{4}\) & 7.53 \(\times 10^{3}\) & 1.78 \(\times 10^{4}\) & 3.05 \(\times 10^{4}\) & 2.72 \(\times 10^{4}\) \\ PM-QKD & 2.56 \(\times 10^{3}\) & 2.40 \(\times 10^{3}\) & 3.22 \(\times 10^{3}\) & 1.71 \(\times 10^{3}\) & 4.19 \(\times 10^{3}\) & 6.91 \(\times 10^{3}\) & 6.01 \(\times 10^{3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulated secret key rates per second for asynchronous MDI-QKD, SNS-QKD with the AOPP method, and PM-QKD in the QKD network shown in Fig. 3 using the parameters in Table 4. The system clock frequency is 4 GHz and the transmission time is 22 hours. Here, link A-B represents that user A communicates with user B. The sending intensities and corresponding probabilities are selected by the users to obtain the optimal key rate for each link. Note that here we consider a 50% duty cycle for the TF-type protocols [57; 67; 70].
Figure 2: Comparison of the secret key rates of asynchronous MDIQKD with and without _click filtering_ under two types of channels: (a) symmetric channel \(l_{a}=l_{b}\) and (b) asymmetric channels \(l_{a}-l_{b}=100\) km.
rates of decoy-state QKD surpass those of original time-bin MDI-QKD due to the influence of the dark count rate and the finite-key analysis. Within the 0-45 km range, the original time-bin MDI-QKD demonstrates a greater rate than asynchronous MDI-QKD. The lower key rate of asynchronous MDI-QKD at short distances (less than 45 km) is attributed to the stronger signal-state intensity in original MDI-QKD, approaching 1, which results in a higher number of single-photon pairs in the \(\mathbf{Z}\) basis. In Table 3, we present the bits-per-second (bps) values of asynchronous MDI-QKD at various typical distances, employing device parameters identical to those used in Fig. 4. Our protocol can generate a secret key rate of 0.31 Mbps at a fiber length of 200 km, thereby rendering it adequate for key-demanding applications such as real-time one-time-pad secure audio encryption in intra- and inter-urban areas.
## V Discussion and Conclusion
Here, we point out two conceptual differences between asynchronous MDI-QKD and original MDI-QKD.
i. Coincidence pairing. In original MDI-QKD, the expected yields of single-photon pairs in the \(\mathbf{Z}\) and \(\mathbf{X}\) bases satisfy the relation \(Y_{11}^{z*}=Y_{11}^{x*}\)[31, 33]. However, since asynchronous MDI-QKD uses post-measurement coincidence pairing, there is no concept of the expected total pair number. Therefore, the 'gain' and 'yield' cannot be calculated.
ii. Overcompleteness. For asynchronous MDI-QKD, the terms three-intensity and four-intensity refer to the number of light intensities sent, and the intensities in the different bases after pairing are correlated.
In the original MDI-QKD protocol, an important idea is the double-scanning method [76]. We have applied the double-scanning method to asynchronous MDI-QKD; the derivation details are shown in Appendix B. However, numerical results show that the method does not improve the three-intensity asynchronous MDI-QKD protocol [84]. We remark that this phenomenon may be caused by the above two characteristics. For three-intensity asynchronous MDI-QKD, there are three intensities in each of the \(\mathbf{Z}\) and \(\mathbf{X}\) bases after coincidence pairing, whereas in the original three-intensity MDI-QKD there is only one intensity in the \(\mathbf{Z}\) basis. This means that in asynchronous MDI-QKD we can directly use the \(\mathbf{Z}\)-basis data to tightly estimate the number of single-photon pairs in the \(\mathbf{Z}\) basis, rather than inefficiently inferring it from \(\mathbf{X}\)-basis data. Additionally, the overcompleteness of asynchronous MDI-QKD means that the intensities used to estimate the number of \(\mathbf{Z}\)-basis single-photon pairs and those used to estimate the \(\mathbf{X}\)-basis phase error rate are correlated (\(\mathbf{Z}\) basis: \(\mu\) and \(\nu\); \(\mathbf{X}\) basis: \(2\mu\) and \(2\nu\)). In contrast, in the original MDI-QKD the intensity settings and decoy-state estimation for the \(\mathbf{Z}\) and \(\mathbf{X}\) bases are independent, so double scanning is effective there.
Furthermore, in the original MDI-QKD protocol, performance can be improved by increasing the number of decoy states, as in four-intensity MDI-QKD [31]. We have also calculated the key rate of the four-intensity asynchronous MDI-QKD protocol, in which the intensity of each laser pulse is randomly set to one of the four intensities \(\mu_{a(b)}\) (signal), \(\omega_{a(b)}\) (decoy 1), \(\nu_{a(b)}\) (decoy 2), and \(o_{a(b)}\) (vacuum), with \(\mu_{a(b)}>\omega_{a(b)}>\nu_{a(b)}>o_{a(b)}=0\). The detailed calculation of the protocol is presented in Appendix C. Comparing the secret key rates of the three-intensity and
Figure 3: Example of a scalable QKD network setup consisting of numerous users who may freely join or leave the network. Each user node has an asymmetric channel connected to an untrusted relay, through which it can establish a QKD link to others.
Figure 4: Simulated secret key rates for asynchronous MDI-QKD, original time-bin MDI-QKD, decoy-state QKD, SNS-QKD with the AOPP method, PM-QKD, and four-phase TF-QKD under the state-of-the-art system.
four-intensity asynchronous MDI-QKD protocol with _click filtering_, we find that the optimal key rates for the four-intensity decoy-state method are nearly equal to the results for the three-intensity decoy-state method [84]. We remark that this situation is also due to overcompleteness. Therefore, the three-intensity asynchronous MDI-QKD protocol is a good trade-off between key rate performance and ease of implementation.
In this work, we have presented an analysis of the practical aspects of asynchronous MDI-QKD. We have provided refined decoy-state methods that enable higher-rate asynchronous MDI-QKD. The numerical results of different asynchronous MDI-QKD protocols demonstrate that the three-intensity protocol, with a _click filtering_ operation, can provide a favorable balance between performance and ease of implementation. We have introduced the decoy-state method for the asymmetric situation, which permits the direct application of our protocol to asynchronous MDI-QKD experiments with asymmetric channels. Our work also provides important insights into asynchronous MDI-QKD: the decoy-state analysis for the \(\mathbf{Z}\) and \(\mathbf{X}\) bases of asynchronous MDI-QKD are overcomplete, rendering the introduction of double scanning and additional decoy states ineffective for key rate improvement. With its superior performance and straightforward design, asynchronous MDI-QKD holds strong potential in future quantum networks spanning 200 to 400 km. We anticipate the application of the asynchronous concept to MDI multiparty quantum communication tasks, such as quantum conference key agreement [85], quantum secret sharing [85], and quantum digital signatures [86].
**ACKNOWLEDGMENTS**
The authors acknowledge Z. Yuan and L. Zhou for the insightful discussions. This work has been supported by the National Natural Science Foundation of China (No. 12274223), the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei New Area (No. ZDYD20210101), the Program for Innovative Talents and Entrepreneurs in Jiangsu (No. JSSCRC2021484), and the Program of Song Shan Laboratory (Included in the management of Major Science and Technology Program of Henan Province) (No. 221100210800).
## Appendix A Analytic results of joint constraints
Here, we introduce the joint-constraints method to obtain tighter bounds. Without loss of generality, we take Eq. (6) as an example; similar operations can be applied to the other parameters. We can rewrite Eq. (6) as
\[\underline{s}_{11}^{z*}\geq\frac{\sum_{\tilde{k}_{a},\tilde{k}_{b}}\left(\tilde{k}_{a}\tilde{k}_{b}e^{-\tilde{k}_{a}-\tilde{k}_{b}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}\right)}{\nu_{a}\nu_{b}\mu_{a}\mu_{b}(\mu^{\prime}-\nu^{\prime})}\left(\underline{S}_{1}^{*}-\overline{S}_{2}^{*}\right), \tag{11}\]
where
\[\begin{split}S_{1}=&\ \mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{a}+\nu_{b}}\frac{n_{[\nu_{a},\nu_{b}]}}{p_{[\nu_{a},\nu_{b}]}}+\nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{b}}\frac{n_{[o_{a},\mu_{b}]}}{p_{[o_{a},\mu_{b}]}}\\ &+\nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{a}}\frac{n_{[\mu_{a},o_{b}]}}{p_{[\mu_{a},o_{b}]}}+(\mu_{a}\mu_{b}\mu^{\prime}-\nu_{a}\nu_{b}\nu^{\prime})\frac{n_{[o_{a},o_{b}]}}{p_{[o_{a},o_{b}]}},\end{split} \tag{12}\]
and
\[S_{2}= \nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{a}+\mu_{b}}\frac{n_{[\mu_{a},\mu_{b}]}}{p_{[\mu_{a},\mu_{b}]}}+\mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{b}}\frac{n_{[o_{a},\nu_{b}]}}{p_{[o_{a},\nu_{b}]}}+\mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{a}}\frac{n_{[\nu_{a},o_{b}]}}{p_{[\nu_{a},o_{b}]}}. \tag{13}\]
For \(\underline{S}_{1}^{*}\), we define
\[S_{1}:= a_{1}\gamma_{1}+a_{2}\gamma_{2}+a_{3}\gamma_{3}+a_{4}\gamma_{4}, \tag{14}\]
where \(a_{1}=\frac{\mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{a}+\nu_{b}}}{p_{[\nu_{a},\nu_{b}]}}\), \(\gamma_{1}=n_{[\nu_{a},\nu_{b}]}\), \(a_{2}=\frac{\nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{b}}}{p_{[o_{a},\mu_{b}]}}\), \(\gamma_{2}=n_{[o_{a},\mu_{b}]}\), \(a_{3}=\frac{\nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{a}}}{p_{[\mu_{a},o_{b}]}}\), \(\gamma_{3}=n_{[\mu_{a},o_{b}]}\), \(a_{4}=\frac{\mu_{a}\mu_{b}\mu^{\prime}-\nu_{a}\nu_{b}\nu^{\prime}}{p_{[o_{a},o_{b}]}}\), \(\gamma_{4}=n_{[o_{a},o_{b}]}\). Denoting \(\{b_{1},b_{2},b_{3},b_{4}\}\) as the ascending order of \(\{a_{1},a_{2},a_{3},a_{4}\}\), and \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\), \(\xi_{4}\) as the corresponding rearrangement of \(\{\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\}\), then we have the lower bound of \(S_{1}^{*}\)[76]:
\[\begin{split}\underline{S}_{1}^{*}:=&\ b_{1}\underline{\big{(}\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4}\big{)}}^{*}+(b_{2}-b_{1})\underline{\big{(}\xi_{2}+\xi_{3}+\xi_{4}\big{)}}^{*}\\ &+(b_{3}-b_{2})\underline{\big{(}\xi_{3}+\xi_{4}\big{)}}^{*}+(b_{4}-b_{3})\underline{\xi}_{4}^{*}. \end{split} \tag{15}\]
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Data size & \(10^{12}\) & \(5\times 10^{12}\) & \(10^{13}\) & \(10^{13}\) & \(5\times 10^{13}\) & \(5\times 10^{13}\) \\ \cline{2-7} Distance (km) & 50 & 100 & 150 & 200 & 250 & 300 \\ \hline Secret key rate & 6.02 Mbps & 2.29 Mbps & 855.40 kbps & 305.05 kbps & 129.60 kbps & 46.671 kbps \\ \hline \hline \end{tabular}
\end{table}
Table 3: The secret key rates of the three-intensity asynchronous MDI-QKD protocol with _click filtering_. Here the fiber loss is 0.16 dB/km; the clock rate is 4 GHz; the dark count rate is 0.1 Hz; and the detection efficiency is \(\eta_{d}=\) 80%.

For \(\overline{S}_{2}^{*}\), we define

\[S_{2}:= c_{1}\kappa_{1}+c_{2}\kappa_{2}+c_{3}\kappa_{3}, \tag{16}\]

where \(c_{1}=\frac{\nu_{a}\nu_{b}\nu^{\prime}e^{\mu_{a}+\mu_{b}}}{p_{[\mu_{a},\mu_{b}]}}\), \(\kappa_{1}=n_{[\mu_{a},\mu_{b}]}\), \(c_{2}=\frac{\mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{b}}}{p_{[o_{a},\nu_{b}]}}\), \(\kappa_{2}=n_{[o_{a},\nu_{b}]}\), \(c_{3}=\frac{\mu_{a}\mu_{b}\mu^{\prime}e^{\nu_{a}}}{p_{[\nu_{a},o_{b}]}}\), \(\kappa_{3}=n_{[\nu_{a},o_{b}]}\). Denoting \(\{d_{1},d_{2},d_{3}\}\) as the ascending order of \(\{c_{1},c_{2},c_{3}\}\), and \(\chi_{1}\), \(\chi_{2}\), \(\chi_{3}\) as the corresponding rearrangement of \(\{\kappa_{1},\kappa_{2},\kappa_{3}\}\), then we have the upper bound of \(S_{2}^{*}\)[76]:
\[\begin{split}\overline{S}_{2}^{*}=&\ d_{1}\,\overline{\big{(}\chi_{1}+\chi_{2}+\chi_{3}\big{)}}^{*}+(d_{2}-d_{1})\,\overline{\big{(}\chi_{2}+\chi_{3}\big{)}}^{*}\\ &+(d_{3}-d_{2})\,\overline{\chi}_{3}^{*}.\end{split} \tag{17}\]
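The ordering-and-telescoping structure of the two bounds above is easy to implement. The Python sketch below is our illustration, not code from the paper: the callables `exp_lower` and `exp_upper`, which map an observed partial sum to a lower/upper bound on its expected value (e.g., via the variant of the Chernoff bound in Appendix E), are assumed inputs.

```python
import numpy as np

def joint_lower(a, gamma, exp_lower):
    """Joint-constraints lower bound on S* = sum_i a_i E[gamma_i]:
    sort the coefficients ascending and bound the expected tail sums."""
    order = np.argsort(a)
    b, xi = np.asarray(a, float)[order], np.asarray(gamma, float)[order]
    bound, prev = 0.0, 0.0
    for i in range(len(b)):
        # weights (b_i - b_{i-1}) multiply the lower-bounded tail xi_i + ... + xi_n
        bound += (b[i] - prev) * exp_lower(xi[i:].sum())
        prev = b[i]
    return bound

def joint_upper(c, kappa, exp_upper):
    """Analogous upper bound on S* = sum_i c_i E[kappa_i]."""
    order = np.argsort(c)
    d, chi = np.asarray(c, float)[order], np.asarray(kappa, float)[order]
    bound, prev = 0.0, 0.0
    for i in range(len(d)):
        bound += (d[i] - prev) * exp_upper(chi[i:].sum())
        prev = d[i]
    return bound
```

Bounding the tail sums \(\xi_{i}+\cdots+\xi_{n}\) jointly, rather than each \(\xi_{i}\) separately, is precisely what makes these estimates tighter than the term-by-term approach.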
## Appendix B Decoy-state estimation with the double-scanning method
Here we apply the double-scanning method to asynchronous MDI-QKD. Using the decoy-state method, we can estimate the lower bound of the number of single-photon pairs in the \(\mathbf{X}\) basis
\[\underline{s}_{11}^{x*}=\frac{e^{-2\nu_{a}-2\nu_{b}}p_{[2\nu_{a},2\nu_{b}]}}{\mu_{a}\mu_{b}(\tilde{\mu}^{\prime}-\tilde{\nu}^{\prime})}(\underline{S}^{+*}-\overline{S}^{-*}-\overline{H}^{*}), \tag{11}\]
where
\[\begin{cases}\tilde{\mu}^{\prime}=2\mu_{a},&\tilde{\nu}^{\prime}=2\nu_{a},\text{ if }\ \ \frac{\mu_{a}}{\mu_{b}}\leq\frac{\nu_{a}}{\nu_{b}},\\ \tilde{\mu}^{\prime}=2\mu_{b},&\tilde{\nu}^{\prime}=2\nu_{b},\text{ if }\ \ \frac{\mu_{a}}{\mu_{b}}>\frac{\nu_{a}}{\nu_{b}},\end{cases} \tag{12}\]
and
\[\begin{split} S^{+*}=&\ \mu_{a}\mu_{b}\tilde{\mu}^{\prime}e^{2\nu_{a}+2\nu_{b}}\frac{n_{[2\nu_{a},2\nu_{b}]}^{*}}{p_{[2\nu_{a},2\nu_{b}]}}+\nu_{a}\nu_{b}\tilde{\nu}^{\prime}e^{2\mu_{b}}\frac{n_{[o_{a},2\mu_{b}]}^{*}}{p_{[o_{a},2\mu_{b}]}}\\ &+\nu_{a}\nu_{b}\tilde{\nu}^{\prime}e^{2\mu_{a}}\frac{n_{[2\mu_{a},o_{b}]}^{*}}{p_{[2\mu_{a},o_{b}]}}. \end{split} \tag{13}\]
The upper bound of the bit error rate of single-photon pairs in the \(\mathbf{X}\) basis \(e_{11}^{x*}\) satisfies
\[\overline{e}_{11}^{x*}=\frac{1}{\mu_{a}\mu_{b}\tilde{\mu}^{\prime}e^{2\nu_{a}+2\nu_{b}}\underline{s}_{11}^{x*}}\left(\mu_{a}\mu_{b}\tilde{\mu}^{\prime}e^{2\nu_{a}+2\nu_{b}}\frac{\underline{m}_{[2\nu_{a},2\nu_{b}]}^{*}}{p_{[2\nu_{a},2\nu_{b}]}}-\frac{H}{2}\right). \tag{14}\]
Denote \(\tilde{n}_{[2\nu_{a},2\nu_{b}]}=n_{[2\nu_{a},2\nu_{b}]}-m_{[2\nu_{a},2\nu_{b}]}\). We can divide the effective \([2\nu_{a},2\nu_{b}]\) coincidences into two kinds of events: the right effective events, whose total number is \(\tilde{n}_{[2\nu_{a},2\nu_{b}]}\), and the wrong effective events, whose total number is \(m_{[2\nu_{a},2\nu_{b}]}\). Denote \(M=\mu_{a}\mu_{b}\tilde{\mu}^{\prime}e^{2\nu_{a}+2\nu_{b}}\underline{m}_{[2\nu_{a},2\nu_{b}]}^{*}/p_{[2\nu_{a},2\nu_{b}]}\). We can rewrite Eq. (11) as
\[\underline{s}_{11}^{x*}=\frac{e^{-2\nu_{a}-2\nu_{b}}p_{[2\nu_{a},2\nu_{b}]}}{\mu_{a}\mu_{b}(\tilde{\mu}^{\prime}-\tilde{\nu}^{\prime})}(\underline{\tilde{S}}^{+*}-\overline{S}^{-*}+\underline{M}^{*}-\overline{H}^{*}), \tag{15}\]
where
\[\begin{split}\tilde{S}^{+*}=&\ \mu_{a}\mu_{b}\tilde{\mu}^{\prime}e^{2\nu_{a}+2\nu_{b}}\frac{\tilde{n}_{[2\nu_{a},2\nu_{b}]}^{*}}{p_{[2\nu_{a},2\nu_{b}]}}+\nu_{a}\nu_{b}\tilde{\nu}^{\prime}e^{2\mu_{b}}\frac{n_{[o_{a},2\mu_{b}]}^{*}}{p_{[o_{a},2\mu_{b}]}}\\ &+\nu_{a}\nu_{b}\tilde{\nu}^{\prime}e^{2\mu_{a}}\frac{n_{[2\mu_{a},o_{b}]}^{*}}{p_{[2\mu_{a},o_{b}]}}, \end{split} \tag{16}\]
\[\begin{split} S^{-*}=&\ \nu_{a}\nu_{b}\tilde{\nu}^{\prime}e^{2\mu_{a}+2\mu_{b}}\frac{n_{[2\mu_{a},2\mu_{b}]}^{*}}{p_{[2\mu_{a},2\mu_{b}]}}+\nu_{a}\nu_{b}\tilde{\nu}^{\prime}\frac{n_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}},\\ H^{*}=&\ \mu_{a}\mu_{b}\tilde{\mu}^{\prime}\left(e^{2\nu_{b}}\frac{n_{[o_{a},2\nu_{b}]}^{*}}{p_{[o_{a},2\nu_{b}]}}+e^{2\nu_{a}}\frac{n_{[2\nu_{a},o_{b}]}^{*}}{p_{[2\nu_{a},o_{b}]}}-\frac{n_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}}\right).\end{split} \tag{17}\]
For each group \((H,M)\), we can calculate \(e_{11}^{x*}\) with Eqs. (14) and (15)
\[\overline{e}_{11}^{x*}=\frac{(\tilde{\mu}^{\prime}-\tilde{\nu}^{\prime})(M-H/2)}{\tilde{\mu}^{\prime}(\tilde{S}^{+}-S^{-}+M-H)}. \tag{18}\]
By scanning \((H,M)\)[76], we can get the worst case for \(e_{11}^{x*}\), i.e.,
\[\max e_{11}^{x*}\] (19) s.t. \[\underline{\underline{H}}\leq H\leq\overline{H}, \tag{20}\] \[\underline{\underline{M}}\leq M\leq\overline{M}.\]
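A brute-force version of this scan is a few lines of Python; the grid size is an illustrative choice, and `e11x` simply transcribes Eq. (18). In practice one can exploit the monotonicity of Eq. (18) in \(H\) and \(M\) instead of searching a full grid.

```python
import numpy as np

def e11x(H, M, S_plus, S_minus, mu_t, nu_t):
    """Eq. (18): phase-error estimate for a given (H, M) pair."""
    return (mu_t - nu_t) * (M - H / 2) / (mu_t * (S_plus - S_minus + M - H))

def double_scan(H_lo, H_hi, M_lo, M_hi, S_plus, S_minus, mu_t, nu_t, n=201):
    """Worst case of e_11^x over the rectangle [H_lo, H_hi] x [M_lo, M_hi]."""
    H, M = np.meshgrid(np.linspace(H_lo, H_hi, n), np.linspace(M_lo, M_hi, n))
    return np.max(e11x(H, M, S_plus, S_minus, mu_t, nu_t))
```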
With the formulas in Eqs. (2), (3), (5), (6), and (13), we can get the final key rate.
## Appendix C Four-intensity asynchronous MDI-QKD protocol
Here, we provide the decoy-state method for four-intensity asynchronous MDI-QKD with _click filtering_. The core difference in the parameter estimation steps between the four-intensity protocol and the three-intensity protocol is to estimate the lower bound of the number of single-photon pairs in the \(\mathbf{Z}\) basis. In the four-intensity protocol with _click filtering_, \(s_{11}^{z*}\) is bounded by
\[\begin{split}\underline{s}_{11}^{z*}=&\ \frac{\sum_{\tilde{k}_{a},\tilde{k}_{b}}\left(\tilde{k}_{a}\tilde{k}_{b}e^{-\tilde{k}_{a}-\tilde{k}_{b}}p_{[\tilde{k}_{a},\tilde{k}_{b}]}\right)}{\nu_{a}\nu_{b}\omega_{a}\omega_{b}(\omega^{\prime}-\nu^{\prime})}\\ &\times\left[\omega_{a}\omega_{b}\omega^{\prime}\left(e^{\nu_{a}+\nu_{b}}\frac{\underline{n}_{[\nu_{a},\nu_{b}]}^{*}}{p_{[\nu_{a},\nu_{b}]}}-e^{\nu_{b}}\frac{\overline{n}_{[o_{a},\nu_{b}]}^{*}}{p_{[o_{a},\nu_{b}]}}-e^{\nu_{a}}\frac{\overline{n}_{[\nu_{a},o_{b}]}^{*}}{p_{[\nu_{a},o_{b}]}}+\frac{\underline{n}_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}}\right)\right.\\ &\left.-\nu_{a}\nu_{b}\nu^{\prime}\left(e^{\omega_{a}+\omega_{b}}\frac{\overline{n}_{[\omega_{a},\omega_{b}]}^{*}}{p_{[\omega_{a},\omega_{b}]}}-e^{\omega_{b}}\frac{\underline{n}_{[o_{a},\omega_{b}]}^{*}}{p_{[o_{a},\omega_{b}]}}-e^{\omega_{a}}\frac{\underline{n}_{[\omega_{a},o_{b}]}^{*}}{p_{[\omega_{a},o_{b}]}}+\frac{\overline{n}_{[o_{a},o_{b}]}^{*}}{p_{[o_{a},o_{b}]}}\right)\right], \end{split} \tag{21}\]
and \(p_{[k_{a}^{\text{tot}},k_{b}^{\text{tot}}]}\) is defined in Eq. (4). When _click filtering_ is not applied, \(p_{s}=1\); otherwise \(p_{s}=1-p_{\mu_{a}}p_{\omega_{b}}-p_{\mu_{a}}p_{\nu_{b}}-p_{\omega_{a}}p_{\mu_{b}}-p_{\omega_{a}}p_{\nu_{b}}-p_{\nu_{a}}p_{\mu_{b}}-p_{\nu_{a}}p_{\omega_{b}}\). Similarly, we use the technique of joint constraints to obtain a tight estimate of \(s_{11}^{z*}\). The calculation of the remaining parameter values can directly utilize Eqs. (2), (3), and (8)-(13).
## Appendix D Simulation formulas
The experimental parameters used for performance comparison of these protocols, asynchronous MDI-QKD, decoy-state QKD, SNS-QKD (AOPP), PM-QKD, and four-phase TF-QKD, are listed in Table 4.
### Simulation formulas for asynchronous MDI-QKD
In asynchronous MDI-QKD, suppose Alice and Bob send intensities \(k_{a}\) and \(k_{b}\) with phase difference \(\theta\), the overall gain is given by [Eq. (C22) in Ref. [75]]
\[\begin{split} q_{(k_{a}|k_{b})}=&\ y_{(k_{a}|k_{b})}^{L}I_{0}\left(\eta_{d}^{L}\sqrt{\eta_{a}k_{a}\eta_{b}k_{b}}\right)+y_{(k_{a}|k_{b})}^{R}I_{0}\left(\eta_{d}^{R}\sqrt{\eta_{a}k_{a}\eta_{b}k_{b}}\right)\\ &-2y_{(k_{a}|k_{b})}^{L}y_{(k_{a}|k_{b})}^{R}I_{0}\left[(\eta_{d}^{L}-\eta_{d}^{R})\sqrt{\eta_{a}k_{a}\eta_{b}k_{b}}\right], \end{split} \tag{10}\]
where \(y_{(k_{a}|k_{b})}^{L(R)}=\left(1-p_{d}^{L(R)}\right)e^{-\eta_{d}^{L(R)}(\eta_{a}k_{a}+\eta_{b}k_{b})/2}\); \(\eta_{d}^{L}\) (\(\eta_{d}^{R}\)) and \(p_{d}^{L}\) (\(p_{d}^{R}\)) are the detection efficiency and the dark count rate of the detector \(D_{L}\) (\(D_{R}\)), respectively; \(\eta_{a}=10^{-\alpha l_{a}/10}\) and \(\eta_{b}=10^{-\alpha l_{b}/10}\) are the transmittances of the fibers of lengths \(l_{a}\) and \(l_{b}\), with \(\alpha\) the fiber loss coefficient; \(I_{0}(x)\) refers to the zero-order modified Bessel function of the first kind.
We define \(N_{T_{c}}=FT_{c}\) as the number of time bins within time interval \(T_{c}\). The total number of valid successful pairing results is [Eq. (C24) in Ref. [75]]
\[n_{\text{tot}}=\frac{Nq_{\text{tot}}}{1+1/q_{T_{c}}}, \tag{11}\]
where \(q_{\text{tot}}\) is the probability of having a click event, and \(q_{T_{c}}=1-(1-q_{\text{tot}})^{N_{T_{c}}}\) is the probability that at least one click event occurs within the time interval \(T_{c}\) after a click time bin. When using the matching method without click filtering, \(q_{\text{tot}}=\sum_{k_{a},k_{b}}p_{k_{a}}p_{k_{b}}q_{(k_{a}|k_{b})}\); when using the matching method with click filtering, \(q_{\text{tot}}=\sum_{k_{a},k_{b}}p_{k_{a}}p_{k_{b}}q_{(k_{a}|k_{b})}-p_{\mu_{a}}p_{\nu_{b}}q_{(\mu_{a}|\nu_{b})}-p_{\nu_{a}}p_{\mu_{b}}q_{(\nu_{a}|\mu_{b})}\). The average of the pairing interval can be given by [Eq. (C25) in Ref. [75]]
\[T_{\text{mean}}=\frac{1-N_{T_{c}}q_{\text{tot}}(1/q_{T_{c}}-1)}{Fq_{\text{tot }}}. \tag{12}\]
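As a sanity check of the two formulas above, the following sketch evaluates \(n_{\text{tot}}\) and \(T_{\text{mean}}\) for a given per-bin click probability; the numbers in the example call are placeholders rather than parameters taken from Table 4.

```python
def pairing_stats(N, F, T_c, q_tot):
    """Total number of valid pairs and mean pairing interval
    (Eqs. (C24)-(C25) of Ref. [75])."""
    N_Tc = F * T_c                               # time bins within T_c
    q_Tc = 1.0 - (1.0 - q_tot) ** N_Tc           # >= 1 click within T_c
    n_tot = N * q_tot / (1.0 + 1.0 / q_Tc)
    T_mean = (1.0 - N_Tc * q_tot * (1.0 / q_Tc - 1.0)) / (F * q_tot)
    return n_tot, T_mean

# placeholder example: 4 GHz clock, T_c = 1 microsecond, q_tot = 1e-4
print(pairing_stats(N=1e13, F=4e9, T_c=1e-6, q_tot=1e-4))
```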
The total number of pairs in the set \(\mathcal{S}_{[k^{\text{tot}}_{a},k^{\text{tot}}_{b}]}\) (except the set \(\mathcal{S}_{[2\nu_{a},2\nu_{b}]}\)) is [Eq. (C26) in Ref. [75]]
\[n_{[k^{\text{tot}}_{a},k^{\text{tot}}_{b}]}=n_{\text{tot}}\times\sum_{k^{e}_{a}+k^{l}_{a}=k^{\text{tot}}_{a}}\ \sum_{k^{e}_{b}+k^{l}_{b}=k^{\text{tot}}_{b}}\left(\frac{p_{k^{e}_{a}}p_{k^{e}_{b}}q_{(k^{e}_{a}|k^{e}_{b})}}{q_{\text{tot}}}\,\frac{p_{k^{l}_{a}}p_{k^{l}_{b}}q_{(k^{l}_{a}|k^{l}_{b})}}{q_{\text{tot}}}\right), \tag{13}\]
where the superscripts \(e\) and \(l\) label the intensities sent in the earlier and later click time bins of a pair.
The total number of pairs in the set \(\mathcal{S}_{[2\nu_{a},2\nu_{b}]}\) is [Eq. (C27) in Ref. [75]]
\[n_{[2\nu_{a},2\nu_{b}]}=\frac{n_{\text{tot}}}{M\pi}\int_{0}^{2\pi}\left(\frac{p_{\nu_{a}}p_{\nu_{b}}q_{(\nu_{a}|\nu_{b})}^{\theta}}{q_{\text{tot}}}\right)^{2}d\theta. \tag{14}\]
The total number of errors in the \(\mathbf{X}\) basis can be written as [Eq. (C28) in Ref. [75]]
\[m_{[2\nu_{a},2\nu_{b}]}= \frac{n_{\text{tot}}}{M\pi}p_{\nu_{a}}^{2}p_{\nu_{b}}^{2}\times \tag{15}\] \[\int_{0}^{2\pi}\Bigg{\{}(1- E_{\text{HOM}})\frac{\Big{[}q_{(\nu_{a}|\nu_{b})}^{\theta,L}q_{(\nu_{a}| \nu_{b})}^{\theta+\delta,R}+q_{(\nu_{a}|\nu_{b})}^{\theta,R}q_{(\nu_{a}|\nu_{b}) }^{\theta+\delta,L}\Big{]}}{q_{\text{tot}}^{2}}\] \[+ E_{\text{HOM}}\frac{\Big{[}q_{(\nu_{a}|\nu_{b})}^{\theta,L}q_{(\nu_{ a}|\nu_{b})}^{\theta+\delta,L}+q_{(\nu_{a}|\nu_{b})}^{\theta,R}q_{(\nu_{a}|\nu_{b})}^{ \theta+\delta,R}\Big{]}}{q_{\text{tot}}^{2}}\Bigg{\}}d\theta,\]
where \(E_{\text{HOM}}\) is the interference misalignment error rate, and \(\delta=T_{\text{mean}}(2\pi\Delta\nu+\omega_{\text{fib}})\) is the phase misalignment resulting from the fiber phase drift rate \(\omega_{\text{fib}}\) and laser frequency difference \(\Delta\nu\).
### Simulation formulas for four-intensity MDI-QKD
We denote the number and error number of detection events when Alice sends intensity \(k_{a}\) (\(k_{a}\in\{\mu_{a},\nu_{a},\omega_{a},o_{a}\}\)) and Bob sends \(k_{b}\) (\(k_{b}\in\{\mu_{b},\nu_{b},\omega_{b},o_{b}\}\)) in the \(\mathbf{Z}\) (\(\mathbf{X}\)) basis as \(n_{k_{a}k_{b}}^{z(x)}\) and \(m_{k_{a}k_{b}}^{z(x)}\), respectively. The key rate of time-bin MDI-QKD is [29; 76]
\[\begin{split} R=&\ \frac{1}{N^{\prime}}\left\{\underline{n}_{0}^{z}+\underline{n}_{11}^{z}\left[1-H_{2}\left(\overline{\phi}_{11}^{z}\right)\right]-\lambda_{\text{EC}}\right.\\ &\left.-\log_{2}\frac{2}{\varepsilon_{\text{cor}}}-2\log_{2}\frac{2}{\varepsilon^{\prime}\hat{\varepsilon}}-2\log_{2}\frac{1}{2\varepsilon_{\text{PA}}}\right\}, \end{split} \tag{47}\]
where \(\lambda_{\text{EC}}=n_{\mu_{a}\mu_{b}}^{z}fH_{2}\left(\frac{m_{\mu_{a}\mu_{b}}^{z}}{n _{\mu_{a}\mu_{b}}^{z}}\right)\).
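The error-correction leakage is a one-line computation once the binary entropy \(H_{2}\) is available; the value \(f=1.1\) in this sketch is a common but assumed choice of the error-correction efficiency.

```python
import numpy as np

def H2(x):
    """Binary Shannon entropy with the convention H2(0) = H2(1) = 0."""
    x = np.clip(x, 1e-15, 1.0 - 1e-15)
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

def lambda_EC(n_z, m_z, f=1.1):
    """lambda_EC = n f H2(m / n) for the Z-basis counts n and errors m."""
    return n_z * f * H2(m_z / n_z)
```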
Here we use the decoy-state analysis to consider the complete finite-key effects and apply the double-scanning method to MDI-QKD [76]. The corresponding parameters in Eq. (47) can be given by
\[\begin{split}\underline{n}_{0}^{z*}&=\max\left\{\frac{e^{-\mu_{a}}p_{\mu_{a}}}{p_{o_{a}}}\underline{n}_{o_{a}\mu_{b}}^{z*},\frac{e^{-\mu_{b}}p_{\mu_{b}}}{p_{o_{b}}}\underline{n}_{\mu_{a}o_{b}}^{z*}\right\},\\ \underline{n}_{11}^{z*}&=\frac{\mu_{a}\mu_{b}e^{-\mu_{a}-\mu_{b}}p_{\mu_{a}}p_{\mu_{b}}}{\nu_{a}\nu_{b}\omega_{a}\omega_{b}(\omega^{\prime}-\nu^{\prime})}\left(\underline{P}^{+*}-\overline{P}^{-*}+\underline{\hat{M}}^{*}-\overline{\hat{H}}^{*}\right),\\ \overline{t}_{11}^{x*}&=\frac{1}{\omega_{a}\omega_{b}\omega^{\prime}e^{\nu_{a}+\nu_{b}}}\left(\hat{M}-\frac{\hat{H}}{2}\right),\\ \overline{t}_{11}^{z*}&=\frac{\mu_{a}\mu_{b}e^{-\mu_{a}-\mu_{b}}p_{\mu_{a}}p_{\mu_{b}}}{\nu_{a}\nu_{b}e^{-\nu_{a}-\nu_{b}}p_{\nu_{a}}p_{\nu_{b}}}\overline{t}_{11}^{x*},\\ \overline{\phi}_{11}^{z}&=\frac{\overline{t}_{11}^{z}}{\underline{n}_{11}^{z}},\end{split} \tag{48}\]
where
\[\begin{cases}\omega^{\prime}=\omega_{a},&\nu^{\prime}=\nu_{a}\quad\text{if}\quad\frac{\omega_{a}}{\omega_{b}}\leq\frac{\nu_{a}}{\nu_{b}},\\ \omega^{\prime}=\omega_{b},&\nu^{\prime}=\nu_{b}\quad\text{if}\quad\frac{\omega_{a}}{\omega_{b}}>\frac{\nu_{a}}{\nu_{b}},\end{cases} \tag{49}\]
and
\[\begin{split} P^{+*}=&\ \omega_{a}\omega_{b}\omega^{\prime}e^{\nu_{a}+\nu_{b}}\frac{n_{\nu_{a}\nu_{b}}^{x*}-m_{\nu_{a}\nu_{b}}^{x*}}{p_{\nu_{a}}p_{\nu_{b}}}+\nu_{a}\nu_{b}\nu^{\prime}e^{\omega_{a}}\frac{n_{\omega_{a}o_{b}}^{x*}}{p_{\omega_{a}}p_{o_{b}}}+\nu_{a}\nu_{b}\nu^{\prime}e^{\omega_{b}}\frac{n_{o_{a}\omega_{b}}^{x*}}{p_{o_{a}}p_{\omega_{b}}},\\ P^{-*}=&\ \nu_{a}\nu_{b}\nu^{\prime}e^{\omega_{a}+\omega_{b}}\frac{n_{\omega_{a}\omega_{b}}^{x*}}{p_{\omega_{a}}p_{\omega_{b}}}+\nu_{a}\nu_{b}\nu^{\prime}\frac{n_{o_{a}o_{b}}^{x*}}{p_{o_{a}}p_{o_{b}}},\\ \hat{M}^{*}=&\ \omega_{a}\omega_{b}\omega^{\prime}e^{\nu_{a}+\nu_{b}}\frac{m_{\nu_{a}\nu_{b}}^{x*}}{p_{\nu_{a}}p_{\nu_{b}}},\\ \hat{H}^{*}=&\ \omega_{a}\omega_{b}\omega^{\prime}\left(e^{\nu_{b}}\frac{n_{o_{a}\nu_{b}}^{x*}}{p_{o_{a}}p_{\nu_{b}}}+e^{\nu_{a}}\frac{n_{\nu_{a}o_{b}}^{x*}}{p_{\nu_{a}}p_{o_{b}}}-\frac{n_{o_{a}o_{b}}^{x*}}{p_{o_{a}}p_{o_{b}}}\right).\end{split} \tag{50}\]
By scanning \((\hat{H},\hat{M})\), we can obtain the secret key rate
\[\text{min} R \tag{51}\] \[\text{s.t.} \hat{\underline{H}}\leq\hat{H}\leq\overline{\hat{H}},\] (52) \[\hat{\underline{M}}\leq\hat{M}\leq\overline{\hat{M}}.\]
Because of the dead time of the detector, only one of the four Bell states can be identified. In the simulation, we set
\[\begin{split} n_{k_{a}k_{b}}^{z}=&\ N^{\prime}p_{k_{a}}p_{k_{b}}p_{d}(1-p_{d})^{2}e^{-\frac{k_{a}\eta_{a}+k_{b}\eta_{b}}{2}}\\ &\left\{I_{0}(\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}})-(1-p_{d})e^{-\frac{k_{a}\eta_{a}+k_{b}\eta_{b}}{2}}\right.\\ &\left.+\left[1-(1-p_{d})e^{-\frac{k_{a}\eta_{a}}{2}}\right]\left[1-(1-p_{d})e^{-\frac{k_{b}\eta_{b}}{2}}\right]\right\},\\ m_{k_{a}k_{b}}^{z}=&\ N^{\prime}p_{k_{a}}p_{k_{b}}p_{d}(1-p_{d})^{2}e^{-\frac{k_{a}\eta_{a}+k_{b}\eta_{b}}{2}}\\ &\left[I_{0}(\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}})-(1-p_{d})e^{-\frac{k_{a}\eta_{a}+k_{b}\eta_{b}}{2}}\right],\end{split} \tag{53}\]
and
\[\begin{split} n_{k_{a}k_{b}}^{x}=&\ N^{\prime}p_{k_{a}}p_{k_{b}}y_{k_{a}k_{b}}^{2}\left[1+2y_{k_{a}k_{b}}^{2}-4y_{k_{a}k_{b}}I_{0}\left(\frac{\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}}}{2}\right)+I_{0}(\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}})\right],\\ m_{k_{a}k_{b}}^{x}=&\ N^{\prime}p_{k_{a}}p_{k_{b}}y_{k_{a}k_{b}}^{2}\left\{1+y_{k_{a}k_{b}}^{2}-2y_{k_{a}k_{b}}I_{0}\left(\frac{\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}}}{2}\right)+E_{\text{HOM}}\left[I_{0}(\sqrt{k_{a}\eta_{a}k_{b}\eta_{b}})-1\right]\right\},\end{split} \tag{54}\]
where we have \(y_{k_{a}k_{b}}=(1-p_{d})e^{-\frac{k_{a}\eta_{a}+k_{b}\eta_{b}}{4}}\) and \(E_{\text{HOM}}=0.04\). Note that in time-bin MDI-QKD, two pulses form one bit, i.e., \(N^{\prime}=N/2\).
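For reference, the X-basis expectations of Eq. (54) translate directly into code. This is a sketch assuming the corrected brackets above; `scipy` provides the modified Bessel function \(I_{0}\).

```python
import numpy as np
from scipy.special import i0

def y_fn(ka, kb, eta_a, eta_b, p_d):
    """y_{k_a k_b} = (1 - p_d) exp(-(k_a eta_a + k_b eta_b) / 4)."""
    return (1.0 - p_d) * np.exp(-(ka * eta_a + kb * eta_b) / 4.0)

def x_basis_counts(N_prime, p_ka, p_kb, ka, kb, eta_a, eta_b, p_d, E_HOM=0.04):
    """Expected X-basis counts n^x and errors m^x of Eq. (54)."""
    y = y_fn(ka, kb, eta_a, eta_b, p_d)
    z = np.sqrt(ka * eta_a * kb * eta_b)
    n_x = N_prime * p_ka * p_kb * y**2 * (1 + 2 * y**2 - 4 * y * i0(z / 2) + i0(z))
    m_x = N_prime * p_ka * p_kb * y**2 * (1 + y**2 - 2 * y * i0(z / 2)
                                          + E_HOM * (i0(z) - 1))
    return n_x, m_x
```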
### Simulation formulas for four-intensity decoy-state QKD
The key rate of decoy-state QKD is [81; 87]
\[\begin{split} R=&\frac{1}{N}\left\{\underline{n}_{0}^{z} +\underline{n}_{1}^{z}\left[1-H_{2}\left(\overline{\phi}_{1}^{z}\right) \right]-\lambda_{\text{EC}}\right.\\ &\left.-6\log_{2}\frac{23}{\varepsilon_{\text{sec}}}-2\log_{2} \frac{2}{\varepsilon_{\text{cor}}}\right\},\end{split} \tag{55}\]
where \(\lambda_{\text{EC}}=(n_{\mu}^{z}+n_{\nu}^{z})fH_{2}\left(\frac{m_{\mu}^{z}+m_{\nu}^{z}}{n_{\mu}^{z}+n_{\nu}^{z}}\right)\), and \(n_{k}^{z(x)}\) and \(m_{k}^{z(x)}\) are the number and error number of detection events for intensity \(k\) (\(k\in\{\mu,\nu,\omega,o\}\)) measured in the \(\mathbf{Z}\) (\(\mathbf{X}\)) basis, respectively.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Asynchronous MDI-QKD & Decoy-state QKD & SNS-QKD \& PM-QKD & Four-phase TF-QKD \\ \hline Fiber loss & 0.16 dB/km & 0.16 dB/km & 0.16 dB/km & 0.16 dB/km \\ Charlie loss & — & 2 dB & — & — \\ Detector efficiency & 80\% & 80\% & 80\% & 80\% \\ Dark count rate & 0.1 Hz & 0.1 Hz & 0.1 Hz & 0.1 Hz \\ Spectral filtering loss & 0 dB & 0 dB & 2 dB at Bob-Charlie & 2 dB at Alice-Charlie \\ Duty cycle & 100 & 100 & 50 & 50 \\ Laser frequency difference & 10 Hz & — & — & — \\ Fiber phase drift rate & \(5.9\times 10^{3}\) rad/s & — & — & — \\ Number of phase slices & 16 & — & 16 & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: List of experimental parameters used in numerical simulations.
First, we extend the decoy-state analysis to finite-size cases. The numbers of vacuum events in the \(\mathbf{Z}\) and \(\mathbf{X}\) bases satisfy
\[\underline{n}_{0}^{z*}= \frac{p_{\mu}e^{-\mu}+p_{\nu}e^{-\nu}}{p_{o}}\underline{n}_{o}^{z*}, \tag{101}\]
and
\[\underline{n}_{0}^{x*}= \frac{p_{\omega}e^{-\omega}}{p_{o}}\underline{n}_{o}^{x*}, \tag{102}\]
respectively.
The numbers of single-photon events in the \(\mathbf{Z}\) and \(\mathbf{X}\) bases are
\[\underline{n}_{1}^{z*}= \frac{(p_{\mu}\mu e^{-\mu}+p_{\nu}\nu e^{-\nu})\mu}{\mu\nu-\nu^{2}}\left(\frac{e^{\nu}\underline{n}_{\nu}^{z*}}{p_{\nu}}-\frac{\nu^{2}}{\mu^{2}}\frac{e^{\mu}\overline{n}_{\mu}^{z*}}{p_{\mu}}-\frac{\mu^{2}-\nu^{2}}{\mu^{2}}\frac{\overline{n}_{o}^{z*}}{p_{o}}\right), \tag{103}\]
and
\[\underline{n}_{1}^{x*}= \frac{p_{\omega}\omega e^{-\omega}\mu}{\mu\nu-\nu^{2}}\left(\frac{e^{\nu}\underline{n}_{\nu}^{x*}}{p_{\nu}}-\frac{\nu^{2}}{\mu^{2}}\frac{e^{\mu}\overline{n}_{\mu}^{x*}}{p_{\mu}}-\frac{\mu^{2}-\nu^{2}}{\mu^{2}}\frac{\overline{n}_{o}^{x*}}{p_{o}}\right), \tag{104}\]
respectively. In addition, the number of bit errors \(\overline{t}_{1}^{x}\) associated with the single-photon events in the \(\mathbf{X}\) basis is also required. It is given by
\[\overline{t}_{1}^{x}=m_{\omega}^{x}-\underline{m}_{0}^{x}, \tag{105}\]
where \(\underline{m}_{0}^{x*}=\frac{p_{\omega}e^{-\omega}}{p_{o}}\underline{m}_{o}^{x*}\). Second, the formula for the phase error rate of the single-photon events in the \(\mathbf{Z}\) basis can be written as
\[\overline{\phi}_{1}^{z}= \frac{\overline{t}_{1}^{x}}{\underline{n}_{1}^{x}}+\gamma\left(\underline{n}_{1}^{z},\underline{n}_{1}^{x},\frac{\overline{t}_{1}^{x}}{\underline{n}_{1}^{x}},\varepsilon_{e}\right). \tag{106}\]
In the simulation, we set
\[\begin{split} n_{k}^{z}=&\frac{Np_{k}}{2}\left[1-(1- p_{d}^{z})^{2}e^{-kq_{q}\eta^{z}}\right]\left[1+(1-p_{d}^{x})^{2}e^{-kq_{q}\eta^{z}} \right],\\ m_{k}^{z}=&\frac{Np_{k}}{2}\left[1+(1-p_{d}^{x})^{2 }e^{-kq_{q}\eta^{z}}\right]\left\{(e_{0}-e_{m}^{z})\left[1-(1-p_{d}^{z})^{2} \right]e^{-kq_{q}\eta^{z}}+e_{m}^{z}\left[1-(1-p_{d}^{z})^{2}e^{-kq_{q}\eta^{z }}\right]\right\},\end{split} \tag{107}\]
and
\[\begin{split} n_{k}^{x}=&\frac{Np_{k}}{2}\left[1-(1- p_{d}^{z})^{2}e^{-kq_{q}\eta^{z}}\right]\left[1+(1-p_{d}^{z})^{2}e^{-kq_{q}\eta^{z }}\right],\\ m_{k}^{x}=&\frac{Np_{k}}{2}\left[1+(1-p_{d}^{z})^{2 }e^{-kq_{q}\eta^{z}}\right]\left\{(e_{0}-e_{m}^{x})\left[1-(1-p_{d}^{x})^{2} \right]e^{-kq_{q}\eta^{x}}+e_{m}^{x}\left[1-(1-p_{d}^{x})^{2}e^{-kq_{q}\eta^{z }}\right]\right\},\end{split} \tag{108}\]
where \(e_{0}=1/2\) is the error rate of the background noise, \(e_{m}^{z}=e_{m}^{x}=e_{m}\), \(p_{d}^{z}=p_{d}^{x}=p_{d}\), and \(\eta^{z}=\eta^{x}=\eta_{d}10^{-\frac{\alpha l+\eta_{\rm int}}{10}}\). The code of decoy-state QKD and decoy-state MDI-QKD has been uploaded to the open-source code website [84].
## Appendix E Statistical fluctuation analysis
In this Appendix, we introduce the statistical fluctuation analysis method [81] used in the simulation.
_Chernoff bound._ For a given expected value \(x^{*}\) and failure probability \(\varepsilon\), we can use the Chernoff bound to estimate the upper and lower bounds of the observed value
\[\overline{x}=\varphi^{U}(x^{*})=x^{*}+\frac{\beta}{2}+\sqrt{2\beta x^{*}+\frac {\beta^{2}}{4}}, \tag{109}\]
and
\[\underline{x}=\varphi^{L}(x^{*})=x^{*}-\sqrt{2\beta x^{*}}, \tag{110}\]
where \(\beta=\ln\varepsilon^{-1}\).
_Variant of Chernoff bound._ The variant of the Chernoff bound can help us estimate the expected value from the observed value. One can apply the following equations to obtain the upper and lower bounds of \(x^{*}\)
\[\overline{x}^{*}=x+\beta+\sqrt{2\beta x+\beta^{2}}, \tag{10}\]
and
\[\underline{x}^{*}=\max\left\{x-\frac{\beta}{2}-\sqrt{2\beta x+\frac{\beta^{2}} {4}},\ 0\right\}. \tag{11}\]
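Both directions of the bound are straightforward to code; the following Python helpers are a direct transcription of the four formulas above.

```python
import numpy as np

def chernoff_observed(x_star, eps):
    """Upper/lower bounds on the observed value given the expected value x*."""
    beta = np.log(1.0 / eps)
    upper = x_star + beta / 2 + np.sqrt(2 * beta * x_star + beta**2 / 4)
    lower = x_star - np.sqrt(2 * beta * x_star)
    return lower, upper

def chernoff_expected(x, eps):
    """Variant: upper/lower bounds on the expected value x* given an observation x."""
    beta = np.log(1.0 / eps)
    upper = x + beta + np.sqrt(2 * beta * x + beta**2)
    lower = max(x - beta / 2 - np.sqrt(2 * beta * x + beta**2 / 4), 0.0)
    return lower, upper

# e.g., chernoff_expected(1e6, 1e-10) brackets the expectation of a count of 10^6
```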
|
2307.16380 | Low-Dissipation Central-Upwind Schemes for Compressible Multifluids | We introduce second-order low-dissipation (LD) path-conservative
central-upwind (PCCU) schemes for the one- (1-D) and two-dimensional (2-D)
multifluid systems, whose components are assumed to be immiscible and separated
by material interfaces. The proposed LD PCCU schemes are derived within the
flux globalization based PCCU framework and they employ the LD central-upwind
(LDCU) numerical fluxes. These fluxes have been recently proposed in [{\sc A.
Kurganov and R. Xin}, J. Sci. Comput., 96 (2023), Paper No. 56] for the
single-fluid compressible Euler equations and we rigorously develop their
multifluid extensions. In order to achieve higher resolution near the material
interfaces, we track their locations and use an overcompressive SBM limiter in
their neighborhoods, while utilizing a dissipative generalized minmod limiter
in the rest of the computational domain. We first develop a second-order
finite-volume LD PCCU scheme and then extend it to the fifth order of accuracy
via the finite-difference alternative weighted essentially non-oscillatory
(A-WENO) framework. We apply the developed schemes to a number of 1-D and 2-D
numerical examples to demonstrate the performance of the new schemes. | Shaoshuai Chu, Alexander Kurganov, Ruixiao Xin | 2023-07-31T03:14:00Z | http://arxiv.org/abs/2307.16380v1 | # Low-Dissipation Central-Upwind Schemes for Compressible Multifluids
###### Abstract
We introduce second-order low-dissipation (LD) path-conservative central-upwind (PCCU) schemes for the one- (1-D) and two-dimensional (2-D) multifluid systems, whose components are assumed to be immiscible and separated by material interfaces. The proposed LD PCCU schemes are derived within the flux globalization based PCCU framework and they employ the LD central-upwind (LDCU) numerical fluxes. These fluxes have been recently proposed in [A. Kurganov and R. Xin, J. Sci. Comput., 96 (2023), Paper No. 56] for the single-fluid compressible Euler equations and we rigorously develop their multifluid extensions. In order to achieve higher resolution near the material interfaces, we track their locations and use an overcompressive SBM limiter in their neighborhoods, while utilizing a dissipative generalized minmod limiter in the rest of the computational domain. We first develop a second-order finite-volume LD PCCU scheme and then extend it to the fifth order of accuracy via the finite-difference alternative weighted essentially non-oscillatory (A-WENO) framework. We apply the developed schemes to a number of 1-D and 2-D numerical examples to demonstrate the performance of the new schemes.
**Key words:** Low-dissipation central-upwind schemes, path-conservative central-upwind schemes, flux globalization, affine-invariant WENO-Z interpolation, compressible multifluids.
**AMS subject classification:** 76M12, 65M08, 76M20, 65M20, 76N30.
## 1 Introduction
In this paper, we focus on the development of highly accurate and conservative finite-volume methods for compressible multifluids, which are assumed to be immiscible. The studied two-dimensional (2-D) multifluid system reads as
\[\begin{split}&\rho_{t}+(\rho u)_{x}+(\rho v)_{y}=0,\\ &(\rho u)_{t}+(\rho u^{2}+p)_{x}+(\rho uv)_{y}=0,\\ &(\rho v)_{t}+(\rho uv)_{x}+(\rho v^{2}+p)_{y}=0,\\ & E_{t}+[u(E+p)]_{x}+[v(E+p)]_{y}=0.\end{split} \tag{1.1}\]
Here, \(x\) and \(y\) are spatial variables, \(t\) is the time, \(\rho\) is the density, \(u\) and \(v\) are the \(x\)- and \(y\)-velocities, \(p\) is the pressure, and \(E\) is the total energy. The system (1.1) is closed through the following equation of state (EOS) for each of the fluid components:
\[p=(\gamma-1)\left[E-\frac{\rho}{2}(u^{2}+v^{2})\right]-\gamma\pi_{\infty}, \tag{1.2}\]
where the parameters \(\gamma\) and \(\pi_{\infty}\) represent the specific heat ratio and stiffness parameter, respectively. When \(\pi_{\infty}\equiv 0\), the system (1.1)-(1.2) reduces to the ideal gas multicomponent case.
The fluid components can be identified by the variable \(\phi\), such as the specific heat ratio \(\gamma\) (or a certain function of \(\gamma\)), the mass fraction of the fluid component in the fluid mixture, or a level-set function designed to track the interfaces between the fluid components; see, e.g., [2, 3, 10, 15, 33, 36, 50, 51] and references therein. The state variable \(\phi\) propagates with the fluid velocity and thus satisfies the following advection equation:
\[\phi_{t}+u\phi_{x}+v\phi_{y}=0. \tag{1.3}\]
The system (1.1)-(1.3) is a nonlinear hyperbolic system of PDEs and thus its solutions may develop complicated wave structures including shocks, rarefactions, and contact discontinuities. In the single-fluid regime, that is, when \(\gamma\equiv\text{Const}\) and \(\pi_{\infty}\equiv\text{Const}\), the system (1.1)-(1.3) reduces to the Euler equations of gas dynamics, which can be numerically solved by finite-volume (FV) methods; see, e.g., the monographs [20, 30, 45] and references therein. However, a straightforward application of single-fluid FV methods to the multifluid system (1.1)-(1.3) may generate spurious pressure and velocity oscillations, which typically originate near the material interface and then spread all over the computational domain; see, e.g., the review paper [2] and references therein.
In recent years, a variety of FV methods capable of capturing material interfaces in a non-oscillatory manner have been proposed. A fully conservative approach was first developed in [42], where the pressure and velocity remained constant across the material interface. This approach is robust but may suffer obvious drawbacks when strong shocks pass through the fluid interface. The quasi-conservative approach was first introduced in [1], where pressure and velocity non-disturbing condition at an isolated material interface was introduced to analyze and derive the spatial discretization. The resulting schemes reduced the numerical oscillations effectively with the help of a quasi-conservative discretization. There are also many locally nonconservative approaches designed to prevent pressure/velocity oscillations by sacrificing the conservation property near material interfaces. The conservation errors in these approaches are typically small and decay after the mesh is refined. The pressure-based hybrid algorithms [9, 24] are obtained by switching from the conservative energy equation to the nonconservative pressure one near the interfaces. The ghost-cell methods based on the single-fluid interpolations leading to two different single-fluid numerical fluxes at the material interfaces (placed at the cell interfaces at each time step) were introduced in [3, 15]. The interface tracking method [10] is based on the interpolation between the single-fluid data from both sides of the interface and ignoring the "mixed" cell data. We note that both the ghost fluid and interface tracking approaches are very robust in the 1-D case, but their multidimensional extensions are rather cumbersome. For several high-order WENO schemes for compressible multifluids, we refer the reader to [13, 19, 22, 38, 39].
In this paper, our objective is to develop highly accurate and non-oscillatory numerical schemes for the so-called \(\gamma\)-based multifluid systems studied in [8, 43], which in the 2-D case read as (1.1)-(1.2) together with the equations (1.3) for the state variables \(\Gamma:=1/(\gamma-1)\) and \(\Pi:=\gamma\pi_{\infty}/(\gamma-1)\)
which we recast as follows:
\[\Gamma_{t}+(u\Gamma)_{x}+(v\Gamma)_{y}=\Gamma(u_{x}+v_{y}),\quad\Pi_{t}+(u\Pi)_{x }+(v\Pi)_{y}=\Pi(u_{x}+v_{y}). \tag{1.4}\]
The resulting system is nonconservative (in fact, it can be rewritten in the conservative form, but as it was shown in [43], a nonconservative form is preferable for designing an accurate numerical method) and the nonconservative terms on the right-hand side require a special treatment.
We numerically solve the system (1.1)-(1.2), (1.4) and its one-dimensional (1-D) version by the Riemann-problem-solver-free central-upwind (CU) schemes, which were originally introduced in [25, 27, 28] for general multidimensional hyperbolic systems of conservative laws, and then extended to nonconservative hyperbolic systems in [6], where path-conservative CU (PCCU) schemes were introduced. The PCCU schemes were extended to the flux globalization framework allowing to treat a wider variety of nonconservative systems in [4, 5, 7, 26].
The aforementioned PCCU schemes are based on the CU numerical fluxes from [25, 27], which have relatively large numerical dissipation preventing high resolution of contact waves/material interfaces. In the recent work [29], we have introduced a new way of reducing the numerical dissipation present in the CU schemes and introduced the low-dissipation CU (LDCU) schemes. In these schemes, the dissipation is reduced at the projection step, performed after the numerical solution is evolved to the new time level. The novel projection is based on a subcell resolution technique, which introduces several degrees of freedom that can be utilized to better approximate contact waves and material interfaces. In [29], we have designed the LDCU schemes for both the 1-D and 2-D single-fluid compressible Euler equations and in this paper, we extend the LDCU schemes to the \(\gamma\)-based multifluid models. The extension is carried out in the flux globalization PCCU framework and results in new flux globalization based LD PCCU schemes.
We also extend the proposed LD PCCU schemes to the fifth order of accuracy using the framework of the finite-difference alternative WENO (A-WENO) schemes developed in [11, 21, 34, 35, 47, 48, 49]. Our new fifth-order schemes are based on the LD PCCU numerical fluxes, a new, more efficient way to approximate the high-order A-WENO correction terms (see [12]), and the recently proposed fifth-order affine-invariant WENO-Z (Ai-WENO-Z) interpolation [14, 31, 46] applied to the local characteristic variables with the help of the local characteristic decomposition (LCD).
This paper is organized as follows. §2 is devoted to the 1-D LD PCCU scheme. In §2.1, we give an overview of the flux globalization based PCCU schemes and develop such a scheme for the \(\gamma\)-based multifluid model. The LD PCCU scheme is derived in §2.2, where we prove that the new scheme preserves constant velocity and pressure across isolated material interfaces (this ensures the absence of pressure/velocity-based oscillations). In §2.3, the second-order LD PCCU scheme is extended to the fifth-order flux globalization based LD Ai-WENO PCCU scheme. In §3, we present the 2-D extensions of the new LD PCCU schemes. In §4, we test the proposed LD PCCU schemes together with their fifth-order versions on a number of 1-D and 2-D numerical examples. Finally, in §5, we give some concluding remarks and comments.
## 2 One-Dimensional Algorithms
In this section, we present the new 1-D flux globalization based LD PCCU schemes for the 1-D compressible multifluids.
### Flux Globalization Based Path-Conservative Central-Upwind Schemes
We begin with a brief overview of the flux globalization based PCCU scheme, which was introduced in [26] for the general nonconservative system
\[\mathbf{U}_{t}+\mathbf{F}(\mathbf{U})_{x}=B(\mathbf{U})\mathbf{U}_{x},\]
which can be rewritten in the following quasi-conservative form:
\[\mathbf{U}_{t}+\mathbf{K}(\mathbf{U})_{x}=0,\quad\mathbf{K}(\mathbf{U})=\mathbf{F}(\mathbf{U})-\mathbf{R}(\mathbf{U}), \tag{2.1}\]
where \(\mathbf{U}(x,t)\in\mathbb{R}^{d}\) is the vector of unknowns, \(\mathbf{F}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is a flux, \(B\in\mathbb{R}^{d\times d}\),
\[\mathbf{R}(\mathbf{U}):=\int\limits_{\widehat{x}}^{x}B(\mathbf{U})\mathbf{U}_{\xi}(\xi,t)\,\mathrm{d}\xi,\]
and \(\widehat{x}\) is an arbitrary number.
We first introduce a uniform mesh consisting of the finite-volume cells \(C_{j}:=[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]\) of size \(x_{j+\frac{1}{2}}-x_{j-\frac{1}{2}}\equiv\Delta x\) centered at \(x_{j}=(x_{j-\frac{1}{2}}+x_{j+\frac{1}{2}})/2\), \(j=1,\ldots,N\) and set \(\widehat{x}=x_{\frac{1}{2}}\). We assume that at a certain level \(t\), an approximate solution, realized in terms of the cell averages
\[\overline{\mathbf{U}}_{j}(t):\approx\frac{1}{\Delta x}\int\limits_{C_{j}}\mathbf{U}(x,t)\,\mathrm{d}x,\]
is available. The cell averages \(\overline{\mathbf{U}}_{j}\) are then evolved in time by solving the following system of ODEs (see [26]):
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathbf{U}}_{j}=-\frac{\mathbf{\mathcal{K}}_{ j+\frac{1}{2}}-\mathbf{\mathcal{K}}_{j-\frac{1}{2}}}{\Delta x}, \tag{2.2}\]
where \(\mathbf{\mathcal{K}}_{j+\frac{1}{2}}\) are the CU numerical fluxes
\[\mathbf{\mathcal{K}}_{j+\frac{1}{2}}=\frac{a_{j+\frac{1}{2}}^{+}\mathbf{K}_{j+\frac{1 }{2}}^{-}-a_{j+\frac{1}{2}}^{-}\mathbf{K}_{j+\frac{1}{2}}^{+}}{a_{j+\frac{1}{2}}^ {+}-a_{j+\frac{1}{2}}^{-}}+\frac{a_{j+\frac{1}{2}}^{+}a_{j+\frac{1}{2}}^{-}}{ a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}}\left(\mathbf{U}_{j+\frac{1}{2}}^{+}-\mathbf{ U}_{j+\frac{1}{2}}^{-}\right). \tag{2.3}\]
Notice that all of the indexed quantities in (2.2) and (2.3) as well as other indexed quantities introduced below depend on \(t\), but from now on we will omit this dependence for the sake of brevity. In (2.3), \(\mathbf{U}_{j+\frac{1}{2}}^{\pm}\) are the left/right-sided point values of \(\mathbf{U}\) at the cell interfaces \(x_{j+\frac{1}{2}}\), which are computed using a piecewise linear reconstruction, which will be discussed in SS2.1.1, and \(a_{j+\frac{1}{2}}^{\pm}\) are the one-sided local speeds of propagation, which can be estimated using the largest and smallest eigenvalues of the matrix \(\frac{\partial\mathbf{F}}{\partial\mathbf{U}}(\mathbf{U})-B(\mathbf{U})\). The global fluxes \(\mathbf{K}_{j+\frac{1}{2}}^{\pm}\) are obtained using the relation in (2.1), namely, by
\[\mathbf{K}_{j+\frac{1}{2}}^{\pm}=\mathbf{F}_{j+\frac{1}{2}}^{\pm}-\mathbf{R}_{j+\frac{1}{2 }}^{\pm}, \tag{2.4}\]
where \(\mathbf{F}_{j+\frac{1}{2}}^{\pm}:=\mathbf{F}(\mathbf{U}_{j+\frac{1}{2}}^{\pm})\) and the point values of the global variable \(\mathbf{R}\) are computed as follows. First, we set \(\mathbf{R}_{\frac{1}{2}}^{-}:=\mathbf{0}\) and then evaluate
\[\mathbf{R}_{\frac{1}{2}}^{+}=\mathbf{B}_{\mathbf{\Psi},\frac{1}{2}}, \tag{2.5}\]
and recursively
\[\mathbf{R}^{-}_{j+\frac{1}{2}}=\mathbf{R}^{+}_{j-\frac{1}{2}}+\mathbf{B}_{j},\quad\mathbf{R}^{+}_{ j+\frac{1}{2}}=\mathbf{R}^{-}_{j+\frac{1}{2}}+\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}, \quad j=1,\ldots,N. \tag{2.6}\]
In (2.5) and (2.6), \(\mathbf{B}_{j}\) and \(\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}\) are obtained using a proper quadrature for the integrals in
\[\mathbf{B}_{j}\approx\int\limits_{C_{j}}B(\mathbf{U})\mathbf{U}_{x}\,\mathrm{d}x\quad \text{and}\quad\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}=\int\limits_{0}^{1}B\big{(}\bm {\Psi}_{j+\frac{1}{2}}(s)\big{)}\mathbf{\Psi}^{\prime}_{j+\frac{1}{2}}(s)\,\mathrm{ d}s, \tag{2.7}\]
where, \(\mathbf{\Psi}_{j+\frac{1}{2}}(s):=\mathbf{\Psi}\big{(}s;\mathbf{U}^{-}_{j+\frac{1}{2}},\bm {U}^{+}_{j+\frac{1}{2}}\big{)}\) is a sufficiently smooth path connecting the states \(\mathbf{U}^{-}_{j+\frac{1}{2}}\) and \(\mathbf{U}^{+}_{j+\frac{1}{2}}\), that is,
\[\mathbf{\Psi}:[0,1]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d},\quad \mathbf{\Psi}(0;\mathbf{U}^{-}_{j+\frac{1}{2}},\mathbf{U}^{+}_{j+\frac{1}{2}})=\mathbf{U}^{-}_ {j+\frac{1}{2}},\quad\mathbf{\Psi}(1;\mathbf{U}^{-}_{j+\frac{1}{2}},\mathbf{U}^{+}_{j+ \frac{1}{2}})=\mathbf{U}^{+}_{j+\frac{1}{2}}.\]
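In code, the recursion (2.5)-(2.6) is a single left-to-right sweep over the cell interfaces. The following Python sketch (our illustration, not an implementation from [26]) assembles the global fluxes \(\mathbf{K}^{\pm}_{j+\frac{1}{2}}\) and the CU flux (2.3) from precomputed one-sided fluxes, point values, local speeds, and the terms \(\mathbf{B}_{j}\) and \(\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}\); the array layout and the zero-denominator guard are our assumptions.

```python
import numpy as np

def global_fluxes(F_minus, F_plus, B_cell, B_psi):
    """K^+-_{j+1/2} = F^+- - R^+- via the recursion (2.5)-(2.6).

    F_minus, F_plus: (N+1, d) one-sided fluxes at x_{1/2}, ..., x_{N+1/2};
    B_cell: (N, d) in-cell integrals B_j;  B_psi: (N+1, d) path terms.
    """
    R_minus = np.zeros_like(F_minus)
    R_plus = np.zeros_like(F_plus)
    R_plus[0] = B_psi[0]                            # R^-_{1/2} = 0, R^+_{1/2} = B_{Psi,1/2}
    for j in range(1, F_minus.shape[0]):
        R_minus[j] = R_plus[j - 1] + B_cell[j - 1]  # R^-_{j+1/2} = R^+_{j-1/2} + B_j
        R_plus[j] = R_minus[j] + B_psi[j]           # R^+_{j+1/2} = R^-_{j+1/2} + B_{Psi,j+1/2}
    return F_minus - R_minus, F_plus - R_plus

def cu_flux(K_minus, K_plus, U_minus, U_plus, a_minus, a_plus):
    """Central-upwind flux (2.3) at each interface."""
    den = np.maximum(a_plus - a_minus, 1e-14)[:, None]   # guard against a^+ = a^- = 0
    am, ap = a_minus[:, None], a_plus[:, None]
    return (ap * K_minus - am * K_plus + ap * am * (U_plus - U_minus)) / den
```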
#### 2.1.1 Application to the Compressible Multifluid System
We apply the flux globalization based PCCU scheme to the 1-D \(\gamma\)-based multifluid system
\[\begin{split}&\rho_{t}+(\rho u)_{x}=0,\\ &(\rho u)_{t}+(\rho u^{2}+p)_{x}=0,\\ & E_{t}+[u(E+p)]_{x}=0,\\ &\Gamma_{t}+(u\Gamma)_{x}=\Gamma u_{x},\\ &\Pi_{t}+(u\Pi)_{x}=\Pi u_{x},\end{split} \tag{2.8}\]
completed with the following EOS:
\[p=(\gamma-1)\left[E-\frac{\rho}{2}u^{2}\right]-\gamma\pi_{\infty}. \tag{2.9}\]
The system (2.8) can be rewritten in the equivalent quasi-conservative form (2.1) with
\[\begin{split}&\mathbf{U}:=(\rho,\rho u,E,\Gamma,\Pi)^{\top},\quad \mathbf{F}(\mathbf{U})=(\rho u,\rho u^{2}+p,u(E+p),u\Gamma,u\Pi)^{\top},\\ &\text{and}\quad\mathbf{R}(\mathbf{U})=\bigg{(}0,0,0,\int\limits_{\widehat {x}}^{x}\Gamma u_{\xi}\,\mathrm{d}\xi,\int\limits_{\widehat{x}}^{x}\Pi u_{\xi }\,\mathrm{d}\xi\bigg{)}^{\top}.\end{split} \tag{2.10}\]
We first discuss the reconstruction procedure for recovering the point values \(\mathbf{U}^{\pm}_{j+\frac{1}{2}}\) out of the cell averages \(\overline{\mathbf{U}}_{j}\). Since the variables \(u\) and \(p\) are continuous across material interfaces (contact waves), we reconstruct the primitive variables \(\mathbf{V}:=(\rho,u,p,\Gamma,\Pi)^{\top}\) instead of the conservative ones. To this end, we compute the cell centered values of \(u\) and \(p\),
\[u_{j}=\frac{(\overline{\rho u})_{j}}{\overline{\rho}_{j}},\quad p_{j}=\frac{1} {\overline{\Gamma}_{j}}\left[\overline{E}_{j}-\frac{((\overline{\rho u})_{j} )^{2}}{2\overline{\rho}_{j}}-\overline{\Pi}_{j}\right], \tag{2.11}\]
and then construct the linear pieces
\[\widetilde{\mathbf{V}}_{j}(x)=\mathbf{V}_{j}+(\mathbf{V}_{x})_{j}(x-x_{j}),\quad x\in C_{j}, \tag{2.12}\]
where \(\mathbf{V}_{j}:=(\overline{\rho}_{j},u_{j},p_{j},\overline{\Gamma}_{j},\overline{\Pi}_{j})^{\top}\) and the slopes \((\mathbf{V}_{x})_{j}\) are to be computed with the help of a nonlinear limiter to ensure a non-oscillatory nature of (2.12). In the numerical experiments reported in §4, we implement a simple adaptive limiting strategy and use different limiters near and away from the material interfaces. To this end, we need to detect the location of the interfaces. In the two-fluid case, this can be done as follows. We first introduce \(\widehat{\Gamma}:=(\Gamma_{\rm I}+\Gamma_{\rm II})/2\), where \(\Gamma_{\rm I}\) and \(\Gamma_{\rm II}\) are the values of \(\Gamma\) for the first and second fluids, respectively. We then assume that the interface is located either in cell \(C_{j}\) or \(C_{j+1}\) if
\[(\overline{\Gamma}_{j}-\widehat{\Gamma})(\overline{\Gamma}_{j+1}-\widehat{ \Gamma})<0. \tag{2.13}\]
In these two cells as well as in the neighboring cells \(C_{j-1}\) and \(C_{j+2}\), we use the overcompressive SBM limiter [32]:
\[(\mathbf{V}_{x})_{j}=\phi_{\theta,\tau}^{\mathrm{SBM}}\left(\frac{\overline{\mathbf{ V}}_{j+1}-\overline{\mathbf{V}}_{j}}{\overline{\mathbf{V}}_{j}-\overline{\mathbf{V}}_{j-1}} \right)\frac{\overline{\mathbf{V}}_{j+1}-\overline{\mathbf{V}}_{j}}{\Delta x}, \tag{2.14}\]
where the two-parameter function
\[\phi_{\theta,\tau}^{\mathrm{SBM}}(r):=\begin{cases}0&\text{if }r<0,\\ \min\{r\theta,1+\tau(r-1)\}&\text{if }0<r\leq 1,\\ r\phi_{\theta,\tau}^{\mathrm{SBM}}\Big{(}\frac{1}{r}\Big{)}&\text{otherwise}, \end{cases} \tag{2.15}\]
is applied in a componentwise manner with \(\tau=-0.5\), which belongs to the overcompressive range of values of \(\tau\); see [32]. Away from the material interfaces, we use a dissipative generalized minmod limiter which is given by the same formulae (2.14)-(2.15), but with \(\tau=0.5\). In both areas, we use \(\theta=1.3\).
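The limiter (2.14)-(2.15) is applied componentwise, and switching between the overcompressive and dissipative regimes is a single parameter change (\(\tau=-0.5\) versus \(\tau=0.5\)). A minimal Python sketch of this adaptive strategy (function names are ours):

```python
import numpy as np

def phi_sbm(r, theta=1.3, tau=-0.5):
    """Two-parameter SBM limiter function (2.15), vectorized in r."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    pos = r > 0
    rp = r[pos]
    out[pos] = np.where(rp <= 1.0,
                        np.minimum(rp * theta, 1.0 + tau * (rp - 1.0)),
                        rp * np.minimum(theta / rp, 1.0 + tau * (1.0 / rp - 1.0)))
    return out

def sbm_slopes(V, dx, theta=1.3, tau=-0.5):
    """Slopes (2.14) in the interior cells of a 1-D array of cell values V;
    a vanishing backward difference is treated as zero slope for simplicity."""
    fwd = V[2:] - V[1:-1]
    bwd = V[1:-1] - V[:-2]
    safe = np.where(bwd == 0.0, 1.0, bwd)
    r = np.where(bwd != 0.0, fwd / safe, 0.0)
    return phi_sbm(r, theta, tau) * fwd / dx
```

Near the interface cells detected by (2.13) one would call `sbm_slopes(V, dx, tau=-0.5)`, and `sbm_slopes(V, dx, tau=0.5)` elsewhere.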
**Remark 2.1**: _Notice that the generalized minmod limiter can be written in a simpler form [32, 37, 44] by_
\[(\mathbf{V}_{x})_{j}=\mathrm{minmod}\left(\theta\,\frac{\overline{\mathbf{V}}_{j+1}-\overline{\mathbf{V}}_{j}}{\Delta x},\,\frac{\overline{\mathbf{V}}_{j+1}-\overline{\mathbf{V}}_{j-1}}{2\Delta x},\,\theta\,\frac{\overline{\mathbf{V}}_{j}-\overline{\mathbf{V}}_{j-1}}{\Delta x}\right),\]
_with the minmod function defined by_
\[\mathrm{minmod}(c_{1},c_{2},\ldots)=\begin{cases}\min(c_{1},c_{2},\ldots)& \text{if }c_{i}>0,\,\,\forall i,\\ \max(c_{1},c_{2},\ldots)&\text{if }c_{i}<0,\,\,\forall i,\\ 0&\text{otherwise}.\end{cases}\]
**Remark 2.2**: _If the number of fluid components is more than two, detecting interface cells becomes a more complicated task. A reasonable extension of the strategy used in (2.13) should be developed for the problem at hand._
Equipped with \((\mathbf{V}_{x})_{j}\), we then use (2.12) to obtain
\[\mathbf{V}_{j+\frac{1}{2}}^{-}=\lim_{x\to x^{-}_{j+\frac{1}{2}}}\widetilde{\mathbf{V} }(x)=\,\overline{\mathbf{V}}_{j}+\frac{\Delta x}{2}(\mathbf{V}_{x})_{j},\quad\mathbf{V}_{j +\frac{1}{2}}^{+}=\lim_{x\to x^{+}_{j+\frac{1}{2}}}\widetilde{\mathbf{V}}(x)=\, \overline{\mathbf{V}}_{j+1}-\frac{\Delta x}{2}(\mathbf{V}_{x})_{j+1}, \tag{2.16}\]
and the corresponding point values of the conservative variables \(\mathbf{U}\):
\[\mathbf{U}_{j+\frac{1}{2}}^{\pm}=\left(\rho_{j+\frac{1}{2}}^{\pm},\rho_{j+\frac{1 }{2}}^{\pm}u_{j+\frac{1}{2}}^{\pm},E_{j+\frac{1}{2}}^{\pm},\Gamma_{j+\frac{1}{2 }}^{\pm},\Pi_{j+\frac{1}{2}}^{\pm}\right)^{\top},\quad E_{j+\frac{1}{2}}^{\pm}= \Gamma_{j+\frac{1}{2}}^{\pm}p_{j+\frac{1}{2}}^{\pm}+\frac{\rho_{j+\frac{1}{2} }^{\pm}}{2}\big{(}u_{j+\frac{1}{2}}^{\pm}\big{)}^{2}+\Pi_{j+\frac{1}{2}}^{\pm}. \tag{2.17}\]
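The conversions between conservative and primitive variables used in (2.11) and (2.17) reduce to the EOS (2.9); a minimal sketch:

```python
def cons_to_prim(rho, rho_u, E, Gamma, Pi):
    """(rho, u, p, Gamma, Pi) from conservative variables, cf. (2.11)."""
    u = rho_u / rho
    p = (E - 0.5 * rho_u * u - Pi) / Gamma
    return rho, u, p, Gamma, Pi

def prim_to_cons(rho, u, p, Gamma, Pi):
    """Conservative variables; the total energy follows (2.17)."""
    E = Gamma * p + 0.5 * rho * u**2 + Pi
    return rho, rho * u, E, Gamma, Pi
```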
We then compute the point values \(\mathbf{K}_{j+\frac{1}{2}}^{\pm}\) following (2.4)-(2.6). First, we use (2.10) and (2.17) to obtain
\[\mathbf{F}_{j+\frac{1}{2}}^{\pm}=\Big{(}\rho_{j+\frac{1}{2}}^{\pm}u_{j+\frac{1}{2}}^ {\pm},\rho_{j+\frac{1}{2}}^{\pm}\big{(}u_{j+\frac{1}{2}}^{\pm}\big{)}^{2}+p_{j+ \frac{1}{2}}^{\pm},u_{j+\frac{1}{2}}^{\pm}\big{(}E_{j+\frac{1}{2}}^{\pm}+p_{j+ \frac{1}{2}}^{\pm}\big{)},\Gamma_{j+\frac{1}{2}}^{\pm}u_{j+\frac{1}{2}}^{\pm}, \Pi_{j+\frac{1}{2}}^{\pm}u_{j+\frac{1}{2}}^{\pm}\Big{)}^{\top}\,, \tag{2.18}\]
and then evaluate \(\mathbf{B}_{j}\) in (2.7) by substituting there the piecewise linear reconstructions (2.12) of \(u\), \(\Gamma\), and \(\Pi\), which results in
\[\mathbf{B}_{j}=\bigg{(}0,0,0,\frac{\Gamma_{j+\frac{1}{2}}^{-}+\Gamma_{j-\frac{1}{ 2}}^{+}}{2}\big{(}u_{j+\frac{1}{2}}^{-}-u_{j-\frac{1}{2}}^{+}\big{)},\frac{ \Pi_{j+\frac{1}{2}}^{-}+\Pi_{j-\frac{1}{2}}^{+}}{2}\big{(}u_{j+\frac{1}{2}}^{ -}-u_{j-\frac{1}{2}}^{+}\big{)}\bigg{)}^{\top}. \tag{2.19}\]
Next, in order to obtain \(\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}\) in (2.7), a proper path connecting the states \((u_{j+\frac{1}{2}}^{-},\Gamma_{j+\frac{1}{2}}^{-},\Pi_{j+\frac{1}{2}}^{-})\) and \((u_{j+\frac{1}{2}}^{+},\Gamma_{j+\frac{1}{2}}^{+},\Pi_{j+\frac{1}{2}}^{+})\) needs to be used, for instance, a simple linear path:
\[\begin{split}\mathbf{\Psi}_{j+\frac{1}{2}}^{u}(s)&=u_{j+ \frac{1}{2}}^{-}+s\big{(}u_{j+\frac{1}{2}}^{+}-u_{j+\frac{1}{2}}^{-}\big{)}, \ \ \mathbf{\Psi}_{j+\frac{1}{2}}^{\Gamma}(s)=\Gamma_{j+\frac{1}{2}}^{-}+s\big{(} \Gamma_{j+\frac{1}{2}}^{+}-\Gamma_{j+\frac{1}{2}}^{-}\big{)},\\ \mathbf{\Psi}_{j+\frac{1}{2}}^{\Pi}(s)&=\Pi_{j+\frac{1}{ 2}}^{-}+s\big{(}\Pi_{j+\frac{1}{2}}^{+}-\Pi_{j+\frac{1}{2}}^{-}\big{)}.\end{split} \tag{2.20}\]
Substituting (2.20) into (2.7) then results in
\[\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2}}=\bigg{(}0,0,0,\frac{\Gamma_{j+\frac{1}{2}}^{ +}+\Gamma_{j+\frac{1}{2}}^{-}}{2}\big{(}u_{j+\frac{1}{2}}^{+}-u_{j+\frac{1}{2} }^{-}\big{)},\frac{\Pi_{j+\frac{1}{2}}^{+}+\Pi_{j+\frac{1}{2}}^{-}}{2}\big{(}u _{j+\frac{1}{2}}^{+}-u_{j+\frac{1}{2}}^{-}\big{)}\bigg{)}^{\top}. \tag{2.21}\]
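To see how (2.21) arises, note that for the \(\Gamma\)-component the linear path (2.20) gives

\[\int\limits_{0}^{1}\mathbf{\Psi}_{j+\frac{1}{2}}^{\Gamma}(s)\big{(}\mathbf{\Psi}_{j+\frac{1}{2}}^{u}\big{)}^{\prime}(s)\,\mathrm{d}s=\big{(}u_{j+\frac{1}{2}}^{+}-u_{j+\frac{1}{2}}^{-}\big{)}\int\limits_{0}^{1}\Big{[}\Gamma_{j+\frac{1}{2}}^{-}+s\big{(}\Gamma_{j+\frac{1}{2}}^{+}-\Gamma_{j+\frac{1}{2}}^{-}\big{)}\Big{]}\,\mathrm{d}s=\frac{\Gamma_{j+\frac{1}{2}}^{-}+\Gamma_{j+\frac{1}{2}}^{+}}{2}\big{(}u_{j+\frac{1}{2}}^{+}-u_{j+\frac{1}{2}}^{-}\big{)},\]

and the \(\Pi\)-component is computed in exactly the same way.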
Finally, the one-sided local-speeds of propagation \(a_{j+\frac{1}{2}}^{\pm}\) can be estimated by
\[a_{j+\frac{1}{2}}^{+}=\max\left\{u_{j+\frac{1}{2}}^{-}+c_{j+\frac{1}{2}}^{-},u _{j+\frac{1}{2}}^{+}+c_{j+\frac{1}{2}}^{+},0\right\},\quad a_{j+\frac{1}{2}}^{ -}=\min\left\{u_{j+\frac{1}{2}}^{-}-c_{j+\frac{1}{2}}^{-},u_{j+\frac{1}{2}}^{+ }-c_{j+\frac{1}{2}}^{+},0\right\},\]
where \(c:=\sqrt{\left[(1+\Gamma)p+\Pi\right]/(\Gamma\rho)}\).
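These formulas translate directly into code (a sketch with our function names):

```python
import numpy as np

def sound_speed(rho, p, Gamma, Pi):
    """c = sqrt(((1 + Gamma) p + Pi) / (Gamma rho))."""
    return np.sqrt(((1.0 + Gamma) * p + Pi) / (Gamma * rho))

def local_speeds(u_m, c_m, u_p, c_p):
    """One-sided local speeds a^-_{j+1/2} <= 0 <= a^+_{j+1/2}."""
    zero = np.zeros_like(u_m)
    a_plus = np.maximum.reduce([u_m + c_m, u_p + c_p, zero])
    a_minus = np.minimum.reduce([u_m - c_m, u_p - c_p, zero])
    return a_minus, a_plus
```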
**Remark 2.3**: _The numerical fluxes (2.3) are slightly different from those present in [26] as one of the goals in [26] was to make the resulting scheme well-balanced and thus different values of \(\mathbf{U}_{j+\frac{1}{2}}^{\pm}\) were used in (2.3) and in the computation of \(\mathbf{K}_{j+\frac{1}{2}}^{\pm}\)._
### Flux Globalization Based Low-Dissipation PCCU Schemes
In this section, we derive a modified, LD version of the flux globalization based PCCU scheme presented in §2.1. We follow the idea used in [29], where the LDCU scheme for the single-fluid compressible Euler equations has been introduced. In order to extend the LDCU scheme to the multifluid case, we now go through all of the derivation steps and begin with the development of the fully discrete LD PCCU scheme.
#### 2.2.1 Fully Discrete Scheme
We assume that the computed cell averages \(\mathbf{\overline{U}}_{j}^{n}:\approx\frac{1}{\Delta x}\int_{C_{j}}\mathbf{U}(x,t^{n}) \,\mathrm{d}x\) are available at a certain time level \(t=t^{n}\) and use them to reconstruct a second-order piecewise linear interpolant consisting
of the linear pieces \(\overline{\mathbf{U}}_{j}^{\,n}+(\mathbf{U}_{x})_{j}^{n}(x-x_{j})\), \(x\in C_{j}\), where the slopes \((\mathbf{U}_{x})_{j}^{n}\) are obtained using a certain nonlinear limiter. We then estimate the local speeds of propagation \(a_{j+\frac{1}{2}}^{\pm}\), introduce the corresponding points \(x_{j+\frac{1}{2},\ell}^{n}:=x_{j+\frac{1}{2}}+a_{j+\frac{1}{2}}^{-}\Delta t^{n}\) and \(x_{j+\frac{1}{2},r}^{n}:=x_{j+\frac{1}{2}}+a_{j+\frac{1}{2}}^{+}\Delta t^{n}\), and integrate the system (2.1) over the space-time control volumes, which consist of the "smooth", \([x_{j-\frac{1}{2},r},x_{j+\frac{1}{2},\ell}]\times[t^{n},t^{n+1}]\), and "nonsmooth", \([x_{j+\frac{1}{2},\ell},x_{j+\frac{1}{2},r}]\times[t^{n},t^{n+1}]\), ones, where \(t^{n+1}:=t^{n}+\Delta t^{n}\). This way the solution is evolved in time and upon the completion of the evolution step, we obtain the intermediate cell averages
\[\begin{split}\overline{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int}}= \frac{1}{a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}}\Big{\{}& \mathbf{U}_{j+\frac{1}{2},r}^{n}a_{j+\frac{1}{2}}^{+}-\frac{(\mathbf{U}_{x})_{j+1}^{n} }{2}\big{(}a_{j+\frac{1}{2}}^{+}\big{)}^{2}\Delta t^{n}-\mathbf{U}_{j+\frac{1}{2}, \ell}^{n}a_{j+\frac{1}{2}}^{-}+\frac{(\mathbf{U}_{x})_{j}^{n}}{2}\big{(}a_{j+ \frac{1}{2}}^{-}\big{)}^{2}\Delta t^{n}\\ &-\Big{[}\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},r}^{n+\frac{1}{2}} \big{)}-\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},\ell}^{n+\frac{1}{2}}\big{)}\Big{]} \,\Big{\}}\end{split} \tag{2.22}\]
and
\[\begin{split}\overline{\mathbf{U}}_{j}^{\,\text{int}}=& \ \overline{\mathbf{U}}_{j}^{\,n}+\frac{(\mathbf{U}_{x})_{j}^{n}}{2}\big{(}a_{j-\frac{1}{2} }^{+}+a_{j+\frac{1}{2}}^{-}\big{)}\Delta t^{n}\\ &-\frac{\Delta t^{n}}{\Delta x-\big{(}a_{j-\frac{1}{2}}^{+}-a_{j +\frac{1}{2}}^{-}\big{)}\Delta t^{n}}\left[\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2}, \ell}^{n+\frac{1}{2}}\big{)}-\mathbf{K}\big{(}\mathbf{U}_{j-\frac{1}{2},r}^{n+\frac{1} {2}}\big{)}\right];\end{split} \tag{2.23}\]
see [29] for details. In (2.22)-(2.23), the point values of \(\mathbf{U}\) at \(\big{(}x_{j+\frac{1}{2},\ell}^{n},t^{n}\big{)}\) and \(\big{(}x_{j+\frac{1}{2},r}^{n},t^{n}\big{)}\) are computed using the piecewise linear reconstruction of \(\mathbf{U}\), namely,
\[\begin{split}\mathbf{U}_{j+\frac{1}{2},\ell}^{n}:=&\ \widetilde{\mathbf{U}}(x_{j+\frac{1}{2},\ell}^{n},t^{n})=\,\overline{\mathbf{U}}_{j}^{\,n}+(\mathbf{U}_{x})_{j}^{n}\Big{(}\frac{\Delta x}{2}+a_{j+\frac{1}{2}}^{-}\Delta t^{n}\Big{)},\\ \mathbf{U}_{j+\frac{1}{2},r}^{n}:=&\ \widetilde{\mathbf{U}}(x_{j+\frac{1}{2},r}^{n},t^{n})=\,\overline{\mathbf{U}}_{j+1}^{\,n}-(\mathbf{U}_{x})_{j+1}^{n}\Big{(}\frac{\Delta x}{2}-a_{j+\frac{1}{2}}^{+}\Delta t^{n}\Big{)},\end{split}\]
and the point values of \(\mathbf{U}\) at \(\big{(}x_{j+\frac{1}{2},\ell}^{n},t^{n+\frac{1}{2}}\big{)}\) and \(\big{(}x_{j+\frac{1}{2},r}^{n},t^{n+\frac{1}{2}}\big{)}\) are obtained using the Taylor expansions about \(\big{(}x_{j+\frac{1}{2},\ell}^{n},t^{n}\big{)}\) and \(\big{(}x_{j+\frac{1}{2},r}^{n},t^{n}\big{)}\), respectively, and the fact that \(\mathbf{U}_{t}=-\mathbf{K}(\mathbf{U})_{x}\):
\[\mathbf{U}_{j+\frac{1}{2},\ell}^{n+\frac{1}{2}}=\mathbf{U}_{j+\frac{1}{2},\ell}^{n}- \frac{\Delta t^{n}}{2}\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},\ell}^{n}\big{)}_{x}, \quad\mathbf{U}_{j+\frac{1}{2},r}^{n+\frac{1}{2}}=\mathbf{U}_{j+\frac{1}{2},r}^{n}- \frac{\Delta t^{n}}{2}\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},r}^{n}\big{)}_{x}. \tag{2.24}\]
Here, the slopes \(\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},\ell}^{n}\big{)}_{x}\) and \(\mathbf{K}\big{(}\mathbf{U}_{j+\frac{1}{2},r}^{n}\big{)}_{x}\) can be computed with the help of a certain nonlinear limiter; see [25] for details.
Next, the intermediate solution, realized in terms of \(\{\overline{\mathbf{U}}_{j}^{\,\text{int}}\}\) and \(\{\overline{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int}}\}\), is projected onto the original grid. To this end, we need to construct the interpolant
\[\begin{split}\widetilde{\mathbf{U}}^{\,\text{int}}(x)=\sum_{j}\Big{\{} \widetilde{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int}}(x)\mathcal{X}_{[x_{j+\frac{1}{2},\ell}x_{j+\frac{1}{2},r}]}+\overline{\mathbf{U}}_{j}^{\,\text{int}}\mathcal{X}_{[x_{ j-\frac{1}{2},r}x_{j+\frac{1}{2},\ell}]}\Big{\}}\,,\end{split} \tag{2.25}\]
where \(\mathcal{X}\) denotes the characteristic function of the corresponding intervals. We set
\[\begin{split}\widetilde{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int}}(x)= \begin{cases}\overline{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int,L}},&\ x<x_{j+\frac{1}{2}},\\ \overline{\mathbf{U}}_{j+\frac{1}{2}}^{\,\text{int,R}},&\ x>x_{j+\frac{1}{2}}, \end{cases}\end{split} \tag{2.26}\]
where the values \(\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,L}}\) and \(\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,R}}\) are determined in several steps. First, according to the local conservation requirement, the conditions
\[a_{j+\frac{1}{2}}^{+}\,\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,R}} -a_{j+\frac{1}{2}}^{-}\,\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,L} }=(a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-})\,\overline{\boldsymbol{U}}_{j +\frac{1}{2}}^{\text{int}} \tag{2.27}\]
have to be satisfied, and the rest of the relations on \(\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,L}}\) and \(\overline{\boldsymbol{U}}_{j+\frac{1}{2}}^{\text{int,R}}\) are to be established for the problem at hand. In fact, the conservation of the \(\Gamma\)- and \(\Pi\)-components of \(\boldsymbol{U}\) is not physically essential, but the relations (2.27) for \(\Gamma\) and \(\Pi\) are crucial for proving physically relevant properties of the resulting flux globalization based LD PCCU scheme; see §2.2.3.
For the \(\gamma\)-based multifluid model (2.8)-(2.9), we follow the single-fluid approach from [29] and make the projection step sharp and accurate for the contact waves, which are linearly degenerate and thus affected by the excessive numerical dissipation much more than nonlinear shock waves. In order to design such projection step, we consider an isolated contact wave consisting of the jump discontinuities in \(\rho\), \(\gamma\), and \(\Pi\) propagating in the region with constant \(u\) and \(p\), and make both \(u\) and \(p\) to be constant across the cell interface, namely, we set
\[\frac{(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,L}}}{\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}}=\frac{(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,R}}}{\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}},\quad\frac{1}{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}}\left[\overline{E}_{j+\frac{1}{2}}^{\text{int,L}}-\frac{\big{(}(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,L}}\big{)}^{2}}{2\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}}-\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,L}}\right]=\frac{1}{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}}\left[\overline{E}_{j+\frac{1}{2}}^{\text{int,R}}-\frac{\big{(}(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,R}}\big{)}^{2}}{2\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}}-\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,R}}\right], \tag{2.28}\]
where we have used the EOS (2.9). Next, we solve (2.27) and (2.28) for \((\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,L}}\), \((\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,R}}\), \(\overline{E}_{j+\frac{1}{2}}^{\text{int,L}}\), and \(\overline{E}_{j+\frac{1}{2}}^{\text{int,R}}\), and express these quantities in terms of \(\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}\), \(\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}\), \(\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}\), \(\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}\), \(\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,L}}\), and \(\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,R}}\):
\[\begin{split}(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,L}}&=\frac{\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}}{\overline{\rho}_{j+\frac{1}{2}}^{\text{int}}}\,(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int}},\\ \overline{E}_{j+\frac{1}{2}}^{\text{int,L}}&=\frac{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}}{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}}\,\overline{E}_{j+\frac{1}{2}}^{\text{int}}+\frac{a_{j+\frac{1}{2}}^{+}\big{(}\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}\,\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}-\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}\,\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}\big{)}}{2\big{(}a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}\big{)}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}\big{(}\overline{\rho}_{j+\frac{1}{2}}^{\text{int}}\big{)}^{2}}\,\big{(}(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int}}\big{)}^{2}+\frac{a_{j+\frac{1}{2}}^{+}\big{(}\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}\,\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,L}}-\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}\,\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,R}}\big{)}}{\big{(}a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}\big{)}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}},\\ (\overline{\rho u})_{j+\frac{1}{2}}^{\text{int,R}}&=\frac{\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}}{\overline{\rho}_{j+\frac{1}{2}}^{\text{int}}}\,(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int}},\\ \overline{E}_{j+\frac{1}{2}}^{\text{int,R}}&=\frac{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}}{\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}}\,\overline{E}_{j+\frac{1}{2}}^{\text{int}}+\frac{a_{j+\frac{1}{2}}^{-}\big{(}\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}\,\overline{\rho}_{j+\frac{1}{2}}^{\text{int,L}}-\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}\,\overline{\rho}_{j+\frac{1}{2}}^{\text{int,R}}\big{)}}{2\big{(}a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}\big{)}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}\big{(}\overline{\rho}_{j+\frac{1}{2}}^{\text{int}}\big{)}^{2}}\,\big{(}(\overline{\rho u})_{j+\frac{1}{2}}^{\text{int}}\big{)}^{2}+\frac{a_{j+\frac{1}{2}}^{-}\big{(}\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,R}}\,\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,L}}-\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int,L}}\,\overline{\Pi}_{j+\frac{1}{2}}^{\text{int,R}}\big{)}}{\big{(}a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}\big{)}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\text{int}}},\end{split} \tag{2.29}\]
Notice that after enforcing (2.27) and (2.28), we are left with three degrees of freedom, which we use to make the profiles of \(\rho\), \(\Gamma\), and \(\Pi\) across the cell interface as sharp as possible and, at the
same time, non-oscillatory. This is achieved in the same way as in [29], namely, by setting
\[\begin{array}{l}\overline{\rho}^{\rm int,L}_{j+\frac{1}{2}}=\overline{\rho}^{ \rm int}_{j+\frac{1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1} {2}}},\quad\overline{\Gamma}^{\rm int,L}_{j+\frac{1}{2}}=\overline{\Gamma}^{\rm int }_{j+\frac{1}{2}}+\frac{\delta^{\Gamma}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1}{2} }},\quad\overline{\Pi}^{\rm int,L}_{j+\frac{1}{2}}=\overline{\Pi}^{\rm int}_{j+ \frac{1}{2}}+\frac{\delta^{\Pi}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1}{2}}},\\ \overline{\rho}^{\rm int,R}_{j+\frac{1}{2}}=\overline{\rho}^{\rm int}_{j+\frac {1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}},\quad \overline{\Gamma}^{\rm int,R}_{j+\frac{1}{2}}=\overline{\Gamma}^{\rm int}_{j+ \frac{1}{2}}+\frac{\delta^{\Gamma}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}}, \quad\overline{\Pi}^{\rm int,R}_{j+\frac{1}{2}}=\overline{\Pi}^{\rm int}_{j+ \frac{1}{2}}+\frac{\delta^{\Pi}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}},\end{array} \tag{2.30}\]
where
\[\begin{array}{l}\delta^{\rho}_{j+\frac{1}{2}}={\rm minmod}\left(-a^{-}_{j+ \frac{1}{2}}\big{(}\overline{\rho}^{\rm int}_{j+\frac{1}{2}}-\rho^{\rm int}_{j +\frac{1}{2},\ell}\big{)},a^{+}_{j+\frac{1}{2}}\big{(}\rho^{\rm int}_{j+\frac {1}{2},r}-\overline{\rho}^{\rm int}_{j+\frac{1}{2}}\big{)}\right),\\ \delta^{\Gamma}_{j+\frac{1}{2}}={\rm minmod}\left(-a^{-}_{j+\frac{1}{2}}\big{(} \overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}-\Gamma^{\rm int}_{j+\frac{1}{2}, \ell}\big{)},a^{+}_{j+\frac{1}{2}}\big{(}\Gamma^{\rm int}_{j+\frac{1}{2},r}- \overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}\big{)}\right),\\ \delta^{\Pi}_{j+\frac{1}{2}}={\rm minmod}\left(-a^{-}_{j+\frac{1}{2}}\big{(} \overline{\Pi}^{\rm int}_{j+\frac{1}{2}}-\Pi^{\rm int}_{j+\frac{1}{2},\ell} \big{)},a^{+}_{j+\frac{1}{2}}\big{(}\Pi^{\rm int}_{j+\frac{1}{2},r}- \overline{\Pi}^{\rm int}_{j+\frac{1}{2}}\big{)}\right),\end{array} \tag{2.31}\]
and \(\mathbf{U}^{\rm int}_{j+\frac{1}{2},\ell}\approx\mathbf{U}(x_{j+\frac{1}{2},\ell},t^{n +1})\) and \(\mathbf{U}^{\rm int}_{j+\frac{1}{2},r}\approx\mathbf{U}(x_{j+\frac{1}{2},r},t^{n+1})\) are evaluated similarly to (2.24):
\[\mathbf{U}^{\rm int}_{j+\frac{1}{2},\ell}=\mathbf{U}^{n}_{j+\frac{1}{2},\ell}-\Delta t ^{n}\mathbf{K}\big{(}\mathbf{U}^{n}_{j+\frac{1}{2},\ell}\big{)}_{x},\quad\mathbf{U}^{\rm int }_{j+\frac{1}{2},r}=\mathbf{U}^{n}_{j+\frac{1}{2},r}-\Delta t^{n}\mathbf{K}\big{(}\mathbf{U} ^{n}_{j+\frac{1}{2},r}\big{)}_{x}. \tag{2.32}\]
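For illustration, the corrections (2.31) amount to a componentwise minmod of the one-sided mismatches between the intermediate cell average and the Taylor-evolved point values (2.32). The following is a minimal Python sketch (NumPy is assumed; all names and the array layout are ours):

```python
import numpy as np

def minmod(a, b):
    """Minmod of two arguments: returns the smaller-in-modulus value when the
    signs agree and zero otherwise (componentwise for arrays)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def delta_correction(q_int, q_l, q_r, a_minus, a_plus):
    """Correction delta from (2.31) at a single interface: q_int is the
    intermediate average, q_l/q_r are the evolved point values (2.32), and
    a_minus < 0 < a_plus are the one-sided local speeds."""
    return minmod(-a_minus * (q_int - q_l), a_plus * (q_r - q_int))
```

The same helper is reused for the \(\rho\)-, \(\Gamma\)-, and \(\Pi\)-corrections.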
We then complete the construction of \(\overline{\mathbf{U}}^{\rm int,L}_{j+\frac{1}{2}}\) and \(\overline{\mathbf{U}}^{\rm int,R}_{j+\frac{1}{2}}\) by substituting (2.30) into (2.29), which results in
\[\begin{split}&(\overline{\rho u})^{\rm int,L}_{j+\frac{1}{2}}=(\overline{\rho u})^{\rm int}_{j+\frac{1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1}{2}}}\,u^{\rm int}_{j+\frac{1}{2}},\quad(\overline{\rho u})^{\rm int,R}_{j+\frac{1}{2}}=(\overline{\rho u})^{\rm int}_{j+\frac{1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}}\,u^{\rm int}_{j+\frac{1}{2}},\\&\overline{E}^{\rm int,L}_{j+\frac{1}{2}}=\left(1+\frac{\delta^{\Gamma}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}}\right)\overline{E}^{\rm int}_{j+\frac{1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}-\delta^{\Gamma}_{j+\frac{1}{2}}\,\overline{\rho}^{\rm int}_{j+\frac{1}{2}}}{2a^{-}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}}\big(u^{\rm int}_{j+\frac{1}{2}}\big)^{2}+\frac{\delta^{\Pi}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}-\delta^{\Gamma}_{j+\frac{1}{2}}\overline{\Pi}^{\rm int}_{j+\frac{1}{2}}}{a^{-}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}},\\&\overline{E}^{\rm int,R}_{j+\frac{1}{2}}=\left(1+\frac{\delta^{\Gamma}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}}\right)\overline{E}^{\rm int}_{j+\frac{1}{2}}+\frac{\delta^{\rho}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}-\delta^{\Gamma}_{j+\frac{1}{2}}\,\overline{\rho}^{\rm int}_{j+\frac{1}{2}}}{2a^{+}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}}\big(u^{\rm int}_{j+\frac{1}{2}}\big)^{2}+\frac{\delta^{\Pi}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}-\delta^{\Gamma}_{j+\frac{1}{2}}\overline{\Pi}^{\rm int}_{j+\frac{1}{2}}}{a^{+}_{j+\frac{1}{2}}\overline{\Gamma}^{\rm int}_{j+\frac{1}{2}}},\end{split} \tag{2.33}\]

where \(u^{\rm int}_{j+\frac{1}{2}}:=(\overline{\rho u})^{\rm int}_{j+\frac{1}{2}}/\overline{\rho}^{\rm int}_{j+\frac{1}{2}}\).
Equipped with (2.30)-(2.33), we finalize the projection step by integrating the piecewise constant interpolant (2.25)-(2.26) over the cell \(C_{j}\). This leads to the following cell averages at the time level \(t=t^{n+1}\):
\[\begin{split}\overline{\rho}^{\,n+1}_{j}&=\frac{1}{\Delta x}\int\limits_{C_{j}}\widetilde{\rho}^{\,\rm int}(x)\,{\rm d}x=\overline{\rho}^{\rm int}_{j}+\frac{\Delta t^{n}}{\Delta x}\Big[a^{+}_{j-\frac{1}{2}}\big(\overline{\rho}^{\rm int,R}_{j-\frac{1}{2}}-\overline{\rho}^{\rm int}_{j}\big)-a^{-}_{j+\frac{1}{2}}\big(\overline{\rho}^{\rm int,L}_{j+\frac{1}{2}}-\overline{\rho}^{\rm int}_{j}\big)\Big]\\&\stackrel{{(2.30)}}{{=}}\overline{\rho}^{\rm int}_{j}+\frac{\Delta t^{n}}{\Delta x}\Big[a^{+}_{j-\frac{1}{2}}\big(\overline{\rho}^{\rm int}_{j-\frac{1}{2}}-\overline{\rho}^{\rm int}_{j}\big)-a^{-}_{j+\frac{1}{2}}\big(\overline{\rho}^{\rm int}_{j+\frac{1}{2}}-\overline{\rho}^{\rm int}_{j}\big)+\delta^{\rho}_{j-\frac{1}{2}}-\delta^{\rho}_{j+\frac{1}{2}}\Big],\end{split} \tag{2.34}\]
\[\begin{split}(\overline{\rho u})_{j}^{n+1}&=\frac{1}{\Delta x}\int\limits_{C_{j}}(\widetilde{\rho u})^{\rm int}(x)\,{\rm d}x=(\overline{\rho u})_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big((\overline{\rho u})_{j-\frac{1}{2}}^{\rm int,R}-(\overline{\rho u})_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big((\overline{\rho u})_{j+\frac{1}{2}}^{\rm int,L}-(\overline{\rho u})_{j}^{\rm int}\big)\Big]\\&\stackrel{{(2.33)}}{{=}}(\overline{\rho u})_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big((\overline{\rho u})_{j-\frac{1}{2}}^{\rm int}-(\overline{\rho u})_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big((\overline{\rho u})_{j+\frac{1}{2}}^{\rm int}-(\overline{\rho u})_{j}^{\rm int}\big)+\delta_{j-\frac{1}{2}}^{\rho}u_{j-\frac{1}{2}}^{\rm int}-\delta_{j+\frac{1}{2}}^{\rho}u_{j+\frac{1}{2}}^{\rm int}\Big],\end{split} \tag{2.35}\]

\[\begin{split}\overline{E}_{j}^{\,n+1}&=\frac{1}{\Delta x}\int\limits_{C_{j}}\widetilde{E}^{\,\rm int}(x)\,{\rm d}x=\overline{E}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big(\overline{E}_{j-\frac{1}{2}}^{\rm int,R}-\overline{E}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{E}_{j+\frac{1}{2}}^{\rm int,L}-\overline{E}_{j}^{\rm int}\big)\Big]\\&\stackrel{{(2.33)}}{{=}}\overline{E}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\bigg[a_{j-\frac{1}{2}}^{+}\big(\overline{E}_{j-\frac{1}{2}}^{\rm int}-\overline{E}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{E}_{j+\frac{1}{2}}^{\rm int}-\overline{E}_{j}^{\rm int}\big)\\&\qquad+\frac{\delta_{j-\frac{1}{2}}^{\rho}\,\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}-\delta_{j-\frac{1}{2}}^{\Gamma}\,\overline{\rho}_{j-\frac{1}{2}}^{\rm int}}{2\,\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}}\big(u_{j-\frac{1}{2}}^{\rm int}\big)^{2}-\frac{\delta_{j+\frac{1}{2}}^{\rho}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}-\delta_{j+\frac{1}{2}}^{\Gamma}\,\overline{\rho}_{j+\frac{1}{2}}^{\rm int}}{2\,\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}}\big(u_{j+\frac{1}{2}}^{\rm int}\big)^{2}\\&\qquad+\frac{\delta_{j-\frac{1}{2}}^{\Pi}\,\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}-\delta_{j-\frac{1}{2}}^{\Gamma}\,\overline{\Pi}_{j-\frac{1}{2}}^{\rm int}}{\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}}-\frac{\delta_{j+\frac{1}{2}}^{\Pi}\,\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}-\delta_{j+\frac{1}{2}}^{\Gamma}\,\overline{\Pi}_{j+\frac{1}{2}}^{\rm int}}{\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}}+\frac{\delta_{j-\frac{1}{2}}^{\Gamma}}{\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}}\,\overline{E}_{j-\frac{1}{2}}^{\rm int}-\frac{\delta_{j+\frac{1}{2}}^{\Gamma}}{\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}}\,\overline{E}_{j+\frac{1}{2}}^{\rm int}\bigg],\end{split} \tag{2.36}\]

\[\begin{split}\overline{\Gamma}_{j}^{\,n+1}&=\frac{1}{\Delta x}\int\limits_{C_{j}}\widetilde{\Gamma}^{\,\rm int}(x)\,{\rm d}x=\overline{\Gamma}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big(\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int,R}-\overline{\Gamma}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int,L}-\overline{\Gamma}_{j}^{\rm int}\big)\Big]\\&\stackrel{{(2.30)}}{{=}}\overline{\Gamma}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big(\overline{\Gamma}_{j-\frac{1}{2}}^{\rm int}-\overline{\Gamma}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{\Gamma}_{j+\frac{1}{2}}^{\rm int}-\overline{\Gamma}_{j}^{\rm int}\big)+\delta_{j-\frac{1}{2}}^{\Gamma}-\delta_{j+\frac{1}{2}}^{\Gamma}\Big],\end{split} \tag{2.37}\]

\[\begin{split}\overline{\Pi}_{j}^{\,n+1}&=\frac{1}{\Delta x}\int\limits_{C_{j}}\widetilde{\Pi}^{\,\rm int}(x)\,{\rm d}x=\overline{\Pi}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big(\overline{\Pi}_{j-\frac{1}{2}}^{\rm int,R}-\overline{\Pi}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{\Pi}_{j+\frac{1}{2}}^{\rm int,L}-\overline{\Pi}_{j}^{\rm int}\big)\Big]\\&\stackrel{{(2.30)}}{{=}}\overline{\Pi}_{j}^{\rm int}+\frac{\Delta t^{n}}{\Delta x}\Big[a_{j-\frac{1}{2}}^{+}\big(\overline{\Pi}_{j-\frac{1}{2}}^{\rm int}-\overline{\Pi}_{j}^{\rm int}\big)-a_{j+\frac{1}{2}}^{-}\big(\overline{\Pi}_{j+\frac{1}{2}}^{\rm int}-\overline{\Pi}_{j}^{\rm int}\big)+\delta_{j-\frac{1}{2}}^{\Pi}-\delta_{j+\frac{1}{2}}^{\Pi}\Big].\end{split} \tag{2.38}\]
#### 2.2.2 Semi-Discrete Scheme
Finally, we pass to the semi-discrete limit \(\Delta t^{n}\to 0\) in (2.34)-(2.38) and proceed as in [29, §2.2] to end up with the semi-discretization (2.2) with the modified (compared with (2.3)) numerical fluxes
\[\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}=\frac{a_{j+\frac{1}{2}}^{+}\boldsymbol{K}_{j+\frac{1}{2}}^{-}-a_{j+\frac{1}{2}}^{-}\boldsymbol{K}_{j+\frac{1}{2}}^{+}}{a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}}+\frac{a_{j+\frac{1}{2}}^{+}a_{j+\frac{1}{2}}^{-}}{a_{j+\frac{1}{2}}^{+}-a_{j+\frac{1}{2}}^{-}}\left(\boldsymbol{U}_{j+\frac{1}{2}}^{+}-\boldsymbol{U}_{j+\frac{1}{2}}^{-}\right)+\boldsymbol{q}_{j+\frac{1}{2}}, \tag{2.39}\]
where
\[\boldsymbol{q}_{j+\frac{1}{2}}=\Big(q_{j+\frac{1}{2}}^{\rho},\;u_{j+\frac{1}{2}}^{*}\,q_{j+\frac{1}{2}}^{\rho},\;\frac{\big(u_{j+\frac{1}{2}}^{*}\big)^{2}}{2}\,q_{j+\frac{1}{2}}^{\rho}+\frac{q_{j+\frac{1}{2}}^{\Gamma}}{\Gamma_{j+\frac{1}{2}}^{*}}\bigg[E_{j+\frac{1}{2}}^{*}-\frac{\big((\rho u)_{j+\frac{1}{2}}^{*}\big)^{2}}{2\rho_{j+\frac{1}{2}}^{*}}-\Pi_{j+\frac{1}{2}}^{*}\bigg]+q_{j+\frac{1}{2}}^{\Pi},\;q_{j+\frac{1}{2}}^{\Gamma},\;q_{j+\frac{1}{2}}^{\Pi}\Big)^{\top} \tag{2.40}\]
is a built-in "anti-diffusion" term. In (2.40),
\[\begin{split}\boldsymbol{U}^{*}_{j+\frac{1}{2}}&=\frac{ a^{+}_{j+\frac{1}{2}}\boldsymbol{U}^{+}_{j+\frac{1}{2}}-a^{-}_{j+\frac{1}{2}} \boldsymbol{U}^{-}_{j+\frac{1}{2}}-\big{(}\boldsymbol{K}^{+}_{j+\frac{1}{2}}- \boldsymbol{K}^{-}_{j+\frac{1}{2}}\big{)}}{a^{+}_{j+\frac{1}{2}}-a^{-}_{j+ \frac{1}{2}}},\quad u^{*}_{j+\frac{1}{2}}=\frac{(\rho u)^{*}_{j+\frac{1}{2}}}{ \rho^{*}_{j+\frac{1}{2}}},\\ q^{\rho}_{j+\frac{1}{2}}&=\operatorname{minmod} \left(-a^{-}_{j+\frac{1}{2}}\big{(}\rho^{*}_{j+\frac{1}{2}}-\rho^{-}_{j+\frac{ 1}{2}}\big{)},a^{+}_{j+\frac{1}{2}}\big{(}\rho^{+}_{j+\frac{1}{2}}-\rho^{*}_{j +\frac{1}{2}}\big{)}\right),\\ q^{\Gamma}_{j+\frac{1}{2}}&=\operatorname{minmod} \left(-a^{-}_{j+\frac{1}{2}}\big{(}\Gamma^{*}_{j+\frac{1}{2}}-\Gamma^{-}_{j+ \frac{1}{2}}\big{)},a^{+}_{j+\frac{1}{2}}\big{(}\Gamma^{+}_{j+\frac{1}{2}}- \Gamma^{*}_{j+\frac{1}{2}}\big{)}\right),\\ q^{\Pi}_{j+\frac{1}{2}}&=\operatorname{minmod} \left(-a^{-}_{j+\frac{1}{2}}\big{(}\Pi^{*}_{j+\frac{1}{2}}-\Pi^{-}_{j+\frac{1}{ 2}}\big{)},a^{+}_{j+\frac{1}{2}}\big{(}\Pi^{+}_{j+\frac{1}{2}}-\Pi^{*}_{j+ \frac{1}{2}}\big{)}\right).\end{split} \tag{2.41}\]
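In code, the anti-diffusion term can be assembled at each interface from the one-sided point values, the global fluxes, and the one-sided speeds. The following Python sketch follows (2.40)-(2.41); the component ordering \((\rho,\rho u,E,\Gamma,\Pi)\) and all names are our assumptions.

```python
import numpy as np

def minmod(a, b):
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def anti_diffusion_1d(Um, Up, Km, Kp, am, ap):
    """Built-in anti-diffusion term (2.40)-(2.41) at one interface.
    Um/Up: one-sided point values U^-/U^+ as arrays (rho, rho*u, E, Gamma, Pi);
    Km/Kp: global fluxes K^-/K^+; am < 0 < ap: one-sided local speeds."""
    Ustar = (ap * Up - am * Um - (Kp - Km)) / (ap - am)   # U* from (2.41)
    rho_s, rhou_s, E_s, Gam_s, Pi_s = Ustar
    u_s = rhou_s / rho_s
    q_rho = minmod(-am * (rho_s - Um[0]), ap * (Up[0] - rho_s))
    q_Gam = minmod(-am * (Gam_s - Um[3]), ap * (Up[3] - Gam_s))
    q_Pi  = minmod(-am * (Pi_s  - Um[4]), ap * (Up[4] - Pi_s))
    # Energy component assembled as in (2.40): the kinetic part is carried
    # by q_rho and the internal-energy part by q_Gam and q_Pi.
    q_E = (0.5 * u_s**2 * q_rho
           + (q_Gam / Gam_s) * (E_s - rhou_s**2 / (2.0 * rho_s) - Pi_s)
           + q_Pi)
    return np.array([q_rho, u_s * q_rho, q_E, q_Gam, q_Pi])
```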
**Remark 2.4**: _If we replace (2.26) with the following limited linear piece:_
\[\widetilde{\boldsymbol{U}}^{\operatorname{int}}_{j+\frac{1}{2}}(x)= \overline{\boldsymbol{U}}^{\operatorname{int}}_{j+\frac{1}{2}}+(\boldsymbol {U}_{x})^{\operatorname{int}}_{j+\frac{1}{2}}\left(x-\frac{x_{j+\frac{1}{2}, r}+x_{j+\frac{1}{2},\ell}}{2}\right),\]
_where_
\[(\boldsymbol{U}_{x})^{\operatorname{int}}_{j+\frac{1}{2}}=\operatorname{ minmod}\!\left(\frac{\boldsymbol{U}^{\operatorname{int}}_{j+\frac{1}{2},r}- \overline{\boldsymbol{U}}^{\operatorname{int}}_{j+\frac{1}{2}}}{\delta},\frac{ \overline{\boldsymbol{U}}^{\operatorname{int}}_{j+\frac{1}{2}}-\boldsymbol{U }^{\operatorname{int}}_{j+\frac{1}{2},\ell}}{\delta}\right)\!,\quad\delta:= \frac{\Delta t^{n}}{2}\big{(}a^{+}_{j+\frac{1}{2}}-a^{-}_{j+\frac{1}{2}}\big{)},\]
_we will end up with an alternative built-in "anti-diffusion" term_
\[\boldsymbol{q}_{j+\frac{1}{2}}=-\frac{a^{+}_{j+\frac{1}{2}}a^{-}_{j+\frac{1}{ 2}}}{a^{+}_{j+\frac{1}{2}}-a^{-}_{j+\frac{1}{2}}}\operatorname{minmod}\left( \boldsymbol{U}^{*}_{j+\frac{1}{2}}-\boldsymbol{U}^{-}_{j+\frac{1}{2}}, \boldsymbol{U}^{+}_{j+\frac{1}{2}}-\boldsymbol{U}^{*}_{j+\frac{1}{2}}\right),\]
_which was introduced in [25]. This will result in another flux globalization based PCCU scheme for the \(\gamma\)-based multifluid system. In the numerical results reported in §4, we will compare the behavior of this PCCU scheme with the proposed LD PCCU one._
#### 2.2.3 Properties of the Semi-Discrete Scheme
In this section, we establish two important properties the designed semi-discrete scheme satisfies:
1. When initially \(\gamma\equiv\operatorname{Const}\) and \(\pi_{\infty}\equiv\operatorname{Const}\), that is, if initially the system contains a single fluid, then \(\gamma\) and \(\pi_{\infty}\) will stay constant for all \(t\);
2. At an isolated material interface across which initially \(u\equiv\operatorname{Const}\) and \(p\equiv\operatorname{Const}\), both \(u\) and \(p\) will stay constant for all \(t\).
To this end, we prove the following theorem.
**Theorem 2.5**: _1. If \(\overline{\Gamma}_{j}\equiv\widehat{\Gamma}=\operatorname{Const}\) and \(\overline{\Pi}_{j}\equiv\widehat{\Pi}=\operatorname{Const}\) for all \(j\) at a certain time level \(t\), then_
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{\Gamma}_{j}=\frac{\mathrm{d}}{ \mathrm{d}t}\,\overline{\Pi}_{j}\equiv 0,\quad\forall j.\]
_2. If at a certain time level \(t=t^{n}\), \(u^{n}_{j}\equiv\widehat{u}=\operatorname{Const}\) and \(p^{n}_{j}\equiv\widehat{p}=\operatorname{Const}\) for all \(j\), then at the next time level \(t=t^{n+1}\),_
\[u^{n+1}_{j}\equiv\widehat{u}\quad\text{and}\quad p^{n+1}_{j}\equiv\widehat{p}, \quad\forall j, \tag{2.42}\]
_provided the ODE system (2.2), (2.39)-(2.41) is discretized using the forward Euler method._
**Proof:** 1. We only show that \(\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{\Gamma}_{j}\equiv 0\) as \(\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{\Pi}_{j}\equiv 0\) can be proved in a similar way. Since the point values of \(\Gamma\) at the cell interfaces are obtained using the piecewise linear reconstruction (2.16), we have \(\Gamma^{+}_{j+\frac{1}{2}}=\Gamma^{-}_{j+\frac{1}{2}}=\Gamma^{+}_{j-\frac{1}{ 2}}\equiv\widehat{\Gamma}\), which we substitute into (2.18), (2.19), and (2.21) to obtain
\[\big{(}F^{(4)}\big{)}^{\pm}_{j+\frac{1}{2}}=\widehat{\Gamma}u^{\pm}_{j+\frac{1 }{2}},\quad B^{(4)}_{j}=\widehat{\Gamma}\big{(}u^{-}_{j+\frac{1}{2}}-u^{+}_{j- \frac{1}{2}}\big{)},\quad B^{(4)}_{\boldsymbol{\Psi},j+\frac{1}{2}}=\widehat{ \Gamma}\big{(}u^{+}_{j+\frac{1}{2}}-u^{-}_{j+\frac{1}{2}}\big{)}. \tag{2.43}\]
We then use (2.4)-(2.6) to compute the flux differences
\[\begin{split}&\big{(}K^{(4)}\big{)}^{+}_{j+\frac{1}{2}}-\big{(}K^{(4 )}\big{)}^{-}_{j+\frac{1}{2}}=\big{(}F^{(4)}\big{)}^{+}_{j+\frac{1}{2}}-\big{(} F^{(4)}\big{)}^{-}_{j+\frac{1}{2}}-B^{(4)}_{\boldsymbol{\Psi},j+\frac{1}{2}}\stackrel{{ \eqref{eq:2.43}}}{{=}}0,\\ &\big{(}K^{(4)}\big{)}^{-}_{j+\frac{1}{2}}-\big{(}K^{(4)}\big{)}^ {+}_{j-\frac{1}{2}}=\big{(}F^{(4)}\big{)}^{-}_{j+\frac{1}{2}}-\big{(}F^{(4)} \big{)}^{+}_{j-\frac{1}{2}}-B^{(4)}_{j}\stackrel{{\eqref{eq:2.43}} }{{=}}0,\end{split} \tag{2.44}\]
which we substitute into (2.41) to verify that
\[q^{\Gamma}_{j+\frac{1}{2}}=0. \tag{2.45}\]
Finally, we substitute (2.44)-(2.45) into (2.2), (2.39)-(2.40) to end up with \(\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{\Gamma}_{j}\equiv 0\). This completes the proof of the first part of the theorem.
2. Since the point values of \(u\) and \(p\) at the cell interfaces are computed using the piecewise linear reconstruction (2.16), we have \(u^{+}_{j+\frac{1}{2}}=u^{-}_{j+\frac{1}{2}}=u^{+}_{j-\frac{1}{2}}\equiv \widehat{u}\) and \(p^{+}_{j+\frac{1}{2}}=p^{-}_{j+\frac{1}{2}}=p^{+}_{j-\frac{1}{2}}\equiv \widehat{p}\) at the time level \(t=t^{n}\). This results in
\[\begin{split}&\boldsymbol{U}^{\pm}_{j+\frac{1}{2}}=\big{(}\rho^{ \pm}_{j+\frac{1}{2}},\rho^{\pm}_{j+\frac{1}{2}}\widehat{u},E^{\pm}_{j+\frac{1 }{2}},\Gamma^{\pm}_{j+\frac{1}{2}},\Pi^{\pm}_{j+\frac{1}{2}}\big{)}^{\top}, \quad E^{\pm}_{j+\frac{1}{2}}=\widehat{p}\,\Gamma^{\pm}_{j+\frac{1}{2}}+\frac {\widehat{u}^{2}}{2}\rho^{\pm}_{j+\frac{1}{2}}+\Pi^{\pm}_{j+\frac{1}{2}},\\ &\boldsymbol{F}^{\pm}_{j+\frac{1}{2}}=\big{(}\rho^{\pm}_{j+\frac{1 }{2}}\widehat{u},\rho^{\pm}_{j+\frac{1}{2}}\widehat{u}^{\,2}+\widehat{p}, \widehat{u}\,(E^{\pm}_{j+\frac{1}{2}}+\widehat{p}),\Gamma^{\pm}_{j+\frac{1}{2}} \widehat{u},\Pi^{\pm}_{j+\frac{1}{2}}\widehat{u}\big{)}^{\top},\end{split} \tag{2.46}\]
and hence, after substituting \(u^{\pm}_{j+\frac{1}{2}}\) into (2.19) and (2.21), we obtain
\[\boldsymbol{B}_{j}=\boldsymbol{B}_{\boldsymbol{\Psi},j+\frac{1}{2}}\equiv \boldsymbol{0}.\]
The latter equality implies \(\boldsymbol{K}^{\pm}_{j+\frac{1}{2}}=\boldsymbol{F}^{\pm}_{j+\frac{1}{2}}\), which together with (2.46) results in
\[\boldsymbol{F}^{+}_{j+\frac{1}{2}}-\boldsymbol{F}^{-}_{j+\frac{1}{2}}=\widehat{u}\,(\boldsymbol{U}^{+}_{j+\frac{1}{2}}-\boldsymbol{U}^{-}_{j+\frac{1}{2}}),\]
so that the first line in (2.41) can be rewritten as
\[\boldsymbol{U}^{*}_{j+\frac{1}{2}}=\frac{a^{+}_{j+\frac{1}{2}}\boldsymbol{U}^{+ }_{j+\frac{1}{2}}-a^{-}_{j+\frac{1}{2}}\boldsymbol{U}^{-}_{j+\frac{1}{2}}- \widehat{u}\,\big{(}\boldsymbol{U}^{+}_{j+\frac{1}{2}}-\boldsymbol{U}^{-}_{j+ \frac{1}{2}}\big{)}}{a^{+}_{j+\frac{1}{2}}-a^{-}_{j+\frac{1}{2}}},\quad u^{*}_{j +\frac{1}{2}}=\widehat{u}. \tag{2.47}\]
We then use (2.40), (2.41), (2.46), and (2.47) to compute the "anti-diffusion" term, which reduces to
\[\boldsymbol{q}_{j+\frac{1}{2}}=\Big{(}q^{\rho}_{j+\frac{1}{2}},\widehat{u}\,q^ {\rho}_{j+\frac{1}{2}},\widehat{p}\,q^{\Gamma}_{j+\frac{1}{2}}+\frac{\widehat{u} ^{2}}{2}\,q^{\rho}_{j+\frac{1}{2}}+q^{\Pi}_{j+\frac{1}{2}},q^{\Gamma}_{j+\frac{1 }{2}},q^{\Pi}_{j+\frac{1}{2}}\Big{)}^{\top}, \tag{2.48}\]
and then substitute (2.46) and (2.48) into (2.39) to obtain the numerical fluxes
\[\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}=\Big{(}\mathcal{K}^{(1)}_{j+\frac{1}{2}},\widehat{u}\,\mathcal{K}^{(1)}_{j+\frac{1}{2}}+\widehat{p},\widehat{p}\, \mathcal{K}^{(4)}_{j+\frac{1}{2}}+\frac{\widehat{u}^{2}}{2}\mathcal{K}^{(1)}_{j+ \frac{1}{2}}+\mathcal{K}^{(5)}_{j+\frac{1}{2}}+\widehat{u}\widehat{p},\mathcal{K}^ {(4)}_{j+\frac{1}{2}},\mathcal{K}^{(5)}_{j+\frac{1}{2}}\Big{)}^{\top},\]
which, in turn, are substituted into (2.2) to obtain the following semi-discrete relations:
\[\frac{\mathrm{d}}{\mathrm{d}t}(\overline{\rho u})_{j}^{n}=\widehat{u}\,\frac{ \mathrm{d}}{\mathrm{d}t}\,\overline{\rho}_{j}^{n},\quad\frac{\mathrm{d}}{ \mathrm{d}t}\overline{E}_{j}^{n}=\widehat{p}\,\frac{\mathrm{d}}{\mathrm{d}t} \overline{\Gamma}_{j}^{n}+\frac{\widehat{u}^{2}}{2}\frac{\mathrm{d}}{\mathrm{d }t}\,\overline{\rho}_{j}^{n}+\frac{\mathrm{d}}{\mathrm{d}t}\,\overline{\Pi}_{j }^{n}.\]
These relations are discretized using the forward Euler method, which results in
\[\begin{array}{l}\frac{(\overline{\rho u})_{j}^{n+1}-(\overline{\rho u})_{j} ^{n}}{\Delta t}=\widehat{u}\,\frac{\overline{\rho}_{j}^{n+1}-\overline{\rho}_ {j}^{n}}{\Delta t},\\ \frac{\overline{E}_{j}^{n+1}-\overline{E}_{j}^{n}}{\Delta t}= \widehat{p}\,\frac{\overline{\Gamma}_{j}^{n+1}-\overline{\Gamma}_{j}^{n}}{ \Delta t}+\frac{\widehat{u}^{2}}{2}\cdot\frac{\overline{\rho}_{j}^{n+1}- \overline{\rho}_{j}^{n}}{\Delta t}+\frac{\overline{\Pi}_{j}^{n+1}-\overline{ \Pi}_{j}^{n}}{\Delta t}.\end{array} \tag{2.49}\]
Finally, we substitute (2.11) expressed at both time levels \(t=t^{n}\) and \(t=t^{n+1}\) into (2.49) to end up with (2.42). This completes the proof of the second part of the theorem.
**Remark 2.6**: _The second part of Theorem 2.5 is still true if the forward Euler time discretization is replaced with another strong stability preserving (SSP) ODE solver; see, e.g., [16, 17]._
**Remark 2.7**: _As in [29], the computation of numerical fluxes in (2.39) should be desingularized to avoid division by zero or very small numbers. If \(a_{j+\frac{1}{2}}^{+}<\varepsilon_{0}\) and \(a_{j+\frac{1}{2}}^{-}>-\varepsilon_{0}\) for a small positive \(\varepsilon_{0}\), we replace the fluxes \(\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}\) with_
\[\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}=\frac{\boldsymbol{K}\big{(}\boldsymbol {U}_{j+\frac{1}{2}}^{-}\big{)}+\boldsymbol{K}\big{(}\boldsymbol{U}_{j+\frac{ 1}{2}}^{+}\big{)}}{2}.\]
_In all of the numerical examples reported in §4, we have taken \(\varepsilon_{0}=10^{-12}\)._
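In practice, the desingularization is a simple branch in the flux routine. A minimal sketch (scalar speeds, NumPy state vectors, names are ours; the anti-diffusion term is omitted for brevity):

```python
def cu_flux_1d(Um, Up, Km, Kp, am, ap, eps0=1.0e-12):
    """Central-upwind part of the flux (2.39), switched to the arithmetic
    average of Remark 2.7 when both one-sided speeds are negligible."""
    if ap < eps0 and am > -eps0:
        return 0.5 * (Km + Kp)
    return (ap * Km - am * Kp) / (ap - am) + (ap * am / (ap - am)) * (Up - Um)
```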
### Flux Globalization Based LD Ai-WENO PCCU Scheme
In this section, we extend the second-order flux globalization based LD PCCU schemes from §2.2 to the fifth order of accuracy within the A-WENO framework.
The semi-discrete fifth-order LD Ai-WENO PCCU scheme for the 1-D quasi-conservative system (2.1) reads as
\[\frac{\mathrm{d}}{\mathrm{d}t}\boldsymbol{U}_{j}=-\frac{\boldsymbol{\mathcal{ H}}_{j+\frac{1}{2}}-\boldsymbol{\mathcal{H}}_{j-\frac{1}{2}}}{\Delta x}, \tag{2.50}\]
where \(\boldsymbol{U}_{j}:\approx\boldsymbol{U}(x_{j},t)\) and the fifth-order numerical fluxes \(\boldsymbol{\mathcal{H}}_{j+\frac{1}{2}}\) are defined by
\[\boldsymbol{\mathcal{H}}_{j+\frac{1}{2}}=\boldsymbol{\mathcal{K}}_{j+\frac{1 }{2}}-\frac{\Delta x}{24}(\boldsymbol{K}_{xx})_{j+\frac{1}{2}}+\frac{7(\Delta x )^{3}}{5760}(\boldsymbol{K}_{xxxx})_{j+\frac{1}{2}}. \tag{2.51}\]
Here, \(\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}\) are the finite-volume fluxes (2.39)-(2.41), and \((\boldsymbol{K}_{xx})_{j+\frac{1}{2}}\) and \((\boldsymbol{K}_{xxxx})_{j+\frac{1}{2}}\) are approximations of the second- and fourth-order spatial derivatives of \(\boldsymbol{K}\) at \(x=x_{j+\frac{1}{2}}\), which we compute using the finite-difference approximations recently proposed in [12]:
\[\begin{array}{l}(\boldsymbol{K}_{xx})_{j+\frac{1}{2}}=\frac{1}{12(\Delta x )^{2}}\left[-\boldsymbol{\mathcal{K}}_{j-\frac{3}{2}}+16\boldsymbol{\mathcal{K}} _{j-\frac{1}{2}}-30\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}+16\boldsymbol{ \mathcal{K}}_{j+\frac{3}{2}}-\boldsymbol{\mathcal{K}}_{j+\frac{5}{2}}\right], \\ (\boldsymbol{K}_{xxxx})_{j+\frac{1}{2}}=\frac{1}{(\Delta x)^{4}} \left[\boldsymbol{\mathcal{K}}_{j-\frac{3}{2}}-4\boldsymbol{\mathcal{K}}_{j- \frac{1}{2}}+6\boldsymbol{\mathcal{K}}_{j+\frac{1}{2}}-4\boldsymbol{\mathcal{K} }_{j+\frac{3}{2}}+\boldsymbol{\mathcal{K}}_{j+\frac{5}{2}}\right].\end{array} \tag{2.52}\]
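These stencils, together with (2.51), translate directly into code. The following sketch (our own array layout: `Kfv[j]` stores the finite-volume flux at \(x_{j+\frac{1}{2}}\), and only interior interfaces are filled) illustrates the assembly of the fifth-order fluxes:

```python
import numpy as np

def fifth_order_fluxes(Kfv, dx):
    """Assemble the A-WENO fluxes (2.51) from the finite-volume fluxes Kfv
    using the finite-difference approximations (2.52)."""
    H = Kfv.copy()
    for j in range(2, len(Kfv) - 2):
        Kxx = (-Kfv[j-2] + 16*Kfv[j-1] - 30*Kfv[j]
               + 16*Kfv[j+1] - Kfv[j+2]) / (12.0 * dx**2)
        Kxxxx = (Kfv[j-2] - 4*Kfv[j-1] + 6*Kfv[j]
                 - 4*Kfv[j+1] + Kfv[j+2]) / dx**4
        H[j] = Kfv[j] - dx / 24.0 * Kxx + 7.0 * dx**3 / 5760.0 * Kxxxx
    return H
```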
The resulting semi-discrete scheme (2.50)-(2.52) will be fifth-order accurate provided the point values \(\mathbf{U}_{j+\frac{1}{2}}^{\pm}\) are calculated using a fifth-order interpolation. To this end, we apply the recently proposed fifth-order Ai-WENO-Z interpolation [14, 31, 46], which we briefly describe in Appendix A.
## 3 Two-Dimensional Algorithms
In this section, we extend the proposed 1-D flux globalization based LD A-WENO PCCU schemes to the 2-D \(\gamma\)-based multifluid system (1.1), (1.2), (1.4). This system can be written in the vector form
\[\mathbf{U}_{t}+\mathbf{F}(\mathbf{U})_{x}+\mathbf{G}(\mathbf{U})_{y}=B(\mathbf{U})\mathbf{U}_{x}+C(\mathbf{U}) \mathbf{U}_{y},\]
or, equivalently, in the quasi-conservative form
\[\mathbf{U}_{t}+\mathbf{K}(\mathbf{U})_{x}+\mathbf{L}(\mathbf{U})_{y}=\mathbf{0} \tag{3.1}\]
with
\[\begin{split}&\mathbf{K}(\mathbf{U})=\mathbf{F}(\mathbf{U})-\mathbf{R}(\mathbf{U}),\qquad\mathbf{L}(\mathbf{U})=\mathbf{G}(\mathbf{U})-\mathbf{S}(\mathbf{U}),\\&\mathbf{R}(\mathbf{U})=\int\limits_{\widehat{x}}^{x}B(\mathbf{U})\mathbf{U}_{\xi}\,\mathrm{d}\xi,\qquad\mathbf{S}(\mathbf{U})=\int\limits_{\widehat{y}}^{y}C(\mathbf{U})\mathbf{U}_{\eta}\,\mathrm{d}\eta.\end{split} \tag{3.2}\]
Here,
\[\begin{split}&\mathbf{U}:=(\rho,\rho u,\rho v,E,\Gamma,\Pi)^{\top},\\&\mathbf{F}(\mathbf{U})=(\rho u,\rho u^{2}+p,\rho uv,u(E+p),u\Gamma,u\Pi)^{\top},\qquad B(\mathbf{U})\mathbf{U}_{x}=\big(0,0,0,0,\Gamma u_{x},\Pi u_{x}\big)^{\top},\\&\mathbf{G}(\mathbf{U})=(\rho v,\rho uv,\rho v^{2}+p,v(E+p),v\Gamma,v\Pi)^{\top},\qquad C(\mathbf{U})\mathbf{U}_{y}=\big(0,0,0,0,\Gamma v_{y},\Pi v_{y}\big)^{\top}.\end{split} \tag{3.3}\]
We first introduce a uniform mesh consisting of the finite-volume cells \(C_{j,k}:=[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]\times[y_{k-\frac{1}{2}},y_{k+\frac{1}{2}}]\) of the uniform size \(\Delta x\times\Delta y\) with \(x_{j+\frac{1}{2}}-x_{j-\frac{1}{2}}\equiv\Delta x\) and \(y_{k+\frac{1}{2}}-y_{k-\frac{1}{2}}\equiv\Delta y\), centered at \((x_{j},y_{k})\) with \(x_{j}=(x_{j-\frac{1}{2}}+x_{j+\frac{1}{2}})/2\) and \(y_{k}=(y_{k-\frac{1}{2}}+y_{k+\frac{1}{2}})/2\), \(j=1,\ldots,N_{x}\), \(k=1,\ldots,N_{y}\).

We assume that at a certain time level \(t\), an approximate solution, realized in terms of the cell averages \(\overline{\mathbf{U}}_{j,k}:\approx\frac{1}{\Delta x\Delta y}\iint_{C_{j,k}}\mathbf{U}(x,y,t)\,\mathrm{d}x\,\mathrm{d}y\), is available. These cell averages are then evolved in time by solving the following system of ODEs:
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathbf{U}}_{j,k}=-\frac{\mathbf{\mathcal{K}} _{j+\frac{1}{2},k}-\mathbf{\mathcal{K}}_{j-\frac{1}{2},k}}{\Delta x}-\frac{\mathbf{ \mathcal{L}}_{j,k+\frac{1}{2}}-\mathbf{\mathcal{L}}_{j,k-\frac{1}{2}}}{\Delta y}, \tag{3.4}\]
where the \(x\)- and \(y\)-numerical fluxes are
\[\mathbf{\mathcal{K}}_{j+\frac{1}{2},k}=\frac{a_{j+\frac{1}{2},k}^{+} \mathbf{K}_{j+\frac{1}{2},k}^{-}-a_{j+\frac{1}{2},k}^{-}\mathbf{K}_{j+\frac{1}{2},k}^ {+}}{a_{j+\frac{1}{2},k}^{+}-a_{j+\frac{1}{2},k}^{-}}+\frac{a_{j+\frac{1}{2},k}^{+}a_{j+\frac{1}{2},k}^{-}}{a_{j+\frac{1}{2},k}^{+}-a_{j+\frac{1}{2},k}^{ -}}\left(\mathbf{U}_{j+\frac{1}{2},k}^{+}-\mathbf{U}_{j+\frac{1}{2},k}^{-}\right)+\mathbf{ q}_{j+\frac{1}{2},k}, \tag{3.5}\] \[\mathbf{\mathcal{L}}_{j,k+\frac{1}{2}}=\frac{b_{j,k+\frac{1}{2}}^{+} \mathbf{L}_{j,k+\frac{1}{2}}^{-}-b_{j,k+\frac{1}{2}}^{-}\mathbf{L}_{j,k+\frac{1}{2}}^{+ }}{b_{j,k+\frac{1}{2}}^{+}-b_{j,k+\frac{1}{2}}^{-}}+\frac{b_{j,k+\frac{1}{2} }^{+}b_{j,k+\frac{1}{2}}^{-}}{b_{j,k+\frac{1}{2}}^{+}-b_{j,k+\frac{1}{2}}^{-}} \left(\mathbf{U}_{j,k+\frac{1}{2}}^{+}-\mathbf{U}_{j,k+\frac{1}{2}}^{-}\right)+\mathbf{ q}_{j,k+\frac{1}{2}}. \tag{3.6}\]
The one-sided point values \(\mathbf{U}^{\pm}_{j+\frac{1}{2},k}\) and \(\mathbf{U}^{\pm}_{j,k+\frac{1}{2}}\) at the cell interfaces \((x_{j+\frac{1}{2}},y_{k})\) and \((x_{j},y_{k+\frac{1}{2}})\), respectively, are obtained as follows. We first use the cell averages \(\overline{\mathbf{U}}_{j,k}\) to compute the point values of \(u\), \(v\), and \(p\) at the cell centers:
\[u_{j,k}=\frac{(\overline{\rho u})_{j,k}}{\overline{\rho}_{j,k}},\quad v_{j,k}= \frac{(\overline{\rho v})_{j,k}}{\overline{\rho}_{j,k}},\quad p_{j,k}=\frac{1 }{\overline{\Gamma}_{j,k}}\left[\overline{E}_{j,k}-\frac{\left((\overline{ \rho u})_{j,k}\right)^{2}+\left((\overline{\rho v})_{j,k}\right)^{2}}{2 \,\overline{\rho}_{j,k}}-\overline{\Pi}_{j,k}\right],\]
and then construct the linear pieces to approximate the primitive variables \(\mathbf{V}=(\rho,u,v,p,\Gamma,\Pi)^{\top}\):
\[\widetilde{\mathbf{V}}_{j,k}(x,y)=\mathbf{V}_{j,k}+(\mathbf{V}_{x})_{j,k}(x-x_{j})+(\mathbf{V }_{y})_{j,k}(y-y_{k}),\quad(x,y)\in C_{j,k}, \tag{3.7}\]
where \(\mathbf{V}_{j,k}:=(\,\overline{\rho}_{j,k},u_{j,k},v_{j,k},p_{j,k},\overline{\Gamma}_{j,k},\overline{\Pi}_{j,k})^{\top}\), and \((\mathbf{V}_{x})_{j,k}\) and \((\mathbf{V}_{y})_{j,k}\) are the slopes, which are to be computed using a nonlinear limiter.
As in the 1-D case, we use different limiters near and away from the material interfaces, which need to be detected. In the two-fluid case, we check whether
\[(\overline{\Gamma}_{j,k}-\widehat{\Gamma})(\overline{\Gamma}_{j+1,k}-\widehat {\Gamma})<0, \tag{3.8}\]
where, as before, \(\widehat{\Gamma}=(\Gamma_{\rm I}+\Gamma_{\rm II})/2\). If (3.8) is satisfied, we then use the overcompressive SBM limiter,
\[(\mathbf{V}_{x})_{\ell,k}=\phi^{\rm SBM}_{\theta,\tau}\left(\frac{\overline{\mathbf{ V}}_{\ell+1,k}-\overline{\mathbf{V}}_{\ell,k}}{\overline{\mathbf{V}}_{\ell,k}- \overline{\mathbf{V}}_{\ell-1,k}}\right)\frac{\overline{\mathbf{V}}_{\ell+1,k}- \overline{\mathbf{V}}_{\ell,k}}{\Delta x}, \tag{3.9}\]
for \(\ell=j-1\), \(j\), \(j+1\), and \(j+2\). In (3.9), the function \(\phi^{\rm SBM}_{\theta,\tau}(r)\), given by (2.15), is applied in a componentwise manner with \(\tau=-0.5\) and \(\theta=1.3\). Otherwise, that is, away from the material interface, we use a dissipative generalized minmod limiter which is also given by the same formulae (3.9), (2.15), but with \(\tau=0.5\) and \(\theta=1.3\). We proceed similarly in the \(y\)-direction: we use
\[(\mathbf{V}_{y})_{j,m}=\phi^{\rm SBM}_{\theta,\tau}\left(\frac{\overline{\mathbf{V}}_{ j,m+1}-\overline{\mathbf{V}}_{j,m}}{\overline{\mathbf{V}}_{j,m}-\overline{\mathbf{V}}_{j,m-1} }\right)\frac{\overline{\mathbf{V}}_{j,m+1}-\overline{\mathbf{V}}_{j,m}}{\Delta y}, \tag{3.10}\]
with \(\tau=-0.5\) and \(\theta=1.3\) for \(m=k-1\), \(k\), \(k+1\), and \(k+2\) if
\[(\overline{\Gamma}_{j,k}-\widehat{\Gamma})(\overline{\Gamma}_{j,k+1}- \widehat{\Gamma})<0, \tag{3.11}\]
is satisfied, and with \(\tau=0.5\) and \(\theta=1.3\) otherwise.
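The limiter switching can thus be summarized by a small helper that returns the pair \((\tau,\theta)\) to be used in (3.9) or (3.10); the function \(\phi^{\rm SBM}\) itself is given by (2.15) and is not reproduced in this sketch (names are ours):

```python
def x_slope_parameters(Gamma_bar, j, k, Gamma_hat):
    """Return the limiter parameters (tau, theta) for the x-slopes: the
    overcompressive SBM values if the material-interface detector (3.8)
    fires, and the dissipative generalized-minmod values otherwise."""
    at_interface = (Gamma_bar[j, k] - Gamma_hat) * (Gamma_bar[j + 1, k] - Gamma_hat) < 0.0
    return (-0.5, 1.3) if at_interface else (0.5, 1.3)
```

The \(y\)-direction helper is identical, with the detector (3.11) in place of (3.8).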
Equipped with \((\mathbf{V}_{x})_{j,k}\) and \((\mathbf{V}_{y})_{j,k}\), we use (3.7) to obtain
\[\mathbf{V}^{-}_{j+\frac{1}{2},k} =\overline{\mathbf{V}}_{j,k}+\frac{\Delta x}{2}(\mathbf{V}_{x})_{j,k}, \quad\mathbf{V}^{+}_{j+\frac{1}{2},k}=\overline{\mathbf{V}}_{j+1,k}-\frac{\Delta x}{2 }(\mathbf{V}_{x})_{j+1,k},\] \[\mathbf{V}^{-}_{j,k+\frac{1}{2}} =\overline{\mathbf{V}}_{j,k}+\frac{\Delta y}{2}(\mathbf{V}_{y})_{j,k}, \quad\mathbf{V}^{+}_{j,k+\frac{1}{2}}=\overline{\mathbf{V}}_{j,k+1}-\frac{\Delta y}{2 }(\mathbf{V}_{y})_{j,k+1},\]
and then the corresponding point values of the conservative variables \(\mathbf{U}\) are
\[\mathbf{U}^{\pm}_{\ell,m}=\left(\rho^{\pm}_{\ell,m},\rho^{\pm}_{\ell,m}u^{\pm}_{ \ell,m},\rho^{\pm}_{\ell,m}v^{\pm}_{\ell,m},E^{\pm}_{\ell,m},\Gamma^{\pm}_{ \ell,m},\Pi^{\pm}_{\ell,m}\right)^{\top},\]
for \((\ell,m)=(j+\frac{1}{2},k)\) and \((j,k+\frac{1}{2})\). Here, \(E^{\pm}_{\ell,m}=\Gamma^{\pm}_{\ell,m}p^{\pm}_{\ell,m}+\rho^{\pm}_{\ell,m}((u^{ \pm}_{\ell,m})^{2}+(v^{\pm}_{\ell,m})^{2})/2+\Pi^{\pm}_{\ell,m}\).
The global fluxes \(\mathbf{K}^{\pm}_{j+\frac{1}{2},k}\) and \(\mathbf{L}^{\pm}_{j,k+\frac{1}{2}}\) in (3.5)-(3.6) are obtained using the relations in (3.2), namely, by
\[\mathbf{K}^{\pm}_{j+\frac{1}{2},k}=\mathbf{F}^{\pm}_{j+\frac{1}{2},k}-\mathbf{R}^{\pm}_{j+ \frac{1}{2},k},\quad\mathbf{L}^{\pm}_{j,k+\frac{1}{2}}=\mathbf{G}^{\pm}_{j,k+\frac{1}{ 2}}-\mathbf{S}^{\pm}_{j,k+\frac{1}{2}},\]
where \(\mathbf{F}^{\pm}_{j+\frac{1}{2},k}:=\mathbf{F}(\mathbf{U}^{\pm}_{j+\frac{1}{2},k})\), \(\mathbf{G}^{\pm}_{j,k+\frac{1}{2}}:=\mathbf{G}(\mathbf{U}^{\pm}_{j,k+\frac{1}{2}})\), and the point values of the global variables \(\mathbf{R}\) and \(\mathbf{S}\) are computed as follows. First, we set \(\widehat{x}=x_{\frac{1}{2}}\) and \(\widehat{y}=y_{\frac{1}{2}}\) so that \(\mathbf{R}^{-}_{\frac{1}{2},k}:=\mathbf{0}\) and \(\mathbf{S}^{-}_{j,\frac{1}{2}}:=\mathbf{0}\). We then evaluate \(\mathbf{R}^{+}_{\frac{1}{2},k}=\mathbf{B}_{\mathbf{\Psi},\frac{1}{2},k}\) and \(\mathbf{S}^{+}_{j,\frac{1}{2}}=\mathbf{B}_{\mathbf{\Psi},j,\frac{1}{2}}\) and recursively compute the rest of the required point values:
\[\mathbf{R}^{-}_{j+\frac{1}{2},k}=\mathbf{R}^{+}_{j-\frac{1}{2},k}+\mathbf{B}^ {x}_{j,k},\quad\mathbf{R}^{+}_{j+\frac{1}{2},k}=\mathbf{R}^{-}_{j+\frac{1}{2},k}+\mathbf{ B}_{\mathbf{\Psi},j+\frac{1}{2},k},\quad j=1,\ldots,N_{x},\] \[\mathbf{S}^{-}_{j,k+\frac{1}{2}}=\mathbf{S}^{+}_{j,k-\frac{1}{2}}+\mathbf{B}^ {y}_{j,k},\quad\mathbf{S}^{+}_{j,k+\frac{1}{2}}=\mathbf{S}^{-}_{j,k+\frac{1}{2}}+\mathbf{ B}_{\mathbf{\Psi},j,k+\frac{1}{2}},\quad k=1,\ldots,N_{y}.\]
Here, \(\mathbf{B}^{x}_{j,k}\), \(\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2},k}\), \(\mathbf{B}^{y}_{j,k}\), and \(\mathbf{B}_{\mathbf{\Psi},j,k+\frac{1}{2}}\) are evaluated in precisely the same way as in §2.1.1:
\[\mathbf{B}^{x}_{j,k}=\Big{(}0,0,0,0,\frac{\Gamma^{-}_{j+\frac{1}{2},k }+\Gamma^{+}_{j-\frac{1}{2},k}}{2}\big{(}u^{-}_{j+\frac{1}{2},k}-u^{+}_{j- \frac{1}{2},k}\big{)},\frac{\Pi^{-}_{j+\frac{1}{2},k}+\Pi^{+}_{j-\frac{1}{2},k }}{2}\big{(}u^{-}_{j+\frac{1}{2},k}-u^{+}_{j-\frac{1}{2},k}\big{)}\Big{)}^{ \top},\] \[\mathbf{B}_{\mathbf{\Psi},j+\frac{1}{2},k}=\Big{(}0,0,0,0,\frac{\Gamma^{+ }_{j+\frac{1}{2},k}+\Gamma^{-}_{j+\frac{1}{2},k}}{2}\big{(}u^{+}_{j+\frac{1}{2 },k}-u^{-}_{j+\frac{1}{2},k}\big{)},\frac{\Pi^{+}_{j+\frac{1}{2},k}+\Pi^{-}_{j +\frac{1}{2},k}}{2}\big{(}u^{+}_{j+\frac{1}{2},k}-u^{-}_{j+\frac{1}{2},k}\big{)} \Big{)}^{\top},\] \[\mathbf{B}^{y}_{j,k}=\Big{(}0,0,0,0,\frac{\Gamma^{-}_{j,k+\frac{1}{2} }+\Gamma^{+}_{j,k-\frac{1}{2}}}{2}\big{(}v^{-}_{j,k+\frac{1}{2}}-v^{+}_{j,k- \frac{1}{2}}\big{)},\frac{\Pi^{-}_{j,k+\frac{1}{2}}+\Pi^{+}_{j,k-\frac{1}{2}}}{ 2}\big{(}v^{-}_{j,k+\frac{1}{2}}-v^{+}_{j,k-\frac{1}{2}}\big{)}\Big{)}^{\top},\] \[\mathbf{B}_{\mathbf{\Psi},j,k+\frac{1}{2}}=\Big{(}0,0,0,0,\frac{\Gamma^{+ }_{j,k+\frac{1}{2}}+\Gamma^{-}_{j,k+\frac{1}{2}}}{2}\big{(}v^{+}_{j,k+\frac{1} {2}}-v^{-}_{j,k+\frac{1}{2}}\big{)},\frac{\Pi^{+}_{j,k+\frac{1}{2}}+\Pi^{-}_{j,k+\frac{1}{2}}}{2}\big{(}v^{+}_{j,k+\frac{1}{2}}-v^{-}_{j,k+\frac{1}{2}}\big{)} \Big{)}^{\top}.\]
Finally, the one-sided local speeds of propagation \(a^{\pm}_{j+\frac{1}{2},k}\) and \(b^{\pm}_{j,k+\frac{1}{2}}\) can be estimated by
\[a^{+}_{j+\frac{1}{2},k}=\max\left\{u^{-}_{j+\frac{1}{2},k}+c^{- }_{j+\frac{1}{2},k},u^{+}_{j+\frac{1}{2},k}+c^{+}_{j+\frac{1}{2},k},0\right\},\] \[a^{-}_{j+\frac{1}{2},k}=\min\left\{u^{-}_{j+\frac{1}{2},k}-c^{-} _{j+\frac{1}{2},k},u^{+}_{j+\frac{1}{2},k}-c^{+}_{j+\frac{1}{2},k},0\right\},\] \[b^{+}_{j,k+\frac{1}{2}}=\max\left\{v^{-}_{j,k+\frac{1}{2}}+c^{-} _{j,k+\frac{1}{2}},v^{+}_{j,k+\frac{1}{2}}+c^{+}_{j,k+\frac{1}{2}},0\right\},\] \[b^{-}_{j,k+\frac{1}{2}}=\min\left\{v^{-}_{j,k+\frac{1}{2}}-c^{-} _{j,k+\frac{1}{2}},v^{+}_{j,k+\frac{1}{2}}-c^{+}_{j,k+\frac{1}{2}},0\right\}.\]
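These estimates translate directly into code. In the sketch below, the one-sided sound speeds are computed from the stiffened-gas relation \(c=\sqrt{\gamma(p+\pi_{\infty})/\rho}\), which is our assumption on the EOS used in the text; for a \(y\)-interface, the \(v\)-velocities are passed instead:

```python
import math

def sound_speed(gamma, p, pi_inf, rho):
    # Stiffened-gas sound speed (our assumption).
    return math.sqrt(gamma * (p + pi_inf) / rho)

def one_sided_speeds(u_m, u_p, c_m, c_p):
    """One-sided local speeds of propagation at an interface."""
    a_plus = max(u_m + c_m, u_p + c_p, 0.0)
    a_minus = min(u_m - c_m, u_p - c_p, 0.0)
    return a_minus, a_plus
```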
### "Built-in" Anti-Diffusion
In this section, we discuss the derivation of the "built-in" anti-diffusion terms \(\mathbf{q}_{j+\frac{1}{2},k}\) and \(\mathbf{q}_{j,k+\frac{1}{2}}\) in (3.5)-(3.6) in a "dimension-by-dimension" manner following the idea introduced in [29].
In order to derive the formula for \(\mathbf{q}_{j+\frac{1}{2},k}\), we consider the 1-D restriction of the system (3.1)-(3.3) along the lines \(y=y_{k}\):
\[\mathbf{U}_{t}(x,y_{k},t)+\mathbf{K}\big{(}\mathbf{U}(x,y_{k},t)\big{)}_{x}=\mathbf{0},\quad k=1, \ldots,N_{y}. \tag{3.12}\]
We can then go through all of the steps in the derivation of the 1-D fully discrete scheme for the systems in (3.12), following §2.2.1 up to (2.26), which now reads as
\[\widetilde{\mathbf{U}}^{\,\text{int}}_{j+\frac{1}{2},k}(x,y_{k})=\begin{cases} \overline{\mathbf{U}}^{\,\text{int},\text{L}}_{j+\frac{1}{2},k},&x<x_{j+\frac{1}{2}}, \\ \overline{\mathbf{U}}^{\,\text{int},\text{R}}_{j+\frac{1}{2},k},&x>x_{j+\frac{1}{2}}, \end{cases}\]
and the corresponding local conservation requirements (2.27) become
\[a^{+}_{j+\frac{1}{2},k}\,\overline{\mathbf{U}}^{\mathrm{int,R}}_{j+\frac{1}{2},k}-a^ {-}_{j+\frac{1}{2},k}\,\overline{\mathbf{U}}^{\mathrm{int,L}}_{j+\frac{1}{2},k}= \big{(}a^{+}_{j+\frac{1}{2},k}-a^{-}_{j+\frac{1}{2},k}\big{)}\,\overline{\mathbf{U} }^{\mathrm{int}}_{j+\frac{1}{2},k}. \tag{3.13}\]
In addition to the six conservation constraints given by (3.13), we have six degrees of freedom, which we use as in the 1-D case to enforce a sharp approximation of quasi 1-D isolated contact waves propagating in the \(x\)-direction. To this end, we enforce the continuity of \(u\) and \(p\) across the cell interfaces \(x=x_{j+\frac{1}{2}}\) by setting
\[\begin{split}&\frac{(\overline{\rho u})^{\mathrm{int,L}}_{j+\frac{1}{2},k}}{\overline{\rho}^{\mathrm{int,L}}_{j+\frac{1}{2},k}}=\frac{(\overline{\rho u})^{\mathrm{int,R}}_{j+\frac{1}{2},k}}{\overline{\rho}^{\mathrm{int,R}}_{j+\frac{1}{2},k}},\\&\frac{1}{\overline{\Gamma}^{\mathrm{int,L}}_{j+\frac{1}{2},k}}\left(\overline{E}^{\mathrm{int,L}}_{j+\frac{1}{2},k}-\frac{\big((\overline{\rho u})^{\mathrm{int,L}}_{j+\frac{1}{2},k}\big)^{2}+\big((\overline{\rho v})^{\mathrm{int,L}}_{j+\frac{1}{2},k}\big)^{2}}{2\,\overline{\rho}^{\mathrm{int,L}}_{j+\frac{1}{2},k}}-\overline{\Pi}^{\mathrm{int,L}}_{j+\frac{1}{2},k}\right)\\&=\frac{1}{\overline{\Gamma}^{\mathrm{int,R}}_{j+\frac{1}{2},k}}\left(\overline{E}^{\mathrm{int,R}}_{j+\frac{1}{2},k}-\frac{\big((\overline{\rho u})^{\mathrm{int,R}}_{j+\frac{1}{2},k}\big)^{2}+\big((\overline{\rho v})^{\mathrm{int,R}}_{j+\frac{1}{2},k}\big)^{2}}{2\,\overline{\rho}^{\mathrm{int,R}}_{j+\frac{1}{2},k}}-\overline{\Pi}^{\mathrm{int,R}}_{j+\frac{1}{2},k}\right),\end{split}\]
and then proceed as in §2.2.1 to enforce sharp (yet non-oscillatory) jumps of the \(\rho\)-, \(\rho v\)-, \(\Gamma\)-, and \(\Pi\)-components. This leads to the following formulae analogous to (2.30)-(2.31):
\[\overline{\rho}^{\mathrm{int,L}}_{j+\frac{1}{2},k}=\,\overline{ \rho}^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\rho}_{j+\frac{1}{2},k}} {a^{-}_{j+\frac{1}{2},k}}, \overline{\rho}^{\mathrm{int,R}}_{j+\frac{1}{2},k}=\,\overline{\rho}^{ \mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\rho}_{j+\frac{1}{2},k}}{a^{+}_{ j+\frac{1}{2},k}},\] \[(\overline{\rho v})^{\mathrm{int,L}}_{j+\frac{1}{2},k}=\,( \overline{\rho v})^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\rho v}_{j+ \frac{1}{2},k}}{a^{-}_{j+\frac{1}{2},k}}, \left(\overline{\rho v}\right)^{\mathrm{int,R}}_{j+\frac{1}{2},k}=( \overline{\rho v})^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\rho v}_{j+ \frac{1}{2},k}}{a^{+}_{j+\frac{1}{2},k}},\] \[\overline{\Gamma}^{\mathrm{int,L}}_{j+\frac{1}{2},k}=\,\overline{ \Gamma}^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\Gamma}_{j+\frac{1}{2},k }}{a^{-}_{j+\frac{1}{2},k}}, \overline{\Gamma}^{\mathrm{int,R}}_{j+\frac{1}{2},k}=\,\overline{ \Gamma}^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\Gamma}_{j+\frac{1}{2}, k}}{a^{+}_{j+\frac{1}{2},k}},\] \[\overline{\Pi}^{\mathrm{int,L}}_{j+\frac{1}{2},k}=\,\overline{ \Pi}^{\mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\Pi}_{j+\frac{1}{2},k}}{a^{ -}_{j+\frac{1}{2},k}}, \overline{\Pi}^{\mathrm{int,R}}_{j+\frac{1}{2},k}=\,\overline{\Pi}^{ \mathrm{int}}_{j+\frac{1}{2},k}+\frac{\delta^{\Pi}_{j+\frac{1}{2},k}}{a^{+}_{j+ \frac{1}{2},k}},\]
where
\[\begin{split}\delta^{\rho}_{j+\frac{1}{2},k}&=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big[\,\overline{\rho}^{\mathrm{int}}_{j+\frac{1}{2},k}-\big(\rho^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\big],\,a^{+}_{j+\frac{1}{2},k}\big[\big(\rho^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}-\overline{\rho}^{\mathrm{int}}_{j+\frac{1}{2},k}\big]\right),\\\delta^{\rho v}_{j+\frac{1}{2},k}&=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big[(\overline{\rho v})^{\mathrm{int}}_{j+\frac{1}{2},k}-\big((\rho v)^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\big],\,a^{+}_{j+\frac{1}{2},k}\big[\big((\rho v)^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}-(\overline{\rho v})^{\mathrm{int}}_{j+\frac{1}{2},k}\big]\right),\\\delta^{\Gamma}_{j+\frac{1}{2},k}&=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big[\,\overline{\Gamma}^{\mathrm{int}}_{j+\frac{1}{2},k}-\big(\Gamma^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\big],\,a^{+}_{j+\frac{1}{2},k}\big[\big(\Gamma^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}-\overline{\Gamma}^{\mathrm{int}}_{j+\frac{1}{2},k}\big]\right),\\\delta^{\Pi}_{j+\frac{1}{2},k}&=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big[\,\overline{\Pi}^{\mathrm{int}}_{j+\frac{1}{2},k}-\big(\Pi^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\big],\,a^{+}_{j+\frac{1}{2},k}\big[\big(\Pi^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}-\overline{\Pi}^{\mathrm{int}}_{j+\frac{1}{2},k}\big]\right).\end{split}\]
Here, the values \(\big(\rho^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\), \(\big(\rho^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}\), \(\big((\rho v)^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\), \(\big((\rho v)^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}\), \(\big(\Gamma^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\), \(\big(\Gamma^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}\), \(\big(\Pi^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{\ell}\), and \(\big(\Pi^{\mathrm{int}}_{j+\frac{1}{2},k}\big)_{r}\) are obtained using the Taylor expansions, as was done in (2.32).
We then proceed as in the remaining part of §2.2.1 and complete the derivation of the fully discrete scheme (not shown here for the sake of brevity). After this, we pass to the semi-discrete limit and end up with the LD PCCU flux (3.5) with the following "built-in" anti-diffusion term:
\[\boldsymbol{q}_{j+\frac{1}{2},k}=\Big(q^{\rho}_{j+\frac{1}{2},k},\;u^{*}_{j+\frac{1}{2},k}\,q^{\rho}_{j+\frac{1}{2},k},\;q^{\rho v}_{j+\frac{1}{2},k},\;q^{E}_{j+\frac{1}{2},k},\;q^{\Gamma}_{j+\frac{1}{2},k},\;q^{\Pi}_{j+\frac{1}{2},k}\Big)^{\top}.\]
Here,
\[\begin{split}&\mathbf{U}^{*}_{j+\frac{1}{2},k}=\frac{a^{+}_{j+\frac{1}{2},k}\mathbf{U}^{+}_{j+\frac{1}{2},k}-a^{-}_{j+\frac{1}{2},k}\mathbf{U}^{-}_{j+\frac{1}{2},k}-\big(\mathbf{K}^{+}_{j+\frac{1}{2},k}-\mathbf{K}^{-}_{j+\frac{1}{2},k}\big)}{a^{+}_{j+\frac{1}{2},k}-a^{-}_{j+\frac{1}{2},k}},\quad u^{*}_{j+\frac{1}{2},k}=\frac{(\rho u)^{*}_{j+\frac{1}{2},k}}{\rho^{*}_{j+\frac{1}{2},k}},\\&q^{\rho}_{j+\frac{1}{2},k}=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big(\rho^{*}_{j+\frac{1}{2},k}-\rho^{-}_{j+\frac{1}{2},k}\big),a^{+}_{j+\frac{1}{2},k}\big(\rho^{+}_{j+\frac{1}{2},k}-\rho^{*}_{j+\frac{1}{2},k}\big)\right),\\&q^{\rho v}_{j+\frac{1}{2},k}=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big((\rho v)^{*}_{j+\frac{1}{2},k}-(\rho v)^{-}_{j+\frac{1}{2},k}\big),a^{+}_{j+\frac{1}{2},k}\big((\rho v)^{+}_{j+\frac{1}{2},k}-(\rho v)^{*}_{j+\frac{1}{2},k}\big)\right),\\&q^{\Gamma}_{j+\frac{1}{2},k}=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big(\Gamma^{*}_{j+\frac{1}{2},k}-\Gamma^{-}_{j+\frac{1}{2},k}\big),a^{+}_{j+\frac{1}{2},k}\big(\Gamma^{+}_{j+\frac{1}{2},k}-\Gamma^{*}_{j+\frac{1}{2},k}\big)\right),\\&q^{\Pi}_{j+\frac{1}{2},k}=\operatorname{minmod}\left(-a^{-}_{j+\frac{1}{2},k}\big(\Pi^{*}_{j+\frac{1}{2},k}-\Pi^{-}_{j+\frac{1}{2},k}\big),a^{+}_{j+\frac{1}{2},k}\big(\Pi^{+}_{j+\frac{1}{2},k}-\Pi^{*}_{j+\frac{1}{2},k}\big)\right),\end{split}\]

and

\[\begin{split}q^{E}_{j+\frac{1}{2},k}&=\frac{a^{+}_{j+\frac{1}{2},k}a^{-}_{j+\frac{1}{2},k}}{a^{+}_{j+\frac{1}{2},k}-a^{-}_{j+\frac{1}{2},k}}\left\{\frac{(d^{\Gamma})^{-}_{j+\frac{1}{2},k}\bigg((\rho v)^{*}_{j+\frac{1}{2},k}+\dfrac{q^{\rho v}_{j+\frac{1}{2},k}}{a^{+}_{j+\frac{1}{2},k}}\bigg)^{2}}{2\bigg(\rho^{*}_{j+\frac{1}{2},k}+\dfrac{q^{\rho}_{j+\frac{1}{2},k}}{a^{+}_{j+\frac{1}{2},k}}\bigg)^{2}}-\frac{(d^{\Gamma})^{+}_{j+\frac{1}{2},k}\bigg((\rho v)^{*}_{j+\frac{1}{2},k}+\dfrac{q^{\rho v}_{j+\frac{1}{2},k}}{a^{-}_{j+\frac{1}{2},k}}\bigg)^{2}}{2\bigg(\rho^{*}_{j+\frac{1}{2},k}+\dfrac{q^{\rho}_{j+\frac{1}{2},k}}{a^{-}_{j+\frac{1}{2},k}}\bigg)^{2}}\right\}\\&\quad+\frac{\big(u^{*}_{j+\frac{1}{2},k}\big)^{2}}{2}\,q^{\rho}_{j+\frac{1}{2},k}+\frac{q^{\Gamma}_{j+\frac{1}{2},k}}{\Gamma^{*}_{j+\frac{1}{2},k}}\bigg[E^{*}_{j+\frac{1}{2},k}-\frac{\big((\rho u)^{*}_{j+\frac{1}{2},k}\big)^{2}}{2\rho^{*}_{j+\frac{1}{2},k}}-\Pi^{*}_{j+\frac{1}{2},k}\bigg]+q^{\Pi}_{j+\frac{1}{2},k},\end{split}\]

where

\[(d^{\Gamma})^{\pm}_{j+\frac{1}{2},k}=1-\frac{q^{\Gamma}_{j+\frac{1}{2},k}}{a^{\pm}_{j+\frac{1}{2},k}\Gamma^{*}_{j+\frac{1}{2},k}}.\]

The \(y\)-directional "anti-diffusion" term in (3.6) has the analogous structure

\[\boldsymbol{q}_{j,k+\frac{1}{2}}=\Big(q^{\rho}_{j,k+\frac{1}{2}},\;q^{\rho u}_{j,k+\frac{1}{2}},\;v^{*}_{j,k+\frac{1}{2}}\,q^{\rho}_{j,k+\frac{1}{2}},\;q^{E}_{j,k+\frac{1}{2}},\;q^{\Gamma}_{j,k+\frac{1}{2}},\;q^{\Pi}_{j,k+\frac{1}{2}}\Big)^{\top},\]

in which \(\mathbf{U}^{*}_{j,k+\frac{1}{2}}\), \(v^{*}_{j,k+\frac{1}{2}}\), \(q^{\rho}_{j,k+\frac{1}{2}}\), \(q^{\rho u}_{j,k+\frac{1}{2}}\), \(q^{\Gamma}_{j,k+\frac{1}{2}}\), and \(q^{\Pi}_{j,k+\frac{1}{2}}\) are defined as their \(x\)-directional counterparts above with the roles of \(u\) and \(v\) interchanged and with \(b^{\pm}_{j,k+\frac{1}{2}}\) and \(\mathbf{L}^{\pm}_{j,k+\frac{1}{2}}\) replacing \(a^{\pm}_{j+\frac{1}{2},k}\) and \(\mathbf{K}^{\pm}_{j+\frac{1}{2},k}\), and

\[\begin{split}q^{E}_{j,k+\frac{1}{2}}&=\frac{b^{+}_{j,k+\frac{1}{2}}b^{-}_{j,k+\frac{1}{2}}}{b^{+}_{j,k+\frac{1}{2}}-b^{-}_{j,k+\frac{1}{2}}}\left\{\frac{(d^{\Gamma})^{-}_{j,k+\frac{1}{2}}\bigg((\rho u)^{*}_{j,k+\frac{1}{2}}+\dfrac{q^{\rho u}_{j,k+\frac{1}{2}}}{b^{+}_{j,k+\frac{1}{2}}}\bigg)^{2}}{2\bigg(\rho^{*}_{j,k+\frac{1}{2}}+\dfrac{q^{\rho}_{j,k+\frac{1}{2}}}{b^{+}_{j,k+\frac{1}{2}}}\bigg)^{2}}-\frac{(d^{\Gamma})^{+}_{j,k+\frac{1}{2}}\bigg((\rho u)^{*}_{j,k+\frac{1}{2}}+\dfrac{q^{\rho u}_{j,k+\frac{1}{2}}}{b^{-}_{j,k+\frac{1}{2}}}\bigg)^{2}}{2\bigg(\rho^{*}_{j,k+\frac{1}{2}}+\dfrac{q^{\rho}_{j,k+\frac{1}{2}}}{b^{-}_{j,k+\frac{1}{2}}}\bigg)^{2}}\right\}\\&\quad+\frac{\big(v^{*}_{j,k+\frac{1}{2}}\big)^{2}}{2}\,q^{\rho}_{j,k+\frac{1}{2}}+\frac{q^{\Gamma}_{j,k+\frac{1}{2}}}{\Gamma^{*}_{j,k+\frac{1}{2}}}\bigg[E^{*}_{j,k+\frac{1}{2}}-\frac{\big((\rho v)^{*}_{j,k+\frac{1}{2}}\big)^{2}}{2\rho^{*}_{j,k+\frac{1}{2}}}-\Pi^{*}_{j,k+\frac{1}{2}}\bigg]+q^{\Pi}_{j,k+\frac{1}{2}},\end{split}\]
where
\[(d^{\Gamma})^{\pm}_{j,k+\frac{1}{2}}=1-\frac{q^{\Gamma}_{j,k+\frac{1}{2}}}{b^ {\pm}_{j,k+\frac{1}{2}}\Gamma^{*}_{j,k+\frac{1}{2}}}.\]
**Remark 3.1**: _As in [29], the computation of numerical fluxes in (3.5) should be desingularized to avoid division by zero or very small numbers. First, if \(a^{+}_{j+\frac{1}{2},k}<\varepsilon_{0}\) and \(a^{-}_{j+\frac{1}{2},k}>-\varepsilon_{0}\) for a small positive \(\varepsilon_{0}\), we replace the flux \(\boldsymbol{\mathcal{K}}_{j+\frac{1}{2},k}\) with_
\[\boldsymbol{\mathcal{K}}_{j+\frac{1}{2},k}=\frac{\boldsymbol{K}\big(\boldsymbol{U}^{-}_{j+\frac{1}{2},k}\big)+\boldsymbol{K}\big(\boldsymbol{U}^{+}_{j+\frac{1}{2},k}\big)}{2}.\]
_Similarly, if \(b^{+}_{j,k+\frac{1}{2}}<\varepsilon_{0}\) and \(b^{-}_{j,k+\frac{1}{2}}>-\varepsilon_{0}\), we replace the flux \(\boldsymbol{\mathcal{L}}_{j,k+\frac{1}{2}}\) with_
\[\boldsymbol{\mathcal{L}}_{j,k+\frac{1}{2}}=\frac{\boldsymbol{L}\big(\boldsymbol{U}^{-}_{j,k+\frac{1}{2}}\big)+\boldsymbol{L}\big(\boldsymbol{U}^{+}_{j,k+\frac{1}{2}}\big)}{2}.\]
_In addition, the computation of the energy numerical fluxes has to be desingularized even in the case when only one of the local speeds is very small. In particular,_
\[\text{if }a^{+}_{j+\frac{1}{2},k}<\varepsilon_{0}\text{ but }a^{-}_{j+\frac{1}{2},k}<- \varepsilon_{0},\ \ \text{we take }\ \mathcal{K}^{(4)}_{j+\frac{1}{2},k}=u^{-}_{j+\frac{1}{2},k}\big{(}E^{-}_{j+ \frac{1}{2},k}+p^{-}_{j+\frac{1}{2},k}\big{)};\] \[\text{if }a^{-}_{j+\frac{1}{2},k}>-\varepsilon_{0}\text{ but }a^{+}_{j+\frac{1}{2},k}>\varepsilon_{0},\ \ \text{we take }\ \mathcal{K}^{(4)}_{j+\frac{1}{2},k}=u^{+}_{j+\frac{1}{2},k}\big{(}E^{+}_{j+ \frac{1}{2},k}+p^{+}_{j+\frac{1}{2},k}\big{)};\] \[\text{if }b^{+}_{j,k+\frac{1}{2}}<\varepsilon_{0}\text{ but }b^{-}_{j,k+\frac{1}{2}}<- \varepsilon_{0},\ \ \text{we take }\ \mathcal{L}^{(4)}_{j,k+\frac{1}{2}}=v^{-}_{j,k+\frac{1}{2}}\big{(}E^{-}_{j,k+ \frac{1}{2}}+p^{-}_{j,k+\frac{1}{2}}\big{)};\] \[\text{if }b^{-}_{j,k+\frac{1}{2}}>-\varepsilon_{0}\text{ but }b^{+}_{j,k+\frac{1}{2}}> \varepsilon_{0},\ \ \ \text{we take }\ \mathcal{L}^{(4)}_{j,k+\frac{1}{2}}=v^{+}_{j,k+\frac{1}{2}}\big{(}E^{+}_{j,k+ \frac{1}{2}}+p^{+}_{j,k+\frac{1}{2}}\big{)}.\]
_As in the 1-D case, we take \(\varepsilon_{0}=10^{-12}\) in all of the numerical examples._
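The energy-flux branch of Remark 3.1 at an \(x\)-interface can be sketched as follows (a direct transcription of the cases above; the \(y\)-direction treatment is analogous with \(b^{\pm}\), \(v^{\pm}\), and \(\mathcal{L}^{(4)}\); all names are ours):

```python
def desingularized_energy_flux_x(am, ap, u_m, u_p, E_m, E_p, p_m, p_p,
                                 K4, eps0=1.0e-12):
    """Replace the fourth (energy) flux component K4 computed by (3.5) with
    a one-sided value when exactly one of the local speeds is very small."""
    if ap < eps0 and am < -eps0:
        return u_m * (E_m + p_m)
    if am > -eps0 and ap > eps0:
        return u_p * (E_p + p_p)
    return K4
```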
### Flux Globalization Based LD Ai-WENO PCCU Scheme
In this section, we extend the 2-D second-order flux globalization based LD PCCU schemes from §3.1 to the fifth order of accuracy within the A-WENO framework.
The semi-discrete fifth-order LD Ai-WENO PCCU scheme for the 2-D quasi-conservative system (3.1)-(3.2) reads as
\[\frac{\mathrm{d}}{\mathrm{d}t}\boldsymbol{U}_{j,k}=-\frac{\boldsymbol{ \mathcal{H}}^{x}_{j+\frac{1}{2},k}-\boldsymbol{\mathcal{H}}^{x}_{j-\frac{1}{2}, k}}{\Delta x}-\frac{\boldsymbol{\mathcal{H}}^{y}_{j,k+\frac{1}{2}}- \boldsymbol{\mathcal{H}}^{y}_{j,k-\frac{1}{2}}}{\Delta y}, \tag{3.16}\]
where the fifth-order numerical fluxes \(\boldsymbol{\mathcal{H}}^{x}_{j+\frac{1}{2},k}\) and \(\boldsymbol{\mathcal{H}}^{y}_{j,k+\frac{1}{2}}\) are defined by
\[\begin{split}&\boldsymbol{\mathcal{H}}^{x}_{j+\frac{1}{2},k}= \boldsymbol{\mathcal{K}}_{j+\frac{1}{2},k}-\frac{\Delta x}{24}(\boldsymbol{K} _{xx})_{j+\frac{1}{2},k}+\frac{7(\Delta x)^{3}}{5760}(\boldsymbol{K}_{xxxx})_{j+ \frac{1}{2},k},\\ &\boldsymbol{\mathcal{H}}^{y}_{j,k+\frac{1}{2}}=\boldsymbol{ \mathcal{L}}_{j,k+\frac{1}{2}}-\frac{\Delta y}{24}(\boldsymbol{L}_{yy})_{j,k+ \frac{1}{2}}+\frac{7(\Delta y)^{3}}{5760}(\boldsymbol{L}_{yyyy})_{j,k+\frac{1}{ 2}}.\end{split} \tag{3.17}\]
Here, \(\mathbf{\mathcal{K}}_{j+\frac{1}{2},k}\) and \(\mathbf{\mathcal{L}}_{j,k+\frac{1}{2}}\) are the finite-volume fluxes (3.5) and (3.6), and \((\mathbf{K}_{xx})_{j+\frac{1}{2},k}\), \((\mathbf{K}_{xxxx})_{j+\frac{1}{2},k}\), \((\mathbf{L}_{yy})_{j,k+\frac{1}{2}}\), and \((\mathbf{L}_{yyyy})_{j,k+\frac{1}{2}}\) are approximations of the second- and fourth-order spatial derivatives of \(\mathbf{K}\) at \((x,y)=(x_{j+\frac{1}{2}},y_{k})\) and of \(\mathbf{L}\) at \((x,y)=(x_{j},y_{k+\frac{1}{2}})\), respectively. We compute these quantities using the finite-difference approximations analogous to (2.52):
\[(\mathbf{K}_{xx})_{j+\frac{1}{2},k}=\frac{1}{12(\Delta x)^{2}}\left[ -\mathbf{\mathcal{K}}_{j-\frac{3}{2},k}+16\mathbf{\mathcal{K}}_{j-\frac{1}{2},k}-30 \mathbf{\mathcal{K}}_{j+\frac{1}{2},k}+16\mathbf{\mathcal{K}}_{j+\frac{3}{2},k}-\mathbf{ \mathcal{K}}_{j+\frac{5}{2},k}\right],\] \[(\mathbf{K}_{xxxx})_{j+\frac{1}{2},k}=\frac{1}{(\Delta x)^{4}}\left[ \mathbf{\mathcal{K}}_{j-\frac{3}{2},k}-4\mathbf{\mathcal{K}}_{j-\frac{1}{2},k}+6\mathbf{ \mathcal{K}}_{j+\frac{1}{2},k}-4\mathbf{\mathcal{K}}_{j+\frac{3}{2},k}+\mathbf{ \mathcal{K}}_{j+\frac{5}{2},k}\right],\] \[(\mathbf{L}_{yy})_{j,k+\frac{1}{2}}=\frac{1}{12(\Delta y)^{2}}\left[ -\mathbf{\mathcal{L}}_{j,k-\frac{3}{2}}+16\mathbf{\mathcal{L}}_{j,k-\frac{1}{2}}-30 \mathbf{\mathcal{L}}_{j,k+\frac{1}{2}}+16\mathbf{\mathcal{L}}_{j,k+\frac{3}{2}}-\mathbf{ \mathcal{L}}_{j,k+\frac{5}{2}}\right],\] \[(\mathbf{L}_{yyyy})_{j,k+\frac{1}{2}}=\frac{1}{(\Delta y)^{4}}\left[ \mathbf{\mathcal{L}}_{j,k-\frac{3}{2}}-4\mathbf{\mathcal{L}}_{j,k-\frac{1}{2}}+6\mathbf{ \mathcal{L}}_{j,k+\frac{1}{2}}-4\mathbf{\mathcal{L}}_{j,k+\frac{3}{2}}+\mathbf{ \mathcal{L}}_{j,k+\frac{5}{2}}\right].\]
As in the 1-D case, the resulting semi-discrete scheme (3.16)-(3.17) is fifth-order accurate provided the point values \(\mathbf{U}_{j+\frac{1}{2},k}^{\pm}\) and \(\mathbf{U}_{j,k+\frac{1}{2}}^{\pm}\) are calculated using a fifth-order interpolation. To this end, we apply the recently proposed fifth-order Ai-WENO-Z interpolation [14, 31, 46], which can be performed in the \(x\)- and \(y\)-directions similarly to the 1-D case discussed in Appendix A; we omit the details for the sake of brevity.
**Remark 3.2**: _As in the 1-D case, one needs to apply the Ai-WENO-Z interpolation procedure in the local characteristic variables to reduce the magnitude of the numerical oscillations. In Appendix B, we provide a detailed explanation of how to apply the LCD to the 2-D system (3.1)-(3.3)._
## 4 Numerical Examples
In this section, we apply the developed schemes to several 1-D and 2-D numerical examples and compare the performance of the second-order flux globalization based PCCU, the second-order flux globalization based LD PCCU, and the fifth-order flux globalization based LD Ai-WENO PCCU schemes, which will be referred to as the _PCCU_, _LD PCCU_, and _Ai-WENO_ schemes, respectively.
In all of the numerical examples, we have solved the ODE systems (2.2), (2.50), (3.4), and (3.16) using the three-stage third-order strong stability preserving (SSP) Runge-Kutta method (see, e.g., [16, 17]) and used the CFL number 0.45.
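For reference, one step of the three-stage third-order SSP Runge-Kutta method reads as follows (a generic sketch: `rhs(U)` stands for the right-hand side of the corresponding semi-discretization, and the time step is chosen from the CFL condition, e.g., \(\Delta t=0.45\,\Delta x/\max|a^{\pm}|\) in 1-D):

```python
def ssprk3_step(U, rhs, dt):
    """Three-stage third-order SSP Runge-Kutta step (Shu-Osher form)."""
    U1 = U + dt * rhs(U)
    U2 = 0.75 * U + 0.25 * (U1 + dt * rhs(U1))
    return U / 3.0 + 2.0 / 3.0 * (U2 + dt * rhs(U2))
```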
### One-Dimensional Examples
#### Example 1--"Shock-Bubble" Interaction
In the first example, we consider the "shock-bubble" interaction problem, which is a two-fluid modification of a single-fluid example from [29]. The initial data are given by
\[(\rho,u,p;\gamma,\pi_{\infty})(x,0)=\begin{cases}(13.1538,0,1;5/3,0),&|x|<0.25, \\ (1.3333,-0.3535,1.5;1.4,0),&x>0.75,\\ (1,0,1;1.4,0),&\text{otherwise},\end{cases}\]
which correspond to a left-moving shock, initially located at \(x=0.75\), and a resting "bubble" with a radius of \(0.25\), initially located at the origin. These initial data are considered in the computational domain \([-1,2]\) subject to the solid wall boundary conditions imposed at both \(x=-1\) and \(x=2\).
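For reproducibility, the piecewise initial data can be transcribed as follows (a sketch assuming NumPy; the conserved variables are then formed using \(\Gamma=1/(\gamma-1)\) and \(\Pi=\gamma\pi_{\infty}/(\gamma-1)\), our reading of the EOS notation (2.8)-(2.9)):

```python
import numpy as np

def example1_initial_data(x):
    """Primitive states (rho, u, p, gamma, pi_inf) for Example 1."""
    rho = np.where(np.abs(x) < 0.25, 13.1538,
                   np.where(x > 0.75, 1.3333, 1.0))
    u = np.where(x > 0.75, -0.3535, 0.0)
    p = np.where(x > 0.75, 1.5, 1.0)
    gamma = np.where(np.abs(x) < 0.25, 5.0 / 3.0, 1.4)
    pi_inf = np.zeros_like(x)
    return rho, u, p, gamma, pi_inf
```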
We apply the studied PCCU, LD PCCU, and Ai-WENO schemes to this initial-boundary value problem and compute its numerical solutions until the final time \(t=3\) on a uniform mesh with \(\Delta x=1/100\). The obtained densities and velocities are presented in Figures 4.1 and 4.2 together with the reference solution computed by the PCCU scheme on a much finer mesh with \(\Delta x=1/2000\). As one can see, the LD PCCU scheme achieves sharper resolution than the PCCU one, and the use of a fifth-order Ai-WENO scheme further enhances the resolution.
#### Example 2--"Shock-Bubble" Interaction in a Gas-Liquid Medium

In the second example, we consider a gas-liquid "shock-bubble" interaction, in which the water component is modeled using a stiffened EOS. The initial data correspond to the left-moving shock, initially located at \(x=11.4\), and a resting air "bubble" with a radius of \(3\), initially located at \(x=6\). The initial conditions are prescribed in the computational domain \([0,18]\) subject to the free boundary conditions.
We compute the numerical solution until the final time \(t=0.045\) on a uniform mesh with \(\Delta x=1/10\) by the PCCU, LD PCCU, and Ai-WENO schemes and plot the obtained density in Figure 4.3 together with the reference solution computed by the PCCU scheme on a much finer mesh with \(\Delta x=1/400\). As one can observe, the LD PCCU solution achieves sharper resolution (especially of the contact wave located at about \(x=3\)) compared with its PCCU counterpart, but it produces certain oscillations. The Ai-WENO scheme, on the other hand, achieves even higher resolution, and the obtained fifth-order results are oscillation-free; we attribute this to the fact that the Ai-WENO-Z interpolation is performed using the LCD.

Figure 4.3: Example 2: Density \(\rho\) (left) and zoom at \(x\in[2.5,4.8]\) (right).
#### Example 3--Water-Air Model With a Very Stiff Equation of State
In the last 1-D example, taken from [2, 10], we consider another gas-liquid multifluid system with the water component modeled using an even stiffer EOS than the one used in Example 2. The initial conditions, which correspond to a severe water-air shock tube problem,
\[(\rho,u,p;\gamma,\pi_{\infty})=\begin{cases}(1000,0,10^{9};4.4,6\cdot 10^{8}),&x <0.7,\\ (50,0,10^{5};1.4,0),&x>0.7,\end{cases}\]
are prescribed in the computational domain \([0,1]\) subject to the free boundary conditions.
We compute the numerical solutions by the studied PCCU, LD PCCU, and Ai-WENO schemes until the final time \(t=0.00025\) on a uniform mesh with \(\Delta x=1/400\). The obtained densities are shown in Figure 4.4 along with the reference solution computed by the PCCU scheme on a much finer mesh with \(\Delta x=1/6400\). One can observe that all of the studied schemes produce non-oscillatory numerical solutions, and the Ai-WENO solution is slightly sharper than the solutions computed by the PCCU and LD PCCU schemes.
### Two-Dimensional Examples

In this section, we present four 2-D numerical examples. In all of them, we plot the Schlieren images of the magnitude of the density gradient field, \(|\nabla\rho|\). To this end, we have used the following shading function:
\[\exp\bigg{(}-\frac{80|\nabla\rho|}{\max(|\nabla\rho|)}\bigg{)},\]
where the numerical derivatives of the density are computed using standard central differencing.
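This post-processing step can be sketched as follows (our own array conventions: the first index is the \(x\)-direction, and only interior points are shaded):

```python
import numpy as np

def schlieren_shading(rho, dx, dy):
    """Schlieren-type image: |grad(rho)| by central differences, then the
    exponential shading function above."""
    gx = (rho[2:, 1:-1] - rho[:-2, 1:-1]) / (2.0 * dx)
    gy = (rho[1:-1, 2:] - rho[1:-1, :-2]) / (2.0 * dy)
    g = np.sqrt(gx**2 + gy**2)
    return np.exp(-80.0 * g / g.max())
```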
#### Example 4--Shock-Helium Bubble Interaction
In the first 2-D example, taken from [10, 40], a shock wave in the air hits a light resting bubble filled with helium. We take the following initial conditions:

\[(\rho,u,v,p;\gamma,\pi_{\infty})=\begin{cases}(4/29,0,0,1;5/3,0),&\text{in region A},\\(1,0,0,1;1.4,0),&\text{in region B},\\(4/3,-0.3535,0,1.5;1.4,0),&\text{in region C},\end{cases}\]
where regions A, B, and C are outlined in Figure 4.5, and the computational domain is \([-3,1]\times[-0.5,0.5]\). We impose the solid wall boundary conditions on the top and bottom and the free boundary conditions on the left and right edges of the computational domain.
We compute the numerical solutions until the final time \(t=3\) on a uniform mesh with \(\Delta x=\Delta y=1/500\). In Figures 4.6 and 4.7, we present different stages of the shock-bubble interaction computed by the PCCU, LD PCCU, and Ai-WENO schemes. Notice that the bubble changes its shape and propagates to the left, but in order to focus on the details of the bubble structure, we only zoom in on the \([\sigma,\sigma+1]\times[-0.5,0.5]\) square area containing the bubble (\(\sigma\) is decreasing in time from -0.5 to -1.6). As one can observe from the numerical results, the bubble interface develops very complex structures after the bubble is hit by the shock, especially at large times. The obtained results are in good agreement with the experimental findings presented in [18] and the numerical results reported in [9, 10, 40]. At the same time, one can see that at a small time \(t=0.5\), the resolution of the bubble interface is significantly improved by the use of the LD PCCU and especially the Ai-WENO schemes. At larger times, the interface develops instabilities which are smeared by the more dissipative PCCU scheme. The differences in the achieved resolution of the small solution structures become even more pronounced at the larger times \(t=2\), \(2.5\), and especially at \(t=3\). This clearly indicates that the LD PCCU scheme outperforms the PCCU one, and the further improvement in the Ai-WENO results is much more obvious than in the 1-D examples.

Figure 4.5: Initial setting for the 2-D numerical examples.
Figure 4.6: Example 4: Shock-helium bubble interaction by the PCCU (left column), LD PCCU (middle column), and Ai-WENO (right column) schemes at times \(t=0.5\), \(1\), and \(1.5\).
**Example 5--Shock-R22 Bubble Interaction**
In the second 2-D example, also taken from [10, 40], a shock wave in the air hits a heavy resting bubble which contains R22. The initial conditions are
\[(\rho,u,v,p;\gamma,\pi_{\infty})=\begin{cases}(3.1538,0,0,1;1.249,0),&\text{in region A},\\ (1,0,0,1;1.4,0),&\text{in region B},\\ (4/3,-0.3535,0,1.5;1.4,0),&\text{in region C}.\end{cases}\]
The regions A, B, and C are the same as in Example 4 and they are specified in Figure 4.5. In this example, we impose the same boundary conditions and use the same computational domain as in Example 4.
Figure 4.7: Same as in Figure 4.6, but at larger times \(t=2\), 2.5, and 3.
We compute the numerical solutions until the final time \(t=3\) on a uniform mesh with \(\Delta x=\Delta y=1/500\). In Figures 4.8 and 4.9, we present different stages of the shock-bubble interaction computed by the PCCU, LD PCCU, and Ai-WENO schemes. As one can see, the bubble changes its shape and propagates to the left, and in order to focus on the details of the bubble structure, we only zoom in on the \([\sigma,\sigma+1]\times[-0.5,0.5]\) square area containing the bubble (\(\sigma\) is decreasing in time from -0.5 to -1.15). Compared with Example 4, the bubble moves to the left a little slower and develops totally different structures, as R22 is heavier than helium. The obtained results are in good agreement with the numerical results reported in [9, 10, 40]. Similar to Example 4, at a small time \(t=0.5\), the resolution of the bubble interface is significantly improved by the use of either the LD PCCU or Ai-WENO schemes, and the improvement in this example is even more pronounced. By the time \(t=1\), the interface develops instabilities which are smeared by the more dissipative PCCU scheme. As time progresses, the solutions develop very complex small structures, which are better resolved by the schemes containing a smaller amount of numerical dissipation, namely, by the LD PCCU and Ai-WENO schemes.
Figure 4.9: Same as in Figure 4.8, but at larger times \(t=1.5\), 2, 2.5, and 3.
**Example 6--Underwater Explosion**

The initial conditions are given by
\[(\rho,u,v,p;\gamma,\pi_{\infty})=\begin{cases}(1.27,0,0,8290;2,0),&(x-5)^{2}+(y-2) ^{2}<1,\\ (0.02,0,0,1;1.4,0),&y>4,\\ (1,0,0,1;7.15,3309),&\text{otherwise},\end{cases}\]
the solid wall boundary conditions are imposed at the bottom, and the free boundary conditions are prescribed on the other sides of the computational domain \([0,10]\times[0,6]\).
Notice that the initial conditions contain three--not two--different fluids, and therefore we need to modify the way the fluid interfaces are detected, as the criteria (3.8) and (3.11) are applicable in the two-fluid case only. We first set \(\Gamma_{\mathrm{I}}=1/(\gamma_{\mathrm{I}}-1)\), \(\Gamma_{\mathrm{II}}=1/(\gamma_{\mathrm{II}}-1)\), and \(\Gamma_{\mathrm{III}}=1/(\gamma_{\mathrm{III}}-1)\), where \(\gamma_{\mathrm{I}}=2\), \(\gamma_{\mathrm{II}}=1.4\), and \(\gamma_{\mathrm{III}}=7.15\) are the specific heat ratios for the three fluids. We then introduce \(\widehat{\Gamma}_{1}:=(\Gamma_{\mathrm{I}}+\Gamma_{\mathrm{III}})/2\), \(\widehat{\Gamma}_{2}:=(\Gamma_{\mathrm{II}}+\Gamma_{\mathrm{III}})/2\), and replace the conditions (3.8) and (3.11) with
\[(\overline{\Gamma}_{j,k}-\widehat{\Gamma}_{1})(\overline{\Gamma}_{j+1,k}- \widehat{\Gamma}_{1})<0\quad\text{or}\quad(\overline{\Gamma}_{j,k}-\widehat{ \Gamma}_{2})(\overline{\Gamma}_{j+1,k}-\widehat{\Gamma}_{2})<0\]
and
\[(\overline{\Gamma}_{j,k}-\widehat{\Gamma}_{1})(\overline{\Gamma}_{j,k+1}- \widehat{\Gamma}_{1})<0\quad\text{or}\quad(\overline{\Gamma}_{j,k}-\widehat{ \Gamma}_{2})(\overline{\Gamma}_{j,k+1}-\widehat{\Gamma}_{2})<0\]
respectively.
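A minimal sketch of this three-fluid interface detection is shown below; the function and variable names are ours, and the \(\overline{\Gamma}\) values are assumed to be available as cell averages.

```python
def is_material_interface(G_left, G_right, G1_hat, G2_hat):
    """Three-fluid replacement of the two-fluid criteria (3.8)/(3.11):
    flag the cell interface if Gamma crosses either mid-level threshold."""
    return ((G_left - G1_hat) * (G_right - G1_hat) < 0 or
            (G_left - G2_hat) * (G_right - G2_hat) < 0)

# Example: Gamma levels for the specific heat ratios 2, 1.4, and 7.15
G_I, G_II, G_III = 1/(2-1), 1/(1.4-1), 1/(7.15-1)
G1_hat, G2_hat = (G_I + G_III)/2, (G_II + G_III)/2
assert is_material_interface(G_I, G_III, G1_hat, G2_hat)  # fluid I / fluid III
```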
We compute the numerical solutions until the final time \(t=0.02\) on a uniform mesh with \(\Delta x=\Delta y=1/80\) by the studied PCCU, LD PCCU, and Ai-WENO schemes. In Figure 4.10, we present time snapshots of the obtained results, which qualitatively look very similar to the numerical results reported in [52]. As one can see, the LD PCCU scheme captures both the material interfaces and many of the developed wave structures substantially sharper than the PCCU scheme and the use of the Ai-WENO scheme further enhances the resolution. In particular, one can observe more pronounced small structures in the Ai-WENO solution (especially inside the bubble) at the large time \(t=0.02\); see Figure 4.11, where we zoom at the bubble area. This example, once again, clearly indicates that the LD PCCU and Ai-WENO schemes outperform the PCCU one.
**Example 7--Shock-Air Bubble Interaction**

In the final 2-D example, a shock wave propagating in the water hits a resting air bubble. After the impact, the bubble is compressed by the water, propagates to the left, and changes its shape until losing its integrity and breaking up. The obtained results are in good qualitative agreement with the numerical results reported in [8]. As in the previous examples, the resolution of the bubble interface is significantly improved by the use of the LD PCCU and Ai-WENO schemes, especially for the small times \(t=0.0204\), \(0.0305\), and \(0.0368\); see Figure 4.12. At the same time, the differences near the bubble interfaces between the LD PCCU and Ai-WENO solutions are minor.
## 5 Conclusion
In this paper, we have developed flux globalization based low-dissipation (LD) path-conservative central-upwind (PCCU) schemes for one- (1-D) and two-dimensional (2-D) compressible multifluids. The LD PCCU schemes are based on the flux globalization based PCCU schemes and employ the recently proposed LDCU fluxes to reduce the numerical dissipation present in the original PCCU schemes. In order to further enhance the resolution of material interfaces, we track their locations and use the overcompressive SBM limiter in their neighborhoods, while in the rest of the computational domain, a dissipative generalized minmod limiter is utilized. The new second-order finite-volume method is then extended to the fifth order of accuracy via the finite-difference A-WENO framework. We have applied the developed schemes to a number of 1-D and 2-D examples, and the obtained numerical results clearly demonstrate that both the LD PCCU and LD Ai-WENO schemes outperform the flux globalization based PCCU scheme that employs the original central-upwind numerical flux from [25]. At the same time, these examples show that the fifth-order LD Ai-WENO scheme enhances the resolution achieved by the second-order LD PCCU scheme.
### Acknowledgment
The work of A. Kurganov was supported in part by NSFC grant 12171226 and by the fund of the Guangdong Provincial Key Laboratory of Computational Science and Material Design (No. 2019B030301001).
## Appendix A 1-D Fifth-Order Ai-WENO-Z Interpolation
In this appendix, we briefly describe the fifth-order Ai-WENO-Z interpolation recently introduced in [14, 31, 46].
Assume that the point values \(W_{j+\ell}\) of a certain quantity \(W\) at the uniform grid points \(x=x_{j+\ell}\), \(\ell=-2,\ldots,3\) are available. We now show how to obtain an interpolated left-sided value of \(W\) at \(x=x_{j+\frac{1}{2}}\), denoted by \(W_{j+\frac{1}{2}}^{-}\). The right-sided value \(W_{j+\frac{1}{2}}^{+}\) can then be obtained in the mirror-symmetric way.
Figure 4.11: Example 6: Solutions computed by the PCCU (left), LD PCCU (middle), and Ai-WENO (right) schemes at time \(t=0.02\); zoom at the bubble area.
The value \(W^{-}_{j+\frac{1}{2}}\) is calculated using a weighted average of the three parabolic interpolants \(\mathcal{P}_{0}(x)\), \(\mathcal{P}_{1}(x)\), and \(\mathcal{P}_{2}(x)\) obtained using the stencils \([x_{j-2},x_{j-1},x_{j}]\), \([x_{j-1},x_{j},x_{j+1}]\), and \([x_{j},x_{j+1},x_{j+2}]\), respectively:
\[W^{-}_{j+\frac{1}{2}}=\sum_{k=0}^{2}\omega_{k}\mathcal{P}_{k}\big{(}x_{j+\frac {1}{2}}\big{)},\]
where
\[\mathcal{P}_{0}(x_{j+\frac{1}{2}}) =\frac{3}{8}\,W_{j-2}-\frac{5}{4}\,W_{j-1}+\frac{15}{8}\,W_{j}, \quad\mathcal{P}_{1}(x_{j+\frac{1}{2}})=-\frac{1}{8}\,W_{j-1}+\frac{3}{4}\,W_ {j}+\frac{3}{8}\,W_{j+1},\] \[\mathcal{P}_{2}(x_{j+\frac{1}{2}}) =\frac{3}{8}\,W_{j}+\frac{3}{4}\,W_{j+1}-\frac{1}{8}\,W_{j+2},\]
and the Ai-weights \(\omega_{k}\) are computed by

\[\omega_{k}=\frac{\alpha_{k}}{\alpha_{0}+\alpha_{1}+\alpha_{2}},\quad\alpha_{k}=d_{k}\left[1+\left(\frac{\tau_{5}}{\beta_{k}+\varepsilon\mu_{j}^{2}}\right)^{r}\right],\quad k=0,1,2,\] (A.1)

Figure 4.12: Example 7: Shock-air bubble interaction computed by the PCCU (left column), LD PCCU (middle column), and Ai-WENO (right column) schemes at times \(t=0.0204\), 0.0305, and 0.0368.
with \(d_{0}=\frac{1}{16}\), \(d_{1}=\frac{5}{8}\), and \(d_{2}=\frac{5}{16}\). The smoothness indicators \(\beta_{k}\) for the corresponding parabolic interpolants \(\mathcal{P}_{k}(x)\) are given by
\[\beta_{0} =\frac{13}{12}\big{(}W_{j-2}-2W_{j-1}+W_{j}\big{)}^{2}+\frac{1}{4} \big{(}W_{j-2}-4W_{j-1}+3W_{j}\big{)}^{2},\] \[\beta_{1} =\frac{13}{12}\big{(}W_{j-1}-2W_{j}+W_{j+1}\big{)}^{2}+\frac{1}{4 }\big{(}W_{j-1}-W_{j+1}\big{)}^{2},\] \[\beta_{2} =\frac{13}{12}\big{(}W_{j}-2W_{j+1}+W_{j+2}\big{)}^{2}+\frac{1}{4 }\big{(}3W_{j}-4W_{j+1}+W_{j+2}\big{)}^{2}.\]
Finally, in formula (A.1), \(\tau_{5}=|\beta_{2}-\beta_{0}|\), \(\mu_{j}=\frac{1}{5}\sum_{\ell=j-2}^{j+2}|W_{\ell}-\widehat{W}_{j}|+10^{-40}\) with \(\widehat{W}_{j}:=\frac{1}{5}\sum_{\ell=j-2}^{j+2}W_{\ell}\), and in all of the numerical examples, we have chosen \(r=2\) and \(\varepsilon=10^{-12}\).
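To make the procedure concrete, the following is a minimal NumPy sketch of the left-sided fifth-order Ai-WENO-Z value; the function name and array layout are ours, and the right-sided value would be obtained mirror-symmetrically.

```python
import numpy as np

def ai_weno_z_left(W, r=2, eps=1e-12):
    """Left-sided value W^-_{j+1/2} from the point values
    W = [W_{j-2}, W_{j-1}, W_j, W_{j+1}, W_{j+2}] (extra points ignored)."""
    Wm2, Wm1, W0, Wp1, Wp2 = W[:5]

    # Parabolic interpolants evaluated at x_{j+1/2}
    P0 = 3/8*Wm2 - 5/4*Wm1 + 15/8*W0
    P1 = -1/8*Wm1 + 3/4*W0 + 3/8*Wp1
    P2 = 3/8*W0 + 3/4*Wp1 - 1/8*Wp2

    # Smoothness indicators beta_k
    b0 = 13/12*(Wm2 - 2*Wm1 + W0)**2 + 1/4*(Wm2 - 4*Wm1 + 3*W0)**2
    b1 = 13/12*(Wm1 - 2*W0 + Wp1)**2 + 1/4*(Wm1 - Wp1)**2
    b2 = 13/12*(W0 - 2*Wp1 + Wp2)**2 + 1/4*(3*W0 - 4*Wp1 + Wp2)**2

    tau5 = abs(b2 - b0)
    Wbar = (Wm2 + Wm1 + W0 + Wp1 + Wp2) / 5
    mu = sum(abs(w - Wbar) for w in (Wm2, Wm1, W0, Wp1, Wp2)) / 5 + 1e-40

    d = (1/16, 5/8, 5/16)                           # linear weights d_k
    alpha = [dk * (1 + (tau5 / (bk + eps * mu**2))**r)
             for dk, bk in zip(d, (b0, b1, b2))]
    omega = [a / sum(alpha) for a in alpha]
    return omega[0]*P0 + omega[1]*P1 + omega[2]*P2
```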
### 1-D Local Characteristic Decomposition
Figure 4.13: Example 7: Same as in Figure 4.12, but at larger times \(t=0.0405\) and 0.045.

In §2.3, the Ai-WENO-Z interpolant is applied to the local characteristic variables, which are obtained using the LCD. To this end, we first rewrite the studied \(\gamma\)-based multifluid system in terms of the primitive variables \(\mathbf{V}=(\rho,u,p,\Gamma,\Pi)^{\top}\):
\[\mathbf{V}_{t}+\mathcal{A}\mathbf{V}_{x}=\mathbf{0},\quad\mathcal{A}:=\begin{pmatrix}u&\rho& 0&0&0\\ 0&u&\dfrac{1}{\rho}&0&0\\ 0&\gamma(p+\pi_{\infty})&u&0&0\\ 0&0&0&u&0\\ 0&0&0&0&u\end{pmatrix},\]
and introduce the locally averaged matrices
\[\widehat{\mathcal{A}}_{j+\frac{1}{2}}:=\begin{pmatrix}\hat{u}&\hat{\rho}&0&0& 0\\ 0&\hat{u}&\dfrac{1}{\hat{\rho}}&0&0\\ 0&\hat{\gamma}(\hat{p}+\hat{\pi}_{\infty})&\hat{u}&0&0\\ 0&0&0&\hat{u}&0\\ 0&0&0&0&\hat{u}\end{pmatrix},\] (A.2)
where \(\hat{(\cdot)}\) stands for the following averages (see [23]):
\[\hat{\rho} =\sqrt{\rho_{j}\rho_{j+1}},\quad\hat{u}=\frac{\sqrt{\rho_{j}}u_{j }+\sqrt{\rho_{j+1}}u_{j+1}}{\sqrt{\rho_{j}}+\sqrt{\rho_{j+1}}},\quad\hat{p}= \frac{\sqrt{\rho_{j}}p_{j}+\sqrt{\rho_{j+1}}p_{j+1}}{\sqrt{\rho_{j}}+\sqrt{ \rho_{j+1}}},\] \[\hat{\gamma} =\frac{\sqrt{\rho_{j}}\gamma_{j}+\sqrt{\rho_{j+1}}\gamma_{j+1}}{ \sqrt{\rho_{j}}+\sqrt{\rho_{j+1}}},\quad\hat{\pi}_{\infty}=\frac{\sqrt{\rho_{j }}(\pi_{\infty})_{j}+\sqrt{\rho_{j+1}}(\pi_{\infty})_{j+1}}{\sqrt{\rho_{j}}+ \sqrt{\rho_{j+1}}},\]
where \(\gamma_{j}=1+1/\Gamma_{j}\) and \((\pi_{\infty})_{j}=\Pi_{j}/(1+\Gamma_{j})\).
We then compute the matrix \(R_{j+\frac{1}{2}}\) composed of the right eigenvectors of \(\widehat{\mathcal{A}}_{j+\frac{1}{2}}\) and obtain
\[R_{j+\frac{1}{2}}=\begin{pmatrix}\dfrac{1}{\hat{c}^{2}}&0&0&1&\dfrac{1}{\hat{ c}^{2}}\\ -\dfrac{1}{\hat{\rho}\hat{c}}&0&0&0&\dfrac{1}{\hat{\rho}\hat{c}}\\ 1&0&0&0&1\\ 0&0&1&0&0\\ 0&1&0&0&0\end{pmatrix}\quad\text{and}\quad R_{j+\frac{1}{2}}^{-1}=\begin{pmatrix} 0&-\dfrac{\hat{\rho}\hat{c}}{2}&\dfrac{1}{2}&0&0\\ 0&0&0&0&1\\ 0&0&0&1&0\\ 1&0&-\dfrac{1}{\hat{c}^{2}}&0&0\\ 0&\dfrac{\hat{\rho}\hat{c}}{2}&\dfrac{1}{2}&0&0\end{pmatrix},\] (A.3)
where \(\hat{c}=\sqrt{\hat{\gamma}(\hat{p}+\hat{\pi}_{\infty})/\hat{\rho}}\). Notice that all of the \(\hat{(\cdot)}\) quantities in (A.2)-(A.3) should carry a subscript index, that is, \(\hat{(\cdot)}=\hat{(\cdot)}_{j+\frac{1}{2}}\), but we have omitted it for the sake of brevity for all of the quantities except for \(\widehat{\mathcal{A}}_{j+\frac{1}{2}}\).
Finally, we introduce the local characteristic variables in the neighborhood of \(x=x_{j+\frac{1}{2}}\):
\[\mathbf{W}_{j+\ell}=R_{j+\frac{1}{2}}^{-1}\mathbf{V}_{j+\ell},\quad\ell=-2,\ldots,3,\]
and apply the Ai-WENO-Z interpolation to every component of \(\mathbf{W}\) to obtain \(\mathbf{W}_{j+\frac{1}{2}}^{\pm}\), and then we end up with
\[\mathbf{V}_{j+\frac{1}{2}}^{\pm}=R_{j+\frac{1}{2}}\mathbf{W}_{j+\frac{1}{2}}^{\pm}.\]
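As an illustration of the full 1-D procedure, here is a minimal sketch combining the density-weighted averages, the eigenvector matrix from (A.3), and the componentwise interpolation; `ai_weno_z_left` refers to the sketch above, and a mirror-symmetric right-sided counterpart is assumed.

```python
import numpy as np

def lcd_interpolate(V, j, interp_minus, interp_plus):
    """Sketch of the LCD-based interpolation at x_{j+1/2}. V has shape (N, 5)
    with rows (rho, u, p, Gamma, Pi); interp_minus/interp_plus act on the
    six-point stencil of a single characteristic variable."""
    rho1, u1, p1, G1, Pi1 = V[j]
    rho2, u2, p2, G2, Pi2 = V[j + 1]
    s1, s2 = np.sqrt(rho1), np.sqrt(rho2)
    avg = lambda a, b: (s1 * a + s2 * b) / (s1 + s2)  # density-weighted average

    rho_h = np.sqrt(rho1 * rho2)
    u_h, p_h = avg(u1, u2), avg(p1, p2)
    g_h = avg(1 + 1 / G1, 1 + 1 / G2)
    pi_h = avg(Pi1 / (1 + G1), Pi2 / (1 + G2))
    c_h = np.sqrt(g_h * (p_h + pi_h) / rho_h)

    # Right eigenvector matrix from (A.3); its inverse is also given in closed form
    R = np.array([[1 / c_h**2,          0, 0, 1, 1 / c_h**2],
                  [-1 / (rho_h * c_h),  0, 0, 0, 1 / (rho_h * c_h)],
                  [1,                   0, 0, 0, 1],
                  [0,                   0, 1, 0, 0],
                  [0,                   1, 0, 0, 0]])
    R_inv = np.linalg.inv(R)

    W = np.array([R_inv @ V[j + l] for l in range(-2, 4)])  # characteristic vars
    W_minus = np.array([interp_minus(W[:, k]) for k in range(5)])
    W_plus = np.array([interp_plus(W[:, k]) for k in range(5)])
    return R @ W_minus, R @ W_plus
```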
## Appendix B 2-D Local Characteristic Decomposition
In this appendix, we extend the 1-D LCD described in Appendix A.1 to the system (3.12), which can be rewritten in terms of the primitive variables \(\mathbf{V}=(\rho,u,v,p,\Gamma,\Pi)^{\top}\) as
\[\mathbf{V}_{t}+\mathcal{A}\mathbf{V}_{x}=\mathbf{0},\quad\mathcal{A}:=\begin{pmatrix}u& \rho&0&0&0&0\\ 0&u&0&\dfrac{1}{\rho}&0&0\\ 0&0&u&0&0&0\\ 0&\gamma(p+\pi_{\infty})&0&u&0&0\\ 0&0&0&0&u&0\\ 0&0&0&0&0&u\end{pmatrix},\]
and introduce the locally averaged matrices
\[\widehat{\mathcal{A}}_{j+\frac{1}{2},k}=\begin{pmatrix}\hat{u}&\hat{\rho}&0&0& 0&0\\ 0&\hat{u}&0&\dfrac{1}{\hat{\rho}}&0&0\\ 0&0&\hat{u}&0&0&0\\ 0&\hat{\gamma}(\hat{p}+\hat{\pi}_{\infty})&0&\hat{u}&0&0\\ 0&0&0&0&\hat{u}&0\\ 0&0&0&0&0&\hat{u}\end{pmatrix},\]
where \((\hat{\cdot})\) stands for the following averages:
\[\hat{\rho} =\sqrt{\rho_{j,k}\,\rho_{j+1,k}},\quad\hat{u}=\frac{\sqrt{\rho_{j,k}}u_{j,k}+\sqrt{\rho_{j+1,k}}u_{j+1,k}}{\sqrt{\rho_{j,k}}+\sqrt{\rho_{j+1,k}} },\quad\hat{p}=\frac{\sqrt{\rho_{j,k}}p_{j,k}+\sqrt{\rho_{j+1,k}}p_{j+1,k}}{ \sqrt{\rho_{j,k}}+\sqrt{\rho_{j+1,k}}},\] \[\hat{\gamma} =\frac{\sqrt{\rho_{j,k}}\,\gamma_{j,k}+\sqrt{\rho_{j+1,k}}\, \gamma_{j+1,k}}{\sqrt{\rho_{j,k}}+\sqrt{\rho_{j+1,k}}},\quad\hat{\pi}_{\infty} =\frac{\sqrt{\rho_{j,k}}(\pi_{\infty})_{j,k}+\sqrt{\rho_{j+1,k}}(\pi_{\infty} )_{j+1,k}}{\sqrt{\rho_{j,k}}+\sqrt{\rho_{j+1,k}}},\]
with \(\gamma_{j,k}=1+1/\Gamma_{j,k}\) and \((\pi_{\infty})_{j,k}=\Pi_{j,k}/(1+\Gamma_{j,k})\).
We then compute the matrices \(R_{j+\frac{1}{2},k}\) and \(R_{j+\frac{1}{2},k}^{-1}\) such that the matrix \(R_{j+\frac{1}{2},k}^{-1}\widehat{\mathcal{A}}_{j+\frac{1}{2},k}R_{j+\frac{1}{ 2},k}\) is diagonal and obtain
\[R_{j+\frac{1}{2},k}=\begin{pmatrix}\dfrac{1}{\hat{c}^{2}}&0&0&0&1&\dfrac{1}{\hat{c}^{2}}\\ -\dfrac{1}{\hat{\rho}\hat{c}}&0&0&0&0&\dfrac{1}{\hat{\rho}\hat{c}}\\ 0&0&0&1&0&0\\ 1&0&0&0&0&1\\ 0&0&1&0&0&0\\ 0&1&0&0&0&0\end{pmatrix}\quad\text{and}\quad R_{j+\frac{1}{2},k}^{-1}=\begin{pmatrix}0&-\dfrac{\hat{\rho}\hat{c}}{2}&0&\dfrac{1}{2}&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&0&1&0&0&0\\ 1&0&0&-\dfrac{1}{\hat{c}^{2}}&0&0\\ 0&\dfrac{\hat{\rho}\hat{c}}{2}&0&\dfrac{1}{2}&0&0\end{pmatrix}.\]
As in Appendix A.1, we have omitted the \((j+\frac{1}{2},k)\) indices for all of the \((\hat{\cdot})\) quantities except for \(\widehat{\mathcal{A}}_{j+\frac{1}{2},k}\).
Finally, given the matrices \(R_{j+\frac{1}{2},k}^{-1}\) and \(R_{j+\frac{1}{2},k}\), we introduce the local characteristic variables in the neighborhood of \((x,y)=(x_{j+\frac{1}{2}},y_{k})\):
\[\mathbf{W}_{j+\ell,k}=R_{j+\frac{1}{2},k}^{-1}\mathbf{V}_{j+\ell,k},\quad\ell=-2,\ldots,3,\]
apply the Ai-WENO-Z interpolation to every component of \(\mathbf{W}\) to obtain \(\mathbf{W}_{j+\frac{1}{2},k}^{\pm}\), and end up with
\[\mathbf{V}_{j+\frac{1}{2},k}^{\pm}=R_{j+\frac{1}{2},k}\mathbf{W}_{j+\frac{1}{2},k}^{ \pm}.\]
Notice that the point values \(\mathbf{V}_{j,k+\frac{1}{2}}^{\pm}\) are obtained in a similar manner and we omit the details for the sake of brevity.
|
2309.03547 | Security assessment of common open source MQTT brokers and clients | Security and dependability of devices are paramount for the IoT ecosystem.
Message Queuing Telemetry Transport protocol (MQTT) is the de facto standard
and the most common alternative for those limited devices that cannot leverage
HTTP. However, the MQTT protocol was designed with no security concern since
initially designed for private networks of the oil and gas industry. Since MQTT
is widely used for real applications, it is under the lens of the security
community, also considering the widespread attacks targeting IoT devices.
Following this research direction, in this paper we present an empirical
security evaluation of several widespread implementations of MQTT system
components, namely five broker libraries and three client libraries. While the
results of our research do not capture very critical flaws, there are several
scenarios where some libraries do not fully adhere to the standard and leave
some margins that could be maliciously exploited and potentially cause system
inconsistencies. | Edoardo Di Paolo, Enrico Bassetti, Angelo Spognardi | 2023-09-07T08:08:54Z | http://arxiv.org/abs/2309.03547v1 | # Security assessment of common open source MQTT brokers and clients
###### Abstract
Security and dependability of devices are paramount for the IoT ecosystem. The _Message Queuing Telemetry Transport_ protocol (MQTT) is the de facto standard and the most common alternative for those limited devices that cannot leverage HTTP. However, the MQTT protocol was designed with no security concern, since it was initially designed for private networks of the oil and gas industry. Since MQTT is widely used for real applications, it is under the lens of the security community, also considering the widespread attacks targeting IoT devices. Following this research direction, in this paper we present an empirical security evaluation of several widespread implementations of MQTT system components, namely five broker libraries and three client libraries. While the results of our research do not capture very critical flaws, there are several scenarios where some libraries do not fully adhere to the standard and leave some margins that could be maliciously exploited and potentially cause system inconsistencies.
## 1 Introduction
The number of devices connected to the Internet has been growing very rapidly in recent years, driving a new wave of technologies and applications in various fields. One of these trends is the so-called _Internet-of-Things_: the explosion of low-cost, small/micro devices (often single-purpose, with an IP stack, an Ethernet port, and some space for programming) paved the way for a whole new spectrum of applications.
Usually, these devices have a tiny amount of resources, so that common protocols like HTTP cannot be efficiently implemented without sacrificing key features of the protocol itself (leading to non-standard implementations) or key parts of the "business logic" (i.e. the main purpose of the device). To overcome this limitation, several lightweight protocols were invented, like the MQTT (_Message Queuing Telemetry Transport_) protocol or AMQP (_Advanced Message Queuing Protocol_) [21]. When resources are severely limited (simple sensors/actuators) and the system operates in constrained environments (low-speed wireless access), the former is the preferred choice [18]. MQTT is a publish-subscribe protocol based on a simple message structure, basic features and a minimal packet size (considering the message headers). Thanks to this design, nearly all IoT devices use MQTT or similar lightweight protocols to talk to each other and communicate with the rest of the world. Also, it has undergone several standardization processes, and MQTT v. 3.1.1 and 5.0 are both ISO standards [15].
The protocol was conceived with no security concern, since it was initially designed for private networks of the oil and gas industry [20]. The adoption of the protocol has ramped up, and several statistics show that many devices use it without any protection [16]. Also, considering the privacy aspects, given its quite limited features, the MQTT protocol has no built-in encryption features; further, the use of TLS to provide secure communication channels is very limited: at the time of writing, comparing with the Shodan search engine the prevalence of exposed IoT and IIoT devices using MQTT, the number of those using port 8883 (MQTT over SSL/TLS) is 42, while the number using port 1883 (MQTT-unencrypted) is 154632 [16]. Moreover, MQTT applications keep receiving critiques, with the claim that they adopt weak protocol implementations, even if some of them, like the Mosquitto library, offer extension plugins to improve security1 (i.e. role-based authentication or Access Control List, not part of the MQTT standard).
Footnote 1: [https://mosquito.org/documentation/dynamic-security](https://mosquito.org/documentation/dynamic-security)
Since MQTT is widely used for real applications, it is under the lens of the security community, also considering the widespread attacks targeting IoT devices. Research is now shifting towards ensuring secure IoT systems, for example by implementing access control mechanisms [7], lightweight cryptography [10] or remote attestation of devices [11]. An essential aspect of this context is discovering unforeseen security risks resulting from the necessary interoperability between different implementations of MQTT libraries.
Following this research direction, in this paper we present an empirical security evaluation of several widespread implementations of MQTT system components, namely five broker libraries and three client libraries. Moreover, we also applied our security analysis to an MQTT client embedded in a real IoT device, namely a Shelly DUO Bulb. This IoT device is a remote-controlled LED light bulb. It supports Wi-Fi connectivity and acts as an MQTT subscriber to receive commands, like powering on/off or light dimming.
Our evaluation aimed to verify the responses of the components of the different libraries to different MQTT messages, to see their behaviour in situations where the standard does not clearly indicate how the message (or the connection itself) is supposed to be handled. Such mishandling might create interoperability issues or even open doors to malicious attackers. While the results of our research do not capture very critical flaws, there are several scenarios where some of the libraries do not fully adhere to the standard and leave some margins that could be maliciously exploited and potentially cause system inconsistencies.
The structure of the paper is the following: Section 2 reports the state of the art concerning the security analysis of MQTT, while in Section 3 we provide the details about our research methodology. Section 4 reports the results of our security analysis, and the last section concludes the paper with some remarks and future directions.
## 2 Related works
As the abundance of surveys suggests [17, 2, 22], security and dependability of IoT devices are paramount for the whole ecosystem. In this context, the MQTT protocol plays a determinant role. In 1999 Andy Stanford-Clark (IBM) and
Arlen Nipper (then working for Eurotech, Inc.) proposed the MQTT protocol [4] to monitor oil pipelines within the SCADA framework [8]. Since then, it has been revised in two main versions, namely 3.1.1 (last update December 2015) and 5 (last update March 2019). To date, the former is by far the most used in real applications, the latter being much newer and still not well adopted [8].
Like all the network protocols becoming a standard, it has undergone many reviews both formally and empirically. Several papers focus on MQTT formal modelling and performance analysis [14, 13, 6], others on its possible vulnerabilities, and many others on its security analysis. In this research, we focused on the security analysis and the comparison of several of the most widespread software libraries implementing the MQTT protocol. Instead of using static analysis of their code, as in [1], we performed a dynamic analysis using the _fuzzing_ methodology. In [12], the authors proposed a template-based fuzzing framework and tested its effectiveness against two implementations of MQTT. Using this method, they found some security issues: the Moquette and Mosquitto brokers were affected by a vulnerability that, if exploited, would have led to a DoS attack in specific settings. In our research, we focus not only on possible DoS attacks but also on the effects of standard violations by both brokers and clients. Moreover, our analysis applies to five different brokers, three clients and a physical device.
In [25], the authors evaluated the robustness of several MQTT implementations against a subtle family of attacks known as low-rate denial of service. Similarly to this work, a real testbed was set up, and several experiments performed, validating the open vulnerability of all the MQTT implementations.
In [3], the authors describe a new strategy to test MQTT through fuzzing and how efficient it is against the protocol. However, they do not present any results about the application of their strategy. A similar approach is adopted in [5], where the authors propose to apply fuzzing techniques in a container-based environment (Docker). This would allow a large-scale test of the MQTT protocol. However, again, the authors do not compare different implementations (they only consider Mosquitto), nor describe the type of attacks they performed.
A different methodology based on _attack patterns_[23] was proposed by _Sochor et al._ and was used to spot hidden vulnerabilities in different broker implementations. They adopted a method to randomly generate test sequences (Randoop) to challenge the different brokers, and they were able to find several failures and unhandled exceptions. Our research adopted a different methodology, tested different broker MQTT implementations, and included clients.
Another methodology to perform a dynamic analysis is model-based testing, as proposed for MQTT applications in [24]. The methodology considers using a finite state machine that verifies the properties of the software and proposes extensions to model-based tools for MQTT applications.
### MQTT overview
MQTT implements the publish-subscribe communication paradigm (Figure 1): _clients_ send messages on a _topic_ to servers (named _brokers_) that are responsible for delivering them to the interested clients, the final recipients of the messages. Brokers are, then, intermediaries that accept messages and forward a copy of each message to the clients who previously _subscribed_ for a given _topic_. A topic is
a UTF-8 string obtained by joining one or more topic levels with the slash character, like in /home/basement/kitchen/temperature/, and client subscriptions can be made to a topic or part of it, thanks to the use of wildcards, like in /home/basement/#.
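As an illustration of the wildcard semantics, the sketch below shows how a broker could match a topic against a subscription; it is a simplification of the standard (e.g. it does not enforce that # may appear only as the last level).

```python
def topic_matches(sub, topic):
    """Check whether a topic matches a subscription containing MQTT
    wildcards: '+' matches one level, '#' matches all remaining levels."""
    sub_levels = sub.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == "#":                        # multi-level wildcard: match rest
            return True
        if i >= len(top_levels):
            return False
        if s != "+" and s != top_levels[i]:
            return False
    return len(sub_levels) == len(top_levels)

assert topic_matches("/home/basement/#", "/home/basement/kitchen/temperature")
assert not topic_matches("/home/+", "/home/basement/kitchen")
```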
In a general MQTT session, a client establishes a connection with a broker (CONNECT-CONNACK exchange), subscribes to one or several topics (SUBSCRIBE-SUBACK exchange), publishes contents (PUBLISH-PUBACK exchange), receives other client contents (PUBLISH or PUBLISH-PUBREC-PUBREL exchange, according to the QoS) and terminates the session (client DISCONNECT). The CONNECT packet can implement an authentication mechanism, based on username and password. All the exchanges happen using a clear text TCP session on port 1883 or, if TLS is used, using an encrypted session on port 8883. Encrypted exchanges are mainly used when authentication is enforced, so that username and password are protected against eavesdropping.
In addition to the topic, any message also has a _Quality-of-Service_ value (_QoS_), taken in the range 0-2. A QoS equal to 0 corresponds to no guarantees: the message can be lost or delivered multiple times. A client sending a message with QoS equal to 1 requires that the message should never be discarded, while it might be delivered multiple times; in this case the sender stores the message until it receives back a PUBACK packet that ensures reception. Similarly, with a QoS set to 2, the message should never be discarded and it should be delivered exactly one time; in this case the client will wait for a PUBREC packet and, once received, it will discard the PUBLISH and send a PUBREL packet. The last packet that the client will receive in this exchange is the PUBCOMP, which releases the id of the PUBLISH for reuse. It is important to highlight that the publisher QoS is not associated with the subscriber QoS - unless the implementation supports this non-standardized feature.
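To fix ideas, the sender-side QoS 2 exchange can be sketched as the following toy state machine; it models the protocol logic only, not any specific library.

```python
class QoS2Sender:
    """Toy sender-side state machine for the QoS 2 PUBLISH exchange:
    PUBLISH -> PUBREC -> PUBREL -> PUBCOMP, then the packet id is reusable."""
    def __init__(self, packet_id):
        self.packet_id, self.state = packet_id, "PUBLISH_SENT"

    def on_packet(self, ptype):
        if self.state == "PUBLISH_SENT" and ptype == "PUBREC":
            self.state = "PUBREL_SENT"   # message stored by broker; send PUBREL
            return "PUBREL"
        if self.state == "PUBREL_SENT" and ptype == "PUBCOMP":
            self.state = "DONE"          # packet id released for reuse
            return None
        raise ValueError(f"unexpected {ptype} in state {self.state}")

s = QoS2Sender(packet_id=1)
assert s.on_packet("PUBREC") == "PUBREL"
assert s.on_packet("PUBCOMP") is None
```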
Figure 1: Typical MQTT architecture: IoT devices (clients) publish their messages to the broker. Subscribers ask the broker to receive only those messages with topics they subscribe. Broker relays (publishes) to each subscriber only messages with subscribed topics.
## 3 Methodology
The purpose of our research was to compare the behaviour of different implementations of MQTT. The first step has been finding and setting up the most popular open-source brokers and client libraries that people use to manage their devices or develop common software solutions. To determine the popularity, we took into account the number of stars and forks on GitHub repositories and the number of blog posts citing the examined brokers.
We focused only on open source libraries, namely: _Mosquitto, EMQ X, HiveMQ, Moquette_ and _Aedes_ for the brokers, and _paho, mqttools and mqtt.js_ for the clients. We will discuss the brokers and the clients in Section 4.1 and in Section 4.2, respectively. Some of these have thousands of instances running in "production" environments, in common consumer and business-to-business solutions. We also tested a popular low-cost _Internet-of-Things_ device, namely the Shelly DUO Bulb (Section 4.3).
The next step has been to evaluate the type of tests to apply, considering the MQTT standard specification, version 3.1.1. We specifically looked for undefined behaviours, unspecified states or other missing information about message handling. Also, we looked for parts of the standard that might lead to a wrong implementation (e.g. expected actions by the broker/client that are implied but not specified or not clearly specified). This allowed us to focus our testing on a restricted subset of cases, as explained in Section 4.
We created different sets of experiments to find possible anomalies in MQTT implementations, developing our own fuzzer written in Python with the help of the twisted library2. Our custom fuzzer allowed us to manage different streams correctly and send custom packets: for example, we could change every bit of the packets to see the brokers' behaviour even in the presence of malformed packets. Standard, common MQTT libraries, instead, implement the state machines that are expected in some parts of the protocol (e.g. QoS 2): a straightforward use of such libraries would not allow arbitrary changes in the flow of the messages, like out-of-order messages. Each experiment has been codified in a _JSON_ file that specifies the sequence of actions or packets that the test should run against the software under test, and the final behaviour of the involved parties has been logged and analyzed.
Footnote 2: [https://github.com/twisted/twisted](https://github.com/twisted/twisted)
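As an illustration of the kind of packet-level control such a fuzzer needs, the sketch below builds a raw MQTT 3.1.1 PUBLISH packet without validating any field; the experiment structure shown is hypothetical, since the paper does not give the actual JSON schema.

```python
import struct

# Hypothetical experiment description (the actual schema is not given here)
experiment = [
    {"type": "PUBLISH", "topic": "a/b", "payload": "x", "qos": 2, "packet_id": 1},
    {"type": "PUBLISH", "topic": "a/b", "payload": "y", "qos": 1, "packet_id": 1},
    {"type": "PUBREL",  "packet_id": 1},
]

def raw_publish(topic, payload, qos, packet_id):
    """Raw MQTT 3.1.1 PUBLISH packet. qos and packet_id are NOT validated,
    which is exactly what allows sending malformed or out-of-order packets.
    (The remaining length is encoded as a single byte, so it must be < 128.)"""
    var = struct.pack("!H", len(topic)) + topic.encode()
    if qos > 0:
        var += struct.pack("!H", packet_id)   # packet id follows the topic
    var += payload.encode()
    return bytes([0x30 | (qos << 1), len(var)]) + var
```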
## 4 Experimental results
### Brokers
A broker is a fundamental component in an MQTT architecture. Its job is to accept messages from clients (acting as "publishers") and then forward them to all clients with subscriptions ("subscribers") matching the topic of the message. This loosely coupled architecture allows the clients to not communicate directly with each other.
Modern brokers support many concurrent connections and messages per second. A flaw in message state machines, packet parsing, topic logic, etc., might expose a high-impact vulnerability, which a malicious actor might exploit to launch attacks such as a _Denial-of-Service_.
We analyzed five common MQTT brokers:
* **Mosquitto3**: it is one of the most used MQTT brokers in the world. It is a single-threaded, lightweight broker written in C. This broker has been widely used thanks to its flexibility. Footnote 3: [https://mosquito.org/](https://mosquito.org/)
* **EMQ X4**: it is written in Erlang and claims to be "the Leader in Open Source MQTT Broker for IoT". Footnote 4: [https://www.emqx.io/](https://www.emqx.io/)
* **HiveMQ5**: a broker written in Java. It supports MQTT version 3.x and 5.0 and it is widely used in automation and industrial systems. We tested the Community Edition. Footnote 5: [https://www.hivemq.com/developers/community/](https://www.hivemq.com/developers/community/)
* **Moquette6**: another Java-powered open-source broker. It is very lightweight, but it is less known and less used compared to other brokers. Footnote 6: [https://github.com/moquette-io/moquette](https://github.com/moquette-io/moquette)
* **Aedes7**: a broker written in JavaScript/NodeJS. It is the successor of MoscaJS. It does not support version 5 of MQTT, but it is fully compatible with version 3.x and supports several extension libraries. Footnote 7: [https://github.com/moscajs/aedes](https://github.com/moscajs/aedes)
Each broker underwent the same set of tests specified in the next section. We performed more than 60 different experiments on a consumer-grade PC with local connections. A summary of the results is in Table 1.
#### 4.1.1 Experiments and results
**Publish QoS 2 and 1.** In this experiment, the client performs the following steps:
1. it sends a SUBSCRIBE packet with a specific topic;
2. it sends the first PUBLISH packet with a _quality of service_ 2 and with id 1 over the topic specified in the subscription;
3. it sends the second PUBLISH packet with a _quality of service_ 1, still with id 1 over the topic specified in the subscription;
4. it sends a PUBREL packet for the first packet sent.

Figure 2: Schema of the testbed for the experiments: the fuzzer, which acts as a typical client, takes in input a "JSON experiment file" containing the client's packets to the MQTT broker. The fuzzer will also receive all the PUBLISH packets sent to the broker. The MQTT client, instead, uses one of the libraries examined in Section 4.2.
We noticed different broker behaviours. Mosquitto publishes the first received packet with QoS 2, but it loses the second one, which is not published to the clients: the PUBCOMP packet is not received, and so the packet id is not available for reuse. The EMQ X broker publishes both packets; it handles the flow for the first packet and then the flow for the second one. In HiveMQ and in Moquette, the client that sends the packets receives the publication first and then the _pubcomp_ concerning the first packet. Additionally, in HiveMQ the client receives back the _pubcomp_ first and then the _pubrec_. Aedes publishes both packets, but the _pubcomp_ arrives at the client after the two publications. This behaviour was observed several times, also in the other experiments regarding the _quality of service_ described below.
**Publish QoS 2 and 0.** This experiment is very similar to the one described above, but in this case, the client performs the following steps:
1. it sends a SUBSCRIBE packet with a specific topic;
2. it sends the first PUBLISH packet with a _quality of service_ 2 and with the id 1 over the topic specified in the subscription;
3. it sends the second PUBLISH packet with a _quality of service_ 0 and with the id 1 over the topic specified in the subscription;
4. it sends a PUBREL packet for the first packet sent.
Mosquitto, in this case, publishes both packets but in reverse order: it handles the one with _quality of service_ 0 first, and then it correctly handles the whole flow regarding the first packet sent with _quality of service_ 2. EMQ X and HiveMQ maintain the order of the packets published by the client; also, in the case of HiveMQ, the client received back the _pubcomp_ first and then the _pubrec_ regarding the packet with _quality of service_ 2. Moquette behaves similarly to EMQ X, but, in this case, the _pubcomp_ arrives after the publication of the second packet. Aedes has the same behaviour as Mosquitto, but the _pubcomp_ arrives after the publication, as in the previous experiment.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Broker** & **Anomalies found** & **Security problems** & **Version** \\ \hline
Mosquitto & when handling _quality of service_ & Possible unwanted application scenarios. & 1.6.12 \\ \hline
EMQ X & when handling _quality of service_ & Possible unwanted application scenarios. & 4.2.1 \\ \hline
HiveMQ & when handling _quality of service_ & Possible unwanted application scenarios. & 2020.5 \\ \hline
Moquette & when handling _quality of service_ and _long topics_ & Possible _denial of service_ and unwanted application scenarios. & 0.13 \\ \hline
Aedes & when handling _quality of service_ and _packet references_ & Possible _denial of service_. & 0.43.0 \\ \hline
\end{tabular}
\end{table}
Table 1: Brokers test result summary. The tested versions were the latest stable, available at the time of our experiments.
**Double publish QoS 2.** In this experiment, the client performs the following steps:
1. it sends a SUBSCRIBE packet with a specific topic;
2. it sends the first PUBLISH packet with a _quality of service_ 2 and with the id 1 over the topic specified in the subscription;
3. it sends the second PUBLISH packet with a _quality of service_ 2 and with the id 1 over the topic specified in the subscription;
4. it sends a PUBREL packet for the first packet sent;
5. it sends a PUBREL packet for the second packet sent.
In Mosquitto, there is only one publication, referring to the first packet sent, but the flow regarding the _quality of service_ is properly handled. EMQ X, in this case, has the same behaviour as Mosquitto. Instead, HiveMQ and Moquette publish both packets in the correct order. In Aedes, there is a different behaviour: the broker publishes two packets, but they are the same packet, the first one sent by the client.
**Long topic.** In the MQTT standard, the maximum topic length is 65536 bytes, but we saw that in the source code of EMQ X there is a constant that represents the maximum length, and its value is 4096. So, we tried to subscribe to a topic with more than 4096 bytes. In Mosquitto the subscription to the topic is successful. Instead, HiveMQ cuts the topic to which the client is subscribing. In Moquette there is an _IOException_ and then the client disconnects. In Aedes there is a crash of both the broker and the client; in particular, the exception thrown by the experiment generated an error like "_too many words_". In EMQ X the client disconnects.
**Other experiments.** Further experiments are listed below. They have been briefly summarized, since the behaviour of all the brokers was correct.
* some experiments where the _client id_ value in the packet contains non-UTF-8-encoded characters: no anomalies. In detail, we built a connection packet with the _client id_ containing particular characters, and the experiment was handled correctly by all brokers;
* _Keep-alive_ field in the connection packet as a string: in all brokers there is the client disconnection due to a malformed packet;
* subscription (or publication) to an invalid _wildcard_: in all brokers there is the client disconnection due to an "invalid topic";
* topics and _wildcards_ encoded in _utf-16_, _zlib_, _bz2_ and _base64_: in the last three cases, there were no anomalies to report. In the _utf-16_ experiment, in all brokers the client disconnects. A particular experiment was the one with many "/" in the topic value; in _Mosquitto, EMQ X, Moquette_ and _Aedes_ there was client disconnection, while in HiveMQ the topic was cut and then the client subscribed;
* packets flood with QoS 0: all brokers handled the flood well;
* invalid protocol name (or version) in the connection packet: in all cases, the client disconnects;
* sending a _pubrel_ packet that references a publication packet that was never sent: all brokers, except for Aedes, sent back a _pubcomp_ message. In Aedes there is the client disconnection.
### Clients
In addition to the tests on brokers, we also carried out tests on client libraries available on the web. In particular, we studied three different client libraries: _paho-mqtt_8, _mqttools_9 and _mqtt.js_10; the first two are written in _Python_ while the third is written in _JavaScript_. Again, we considered metrics like the number of stars and forks on GitHub repositories. However, the experiments have not found particular anomalies. Here is a list of the tests we tried:
Footnote 8: [https://pypi.org/project/paho-mqtt/](https://pypi.org/project/paho-mqtt/)
Footnote 9: [https://pypi.org/project/mqttools/](https://pypi.org/project/mqttools/)
Footnote 10: [https://github.com/mqtjs/MQTT.js](https://github.com/mqtjs/MQTT.js)
* invalid QoS level: all libraries report an error about the QoS, blocking the sending of the packet;
* invalid _wildcard_ subscription: in this case _mqtt.js_ generates an "Invalid topic" error, while the other two libraries time out;
* _client id_ not encoded in utf-8: in _mqttools_ the client cannot connect to the broker, in _paho-mqtt_ there is a successful connection to the broker and _mqtt.js_ generates an error with the consequent client disconnection;
* publication (or subscription) to a topic with a length of more than 65536 characters: in all libraries there is the client disconnection.
### Physical device
In "home automation" the MQTT protocol is widely used as most smart devices supports it. Many software applications allow you to use the protocol to
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Library** & **Anomalies found** & **Security problems** & **Version** \\ \hline paho-mqtt & when handling subscription (or publication) to an _invalid wildcard_. & It produces an _hang_. & 1.5.1 \\ \hline MQTT.js & when handling an invalid _quality of service_. & There is a _crash_ of the client due to a _TypeError_. & 4.2.1 \\ \hline mqttools & when handling subscription (or publication) to an _invalid wildcard_ and when the _client id_ value contains not _utf-8_ characters. & It produces an _hang_ and an infinite connection loop. & 0.47.0 \\ \hline \end{tabular}
\end{table}
Table 2: Client libraries test results. The tested versions were the latest stable, available at the time of our experiments.
manage the smart devices, and one of them, for example, is _Home Assistant_; also _Amazon_, in _AWS IoT_, uses MQTT to connect the user's devices to other devices and other services.
We decided to perform the previous experiments on a physical device. In this case, we tested a _Shelly_ light bulb that supports MQTT: it is possible, for example, to turn the device on or off through specific commands sent in the local network. In this device, the protocol configuration can be done in a simple web interface, available in the local network, where the user has to specify the broker's IP and port. The username and password are not mandatory during the configuration; this could be a security issue, depending on the context. This device does not have any "anti-flood" protection for the packets it receives; for example, it is possible to turn the light bulb off and on repeatedly and quickly by sending PUBLISH packets on the specific topic with specific content. The software that runs in the light bulb is the same as in other Shelly devices, so this problem also affects them. Therefore, it is possible to send many packets that overload the device's electronic components, making it useless.
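For illustration, flooding such a device requires nothing more than a plain MQTT client; the broker address and the command topic/payload below are hypothetical placeholders, not the device's real API (paho-mqtt 1.x style).

```python
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.10", 1883)       # the broker the bulb is attached to
for _ in range(1000):
    client.publish("devices/bulb/command", "on")   # hypothetical command topic
    client.publish("devices/bulb/command", "off")
    time.sleep(0.01)                        # even modest rates stress the device
client.disconnect()
```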
To confirm the results obtained with the brokers, we performed on the device the same set of experiments previously carried out on both brokers and clients. For example, the experiment "Publish QoS 2 and 1" (discussed in Section 4.1) confirmed the expected results. In this case, we sent a "turn on" packet with QoS 2 and then a "turn off" packet with QoS 1. When Mosquitto was the broker used, the light bulb turned on but then did not turn off, while in all other cases, the light bulb turned on and then turned off.
In addition to these experiments already performed for the various brokers, we have tried to generate some _buffer overflows_, through the payload sent to the device, with a consequent _DoS_. However, the light bulb passed all tests without errors; in particular, the device ignores any form of payload other than what it expects to receive.
### Discussion
Interesting results have been obtained from the experiments carried out on brokers, clients, and the physical device. In [16], Kant et al. showed that many consumer-grade devices do not use a secure transport for MQTT (like TLS); this is due to the few resources on board (in turn, this is caused by the target price of these devices in the consumer market, which is very low). Sometimes these devices lack proper authentication protocols, again due to missing resources or insecure default configurations. These security issues could allow attackers to control devices (e.g. they could control an entire _home environment_ remotely), in some cases directly (e.g. with _Man-in-the-Middle_ attacks).
In our work, the model of the attacker includes the capability to modify the MQTT packet flow, delaying the transmission or making it out of order, or to modify the MQTT packet payloads, injecting invalid values. This capability can be exploited with limited access to the broker or to intermediate network devices, or even remotely, by using other attacks like _Distributed Denial-of-Service_ or _flooding_ against a network device in the path of the packet flow (for delaying packets, for example). Some of these vulnerabilities can be exploited through an older version of the TLS protocol itself11: for example, SSL used a vulnerable _Message Authentication Code_ until TLS [9]; vulnerabilities in TLS HMAC implementations are still found years after the standard [19].
We described bad behaviours of brokers in Section 4.1.1. Some of them can be classified as vulnerabilities, and they can lead to attacks if certain conditions hold. In fact, we saw that in some tests brokers publish messages violating the protocol state machine. An example of this is the out-of-order use of _pubcomp_ by Aedes (and other brokers), which might be used to trigger a **replay attack** if the _pubcomp_ itself is delayed or dropped by a malicious actor. This attack can disrupt or even damage some devices: for example, IoT mechanical devices can be continuously triggered until the mechanical part is over-stressed.
Another violation of the standard which leads to a vulnerability (in all brokers but Mosquitto) is the bad handling of long topics: in the MQTT standard, the maximum topic length is 65536 bytes. However, trying to publish to a very long topic (\(>\)4096 bytes) leads to a disconnection of the client. A malicious actor that can inject (even indirectly, think of user-provided information) some characters in the topic may cause a Denial of Service for that client. **Even worse, in Aedes there is a crash of the broker itself, leading to a _Denial of Service_ for the entire MQTT network**.
Some violations of the standard are so misinterpreted that each broker does a different thing: in the _Double publish QoS 2_ test, nearly all brokers (the only exception is Mosquitto) violate the "unique identifier" feature of MQTT in different ways. This violation causes no direct impact by itself, but it can be exploited if some client library handles this situation badly.
Among all brokers, our tests show that Mosquitto seems to be the strongest one in terms of adherence to the MQTT standard, and thus the safest from a security point of view.
Instead, client libraries have shown only minor issues, many of them relating to encoding errors or long topic subscription issues. Our tests show that they are quite robust, sometimes even better than some brokers.
## 5 Conclusions
MQTT is considered one of the enabling technologies for the IoT ecosystem. It is adopted by almost all IoT applications that run on devices with limited computational power, thanks to the high availability of open source libraries implementing MQTT. In this paper, we have presented an empirical study of the most popular implementations of both brokers and clients, considering the possible behaviour deviations from the standard that could lead applications to possibly inconsistent or even critical states. We also tested a physical smart device, a light bulb. The results of our experiments were notable: while almost all the considered libraries correctly handle most of the interactions, as expected, some anomalies have been detected that could be exploited to target the applications, mainly exposing them to denial of service attacks.
Given the promising results, we plan to further expand the number of considered libraries and to include in our experiments some real applications, to create proof-of-concept attacks that exploit the found anomalies. Similarly, we plan to extend the experiments to the libraries that also support the new version of the MQTT protocol, namely version 5.
## Acknowledgments
This research has been partially supported by MIUR (Italian Ministry of Education, University, and Research) under grant "Dipartimenti di eccellenza 2018-2022" of the Computer Science Department of Sapienza University of Rome.
|
2309.03221 | SPAIC: A sub-$μ$W/Channel, 16-Channel General-Purpose Event-Based
Analog Front-End with Dual-Mode Encoders | Low-power event-based analog front-ends (AFE) are a crucial component
required to build efficient end-to-end neuromorphic processing systems for edge
computing. Although several neuromorphic chips have been developed for
implementing spiking neural networks (SNNs) and solving a wide range of sensory
processing tasks, there are only a few general-purpose analog front-end devices
that can be used to convert analog sensory signals into spikes and interfaced
to neuromorphic processors. In this work, we present a novel, highly
configurable analog front-end chip, denoted as SPAIC (signal-to-spike converter
for analog AI computation), that offers a general-purpose dual-mode analog
signal-to-spike encoding with delta modulation and pulse frequency modulation,
with tunable frequency bands. The ASIC is designed in a 180 nm process. It
supports and encodes a wide variety of signals spanning 4 orders of magnitude
in frequency, and provides an event-based output that is compatible with
existing neuromorphic processors. We validated the ASIC for its functions and
present initial silicon measurement results characterizing the basic building
blocks of the chip. | Shyam Narayanan, Matteo Cartiglia, Arianna Rubino, Charles Lego, Charlotte Frenkel, Giacomo Indiveri | 2023-08-31T19:53:04Z | http://arxiv.org/abs/2309.03221v1 | SPAIC: A sub-\(\mu\)W/Channel, 16-Channel General-Purpose Event-Based Analog Front-End with Dual-Mode Encoders
###### Abstract
Low-power event-based analog front-ends (AFE) are a crucial component required to build efficient end-to-end neuromorphic processing systems for edge computing. Although several neuromorphic chips have been developed for implementing spiking neural networks (SNNs) and solving a wide range of sensory processing tasks, there are only a few general-purpose analog front-end devices that can be used to convert analog sensory signals into spikes and interfaced to neuromorphic processors. In this work, we present a novel, highly configurable analog front-end chip, denoted as "SPAIC" (signal-to-spike converter for analog AI computation), that offers a general-purpose dual-mode analog signal-to-spike encoding with delta modulation and pulse frequency modulation, with tunable frequency bands. The ASIC is designed in a 180 nm process. It supports and encodes a wide variety of signals spanning 4 orders of magnitude in frequency, and provides an event-based output that is compatible with existing neuromorphic processors. We validated the ASIC for its functions and present initial silicon measurement results characterizing the basic building blocks of the chip.
Neuromorphic, Analog Front-End (AFE), Encoder, Spiking Neural Network (SNN)
## I Introduction
Spiking Neural Networks (SNNs) represent a powerful low-power event-based computing paradigm for processing streaming data on the edge [1].
To best exploit this novel emerging computing paradigm and build a robust end-to-end SNN sensory processing pipeline, designing efficient event-based analog front-ends is paramount. Fig. 1 illustrates such an end-to-end pipeline. There are various methods of encoding analog signals to spikes [2, 3]. On the hardware front, several analog-to-spike encoders have been developed and demonstrated on silicon, either using Delta Modulation (DM) schemes [4], thereby encoding the temporal changes in the original signal with spike timing, or with Pulse-Frequency Modulation (PFM) schemes [5, 6, 7], thereby encoding the amplitude of the signal with spike rates. However, these encoders were always optimized for a specific application and for the corresponding frequency bands.
To the best of our knowledge, no general-purpose solution has been proposed to allow exploration and prototyping with existing SNN neuromorphic computing platforms. In this work, we present a highly configurable analog front-end ASIC, denoted as "SPAIC" (signal to spike converter for analog AI computation), which is compatible with existing neuromorphic processors [8, 9, 10, 11, 12], and which offers a general-purpose dual-mode (DM and PFM) analog-to-spike encoding with tunable frequency bands (see Fig. 2 for the chip micrograph).
## II ASIC architecture
SPAIC comprises 16 identical analog channels feeding a common Address Event Representation (AER) interface, as shown in Fig. 3. Each channel has four stages: a low noise amplifier section (LNA), a fourth-order flipped voltage follower (FVF) bandpass filter, a programmable gain stage (PGA), and the encoding stage. The LNA amplifies weak signals with a tunable gain of 0 to 24dB. The FVF BPF filters the amplified signal with a tunable center frequency and Q [13]. The PGA further amplifies the filtered signal with a tunable gain of up to 24dB. The amplified signal is then split into two mutually exclusive paths, where either the asynchronous delta modulator encodes the temporal derivative of the amplified signal, or the integrate-and-fire neuron [14] is enabled and the signal is encoded as a pulse-frequency-modulated spike train. All configurations of the chip are provided in three ways: a configurable on-chip bias generator that generates the necessary bias currents for all circuits in the analog front-end, an 8-bit capacitor DAC (CDAC) to tune the filter parameters, and a voltage DAC (VDAC) to set the UP and DN thresholds for the Asynchronous Delta Modulator (ADM). All three of these DACs are configurable via the Serial Peripheral Interface (SPI) protocol.

Fig. 1: Illustration of the sensory signal processing pipeline using a general purpose analog front end and SNN.

Fig. 2: SPAIC chip micrograph.
## III Circuit implementation
The SPAIC ASIC was designed and fabricated in a bulk 180 nm technology node. The chip dimensions including the seal ring are 2.5 mm\(\times\)2.5 mm. The circuit implementation of the major building blocks is described hereafter.
### _Low noise amplifiers_
For modularity reasons, both amplifiers in the analog front-end (LNA and PGA) are built on the same Operational Transconductance Amplifier (OTA) core, with minor changes adapted for noise and power. The structure of the OTA is based on a well-known wide-input-range current-mirror-type transconductance amplifier [15]. The primary reason for this design choice was to accommodate the large input changes that may or may not be present depending on the input sensor. The amplifiers are operated in closed loop as capacitive feedback amplifiers with a DC-servo loop (DSL) implemented with tunable pseudo-resistors, as shown in Fig. 3. The ratio of the feedback to input capacitance determines the overall closed-loop gain. This gain configuration is implemented as a 4-bit binary-weighted capacitance DAC, leading to 16 distinct gain values. The gains as well as the noise performance were validated on silicon and are described in Section IV.
### _Flipped Voltage Follower-based bandpass filter_
Every AFE channel has a 4th-order BPF with tunable center frequency and Q. The FVF-based filter topology shown in Fig. 3 helps achieve better noise performance due to the inherent current reuse present in the architecture.
The filter's center frequency (\(\omega_{0}\)) is tunable with the on-chip bias generator. The capacitors of this filter are implemented as an 8-bit CDAC. By configuring \(C_{1}\) and \(C_{2}\), the Q and the center frequency can be adjusted in a precise manner with a resolution of \(C_{1,2}/256\) steps, as described in Eq. (1).
\[\omega_{0}=\sqrt{\frac{gm_{1}\cdot gm_{2}}{C_{1}\cdot C_{2}}},\qquad Q=\sqrt{\frac{gm_{2}\cdot C_{2}}{gm_{1}\cdot C_{1}}} \tag{1}\]
As each filter's center frequency and Q are programmable, they can be configured as a parallel filter bank or as identical parallel electrode interface channels. The frequency response, when configured as a filter bank, measured from the functional silicon is shown in Fig. 6, Section IV.
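To illustrate how Eq. (1) and the 8-bit CDAC interact, the following minimal Python sketch computes \(\omega_{0}\) and Q for a few capacitor codes. The transconductance and capacitance values are illustrative assumptions, not the chip's actual component values.

```python
import math

def filter_params(gm1, gm2, c1, c2):
    """Center frequency (rad/s) and quality factor from Eq. (1)."""
    w0 = math.sqrt(gm1 * gm2 / (c1 * c2))
    q = math.sqrt(gm2 * c2 / (gm1 * c1))
    return w0, q

def cdac_capacitance(code, c_full):
    """8-bit CDAC: capacitance selectable in steps of c_full/256."""
    assert 1 <= code <= 256
    return c_full * code / 256

# Illustrative component values only -- not the chip's measured parameters.
gm1 = gm2 = 1e-6                  # transconductances [S]
c_full = 10e-12                   # full-scale CDAC capacitance [F]
for code in (32, 128, 256):
    c = cdac_capacitance(code, c_full)
    w0, q = filter_params(gm1, gm2, c, c)
    print(f"code={code:3d}  f0={w0 / (2 * math.pi):9.1f} Hz  Q={q:.2f}")
```

With equal transconductances and capacitances, Q stays at 1 while the code sweeps the center frequency, which matches the octave-spaced filter-bank configuration described in Section IV.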
### _Asynchronous Delta Modulation encoder_
Fig. 3: Architecture of SPAIC Analog Front-End ASIC.

The ADM is designed based on the foundation of a level-crossing ADC. The amplified and conditioned signal is compared with two known voltage thresholds (an up threshold and a down threshold) set by an on-chip voltage DAC. When the signal crosses the up threshold, an "UP" spike is generated; similarly, in the other direction, a "DOWN" spike is generated when the signal goes below the set down threshold. The DACs were simulated with Monte Carlo analysis, and the absolute accuracy (3\(\sigma\)) of the voltage thresholds was found to be \(\approx\pm\) 900 \(\mu V\). The comparator used in this ADM was designed with hysteresis [16, 17] to avoid false triggering due to fluctuations in the signal path arriving at the input of the comparator.
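The threshold-crossing behavior can be summarized with a short behavioral model. The following Python sketch (a simplification of the circuit, with illustrative threshold values) emits UP/DOWN events and rebuilds the staircase approximation used for the reconstruction in Fig. 8.

```python
import numpy as np

def adm_encode(x, delta_up, delta_dn):
    """Behavioral delta-modulation encoder: emit (sample_index, +1) when the
    signal rises delta_up above the last reference level and
    (sample_index, -1) when it falls delta_dn below it."""
    ref = x[0]
    events = []
    for i, v in enumerate(x):
        while v >= ref + delta_up:
            ref += delta_up
            events.append((i, +1))     # "UP" spike
        while v <= ref - delta_dn:
            ref -= delta_dn
            events.append((i, -1))     # "DOWN" spike
    return events

def adm_reconstruct(events, n, delta_up, delta_dn, x0=0.0):
    """Staircase reconstruction from the UP/DOWN events (cf. Fig. 8)."""
    y = np.empty(n)
    level, j = x0, 0
    for i in range(n):
        while j < len(events) and events[j][0] == i:
            level += delta_up if events[j][1] > 0 else -delta_dn
            j += 1
        y[i] = level
    return y

# Encode a 100 Hz tone and rebuild it from the spike train.
fs = 100_000
t = np.arange(0, 0.02, 1 / fs)
x = 0.1 * np.sin(2 * np.pi * 100 * t)
events = adm_encode(x, delta_up=5e-3, delta_dn=5e-3)  # thresholds: illustrative
x_hat = adm_reconstruct(events, len(x), 5e-3, 5e-3, x0=x[0])
print(len(events), "events, max error:", np.abs(x - x_hat).max())
```

Smaller thresholds give a finer reconstruction at the cost of more events, mirroring the trade-off stated in Section IV.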
### _Pulse Frequency Modulation encoder_
The PFM encoder was designed using a leaky integrate and fire neuron (LIF) circuit [14] which inherently encodes the amplitude of the input current into pulse frequency. Therefore, before being encoded into spikes, the input signal is half-wave rectified and converted into current by a wide-range transconductance amplifier. The output current of this amplifier flows into the input of the neuron which generates a spike when the input current crosses an externally set spiking threshold of the neuron.
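A behavioral model of this encoding path might look as follows; the transconductance, membrane capacitance, threshold, and leak time constant are illustrative assumptions rather than the circuit's actual values.

```python
import numpy as np

def pfm_encode(x, fs, gm=1e-9, c_mem=1e-12, v_th=0.5, tau_leak=100e-3):
    """Behavioral leaky integrate-and-fire (LIF) encoder: half-wave rectify
    the input, convert it to a current via a transconductance, integrate it
    on a leaky membrane and emit a spike whenever the membrane voltage
    crosses the threshold (then reset)."""
    dt = 1.0 / fs
    v, spikes = 0.0, []
    for i, amp in enumerate(x):
        i_in = gm * max(amp, 0.0)                  # half-wave rectification
        v += dt * (i_in / c_mem - v / tau_leak)    # leaky integration
        if v >= v_th:
            spikes.append(i)
            v = 0.0                                # reset after spike
    return spikes

# Spike rate grows roughly linearly with input amplitude (cf. Fig. 10).
fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
for a in (0.1, 0.2, 0.4):
    rate = len(pfm_encode(a * np.sin(2 * np.pi * 100 * t), fs)) / 0.1
    print(f"amplitude {a:.1f} V -> {rate:.0f} spikes/s")
```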
## IV Silicon validation and measurement results
A custom evaluation board with a Teensy microcontroller was designed to evaluate the functionality of the ASIC. The setup is shown in Fig. 4. The microcontroller communicates with the ASIC and programs its internal registers via an SPI communication protocol. The two separate encoding paths are fully independent of each other and have their own AER circuitry. The AER circuitry operates based on a well-established four-phase hand-shaking mechanism described in Fig. 4[18]. The asynchronous nature of the AER interface ensures sparse event-driven communication with other asynchronous devices.
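As an illustration of the four-phase protocol, the following minimal Python sketch traces one event transfer; the bus representation is a deliberate simplification of the asynchronous circuit, not the chip's implementation.

```python
def four_phase_transfer(address):
    """Trace one AER event through the four handshake phases."""
    bus = {"req": False, "ack": False, "data": None}
    trace = []
    bus["data"], bus["req"] = address, True   # 1: sender drives address, REQ+
    trace.append(("REQ+", dict(bus)))
    latched = bus["data"]; bus["ack"] = True  # 2: receiver latches data, ACK+
    trace.append(("ACK+", dict(bus)))
    bus["req"] = False                        # 3: sender releases REQ
    trace.append(("REQ-", dict(bus)))
    bus["ack"] = False                        # 4: receiver releases ACK
    trace.append(("ACK-", dict(bus)))         # bus idle: next event may start
    return latched, trace

event, trace = four_phase_transfer(address=7)
for phase, state in trace:
    print(phase, state)
```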
The frequency response of the amplifiers and the noise power spectral density (noise PSD) measurement of the AFE are shown in Fig. 5. The plots show the noise PSD in the 1 Hz-1 kHz band, which is mainly dominated by flicker noise. The input-referred noise is 1.4 \(\mu V\) measured at the output of the first LNA. Fig. 6 shows a measurement where the bias currents were programmed in octaves to obtain center frequencies spaced in octaves. When the filter channels are octave-spaced, 11 channels can cover a range of 100 Hz-100 kHz. When used as identical channels, all 16 curves roughly overlap each other. The filter has a dynamic range greater than 40 dB; however, reducing the gain of the first amplification stage helps to avoid distortion in the filtering stage. The silicon measurement of the signal-to-noise and distortion ratio (SNDR) of a single AFE channel reveals a dynamic range of 42 dB at the output of the PGA, as shown in Fig. 7. The encoding functionality of both the ADM and the PFM was measured on silicon. A 100 Hz pure tone was encoded with the ADM, and the UP and DOWN events were recorded. To validate the encoding, the signal was reconstructed from the spikes, as shown in Fig. 8. A finer or coarser signal reconstruction can be achieved depending on the configured voltage thresholds. The final encoding accuracy is further determined by the input-referred offset of the comparators. Similarly, the functionality of the neuron as a PFM encoder was validated by physical measurement of its membrane potential in response to a 100 Hz pure tone, as shown in Fig. 9. The red trace in Fig. 9 shows the response to a 100 mV input and the blue trace to a 400 mV input. The almost linear relationship of the spiking frequency to the input amplitude shown in Fig. 10 confirms the intended functionality of the block.

Fig. 4: Measurement setup of the ASIC with a Teensy microcontroller, communicating via a 4-phase asynchronous handshaking protocol.

Fig. 5: Frequency response of the LNA (top) and noise power spectral density (PSD) measurement (bottom).

Fig. 6: Frequency response of the bandpass filter bank.
Table I compares the performance of this work with similar event-based analog front-ends. This design can cover a programmable range from 100 Hz to 100 kHz. The AFE consumes about 800 nW with the channel processing the highest frequency (100 kHz) enabled. All other lower-frequency channels consume proportionally lower power. The power consumption was normalized to 5 kHz, and the normalized power for the SPAIC AFE is about 532 nW.
## V Conclusions
In this paper, we proposed a highly configurable general-purpose signal-to-spike encoding ASIC with a dual-mode encoding scheme, designed and fabricated in 180 nm technology.
We demonstrated a generic signal-conditioning and dual-mode encoding scheme that can be paired with a spiking neural network to build an event-based neuromorphic sensing system. The design is comparable to the state of the art in terms of dynamic range and silicon area, and improves on it in terms of operating frequency range and noise performance.
Future work includes quantitatively benchmarking the encoding schemes on a few selected applications, ranging from low-frequency biomedical and auditory signals to high-frequency ultrasonic signals, and quantifying the normalized power consumption for each selected application.
This work represents a key enabler for building an end-to-end signal acquisition pipeline for edge computing nodes with spiking neural networks.
Fig. 8: ADM spiking response to a 100Hz signal and reconstruction of the signal from spikes.
\begin{table}
\begin{tabular}{l c c c c}
 & **This Work** & **[6]** & **[4]** & **[7]** \\
Technology [nm] & 180 & 180 & 180 & 90 \\
Feature & Analog to Events & Analog to Events & Analog to Events & Analog to Events \\
Channels & 16 & 16 & 64 & 16 \\
Bandwidth [Hz] & <100–100K & 100–6K & 8–20K & 75–5K \\
Power/Channel [nW] & <3000 & <6000 & 440 & 380 \\
Normalized Power [nW] & 532 & 88 & 821 & 1600 \\
Input Referred Noise [\(\mu V\)] & 1.4 (LNA) & <8.3 & 30 & 325 \\
Dynamic Range [dB] & 54 (LNA), 42 (PGA) & <014 (LNA) & <8.0 & 45 (LNA) \\
Area/Channel [mm\(^{2}\)] & 0.69 & 0.1 & 0.26 & 0.13 \\
Functional Building Blocks & LNA, BPF, PGA, ADM & LNA, BPF, FWR, IAF & LNA, BPF, PGA, ADM & LNA, BPF, FWR, IAF \\
\end{tabular}
\end{table} TABLE I: Comparison with prior work
Fig. 10: Spike frequency response to different input amplitudes.
Fig. 7: SNDR Measurement of an AFE Channel.
Fig. 9: Neuron’s membrane potential response for different amplitudes of the input signal of the AFE. Red (100mV), Blue (400mV). |
2309.15443 | On the Cauchy Problem for the Dispersion Generalized Camassa-Holm
Equation | In this paper, we establish local well-posedness of the Cauchy problem for a
recently proposed dispersion generalized Camassa-Holm equation by using Kato's
semigroup approach for quasi-linear evolution equations. We show that for
initial data in the Sobolev space $H^{s}(\mathbb{R})$ with $s>\frac{7}{2}+p$,
the Cauchy problem is locally well-posed, where $p$ is an even real number
determined by the order of the positive differential operator $L$ corresponding
to the dispersive effect added to the Camassa-Holm equation. | Nesibe Ayhan, Nilay Duruk Mutlubas | 2023-09-27T07:07:47Z | http://arxiv.org/abs/2309.15443v1 | # On the Cauchy Problem for the Dispersion Generalized Camassa-Holm Equation
###### Abstract
In this paper, we establish local well-posedness of the Cauchy problem for a recently proposed dispersion generalized Camassa-Holm equation by using Kato's semigroup approach for quasi-linear evolution equations. We show that for initial data in the Sobolev space \(H^{s}(\mathbb{R})\) with \(s>\frac{7}{2}+p\), the Cauchy problem is locally well-posed, where \(p\) is an even real number determined by the order of the positive differential operator \(L\) corresponding to the dispersive effect added to the Camassa-Holm equation.
**Keywords:** Camassa-Holm equation, Dispersion, Local well-posedness, Semigroup Theory
**MSC classification 2010:** Primary: 35G25, 35L30, 47D60; Secondary: 35B65, 47D03.
## 1 Introduction
The nonlinear dispersive wave equation
\[u_{t}-u_{xxt}+3uu_{x}=2u_{x}u_{xx}+uu_{xxx}, \tag{1.1}\]
was introduced by Camassa and Holm [1] to model the unidirectional propagation of shallow water waves over a flat bottom. Here, \(u(x,t)\) represents the fluid velocity at time \(t\) and in the spatial direction \(x\). The equation is known as the Camassa-Holm (CH) equation, whose various generalizations have appeared in the literature in recent years. These generalizations are usually done qualitatively on the structure of the equation, without paying much attention on the physical meaning and derivation.
The Korteweg-de Vries (KdV) and Benjamin-Bona-Mahony (BBM) equations are canonical wave equations which characterize unidirectional dispersive wave propagation in continua having different physical properties. Although the CH equation was proposed primarily to model the propagation of waves of moderate amplitude in shallow water, it is expected to arise as a model equation in other continua where dispersive and nonlinear effects are balanced properly. As a matter of fact, there are studies in this direction (for instance, see [5] about nonlinear waves propagating along a circular cylindrical rod composed of a compressible elastic material, and [2] about nonlinear waves propagating in a pre-stressed, thin elastic plate composed of a compressible hyperelastic material). Since the linear dispersive effect can occur in different forms for different continua due to various reasons such as nonlocal effects, inhomogeneity, anisotropy, multidimensionality, etc., and this changes the inertia term \(u_{t}-u_{xxt}\), considering a generalized form of the inertia term, writing the corresponding equation, and analyzing the new model are interesting research problems. Therefore, in this paper, we focus on the dispersive nature of the CH equation and analyze how the behavior of the solutions changes when the dispersive effect is generalized.
When the terms in the CH equation are examined in detail, the \(uu_{x}\) term denotes nonlinear steepening, the \(u_{xxt}\) term denotes the linear dispersion effect, and the \(2u_{x}u_{xx}+uu_{xxx}\) terms denote the nonlinear dispersion effect. When the momentum density \(m=(1-\partial_{x}^{2})u\) is defined, the CH equation becomes
\[m_{t}+m_{x}u+2mu_{x}=0. \tag{1.2}\]
There are generalizations of the CH equation for different momentum density forms in the literature. Important examples of these can be given as follows:
\((i)\) Hunter and Saxton in [10] considered \(m=-\partial_{x}^{2}u\).
\((ii)\) Holm et al. in [9] considered \(m=(1-\alpha^{2}\partial_{x}^{2})u\), where \(\alpha\) is a constant.
\((iii)\) Khesin et al. in [12] introduced a \(\mu\)-version of Camassa-Holm equation as follows
\[m_{t}+2mu_{x}+m_{x}u=0,\ \ \ \ m=(\mu-\partial_{x}^{2})u,\]
where \(u(x,t)\) is a time-dependent function on the unit circle \(\mathbb{S}=\mathbb{R}/\mathbb{Z}\) and \(\mu(u)=\int_{\mathbb{S}}udx\) denotes its mean. This equation describes evolution of rotators in liquid crystals with external magnetic field and self-interaction. It is also studied in [6], [8], [22], [23].
For the periodic case, Wang [21] considered \(m=\mu(u)-u_{xx}+u_{xxxx}\). Moreover, Wang studied the modified \(\mu\)-version of Camassa-Holm equation in [19] as follows
\[m_{t}+\big{(}(2\mu(u)u-u_{x}^{2})m\big{)}_{x}=0.\]
\((iv)\) Wang considered \(m=\mu(u)+u_{xxxx}\) in [20].
\((v)\) Ding et al. considered \(m=u-u_{xx}+u_{xxxx}\) in [7].
\((vi)\) There are also higher order forms of the CH equation, where \(m=(1-\partial_{x}^{2})^{k}u\) for positive integer \(k\)[4], which describe exponential curves of the manifold of smooth orientation-preserving diffeomorphisms of the unit circle in the plane. Studies for \(k=2\) can be found in [14], [15], [17].
\((vii)\) For \(r\geq 1\), Camassa-Holm system with two components, where \(m=(1-\partial_{x}^{2})^{r}u\) is studied in [3].
The common feature of the examples in the literature is that they put different effects on linear and non-linear dispersive terms to observe the results. In this paper, our main aim is to study
\[m_{t}+bm_{x}u+amu_{x}=0,\quad a,b>0, \tag{1.3}\]
with the following form of momentum density:
\[m=(1-L\partial_{x}^{2})u. \tag{1.4}\]
Here, \(L\) is a general differential operator in spatial variable \(x\) whose order is an even real number \(p\). With this momentum density, the dispersive effect in (1.1) is generalized in (1.3). Note that choosing \(L\) as the identity operator with \(a=2\) and \(b=1\), (1.3) corresponds to the CH equation given by (1.1). Furthermore, choosing \(L\) as \(\alpha^{2}\) multiple of identity operator corresponds to the example \((ii)\), as \(Id-\partial_{x}^{2}\) corresponds to the example \((v)\), as \(2-\partial_{x}^{2}\) corresponds to the example \((vi)\).
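Indeed, for \(L=\mathrm{Id}\), \(a=2\), \(b=1\), a direct computation with \(m=(1-\partial_{x}^{2})u\) confirms the reduction:

\[m_{t}+um_{x}+2mu_{x}=(u_{t}-u_{xxt})+u(u_{x}-u_{xxx})+2(u-u_{xx})u_{x}=u_{t}-u_{xxt}+3uu_{x}-2u_{x}u_{xx}-uu_{xxx},\]

which is exactly the left-hand side of (1.1).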
The CH equation with the generalized dispersion defined in (1.4) is proposed here for the first time in the literature. In the equation (1.3), both the linear and the nonlinear dispersion effects have changed. In terms of qualitative analysis, this makes a big difference, since the operator \(L\) is given in closed form and also acts on the nonlinear terms in the equation.
This paper presents the dispersion generalized Camassa-Holm equation given by (1.3)-(1.4) and proves local well-posedness of the solutions for the corresponding Cauchy problem. It can be considered as a non-local and nonlinear dispersive partial differential equation and a mathematical generalization of the classical Camassa-Holm equation rather than a physical generalization. We prove that the Cauchy problem is locally well-posed on the real line for the initial data in Sobolev space \(H^{s}(\mathbb{R})=H^{s}\), \(s>\frac{7}{2}+p\). The even real number \(p\) is the order of the general differential operator \(L\) which appears in closed form.
Our proof is based on Kato's semigroup approach for quasilinear equations. For this reason, in Section 2 we present a short review of the theorem we rely on and establish local well-posedness for the dispersion generalized Camassa-Holm equation. We make use of commutator operators to obtain a suitable form of the equation for Kato's semigroup approach. The main reason for rewriting the equation is that \(L\) appears in closed form also among the nonlinear terms, so the usual differentiation rules do not apply.
In Section 3, we compare the results for Camassa-Holm type equations with those of the dispersion generalized Camassa-Holm equation. The changes in the dispersive effect, the differences in their non-local forms, the initial data classes chosen for the corresponding Cauchy problems are discussed.
We end the paper with Section 4 in which we provide open problems to be discussed. According to mathematical analysis questions for the Camassa-Holm equation appearing in the literature, we can say that further qualitative analysis is also possible for the dispersion generalized Camassa-Holm equation, such as global well-posedness and finite time blow-up.
## 2 Local Well-posedness
### Semigroup Approach
Consider the abstract quasi-linear evolution equation in the Hilbert space \(X\):
\[u_{t}+A(u)u=f(u),\ \ \ \ t\geq 0,\ \ \ \ \ u(0)=u_{0}. \tag{2.1}\]
Let \(Y\) be a second Hilbert space such that \(Y\) is continuously and densely injected into \(X\) and let \(S:Y\to X\) be a topological isomorphism. Assume that
1. For any given \(r>0\) it holds that for all \(u\in\mathrm{B}_{r}(0)\subseteq Y\) (the ball around the origin in \(Y\) with radius \(r\)), the linear operator \(A(u)\colon X\to X\) generates a strongly continuous semigroup \(T_{u}(t)\) in \(X\) which satisfies \[\left\|T_{u}(t)\right\|_{\mathcal{L}(X)}\leq\mathrm{e}^{\omega_{r}t}\ \ \ \text{for all}\ \ \ t\in[0,\infty)\] for a uniform constant \(\omega_{r}>0\).
2. \(A\) maps \(Y\) into \(\mathcal{L}(Y,X)\), more precisely the domain \(D(A(u))\) contains \(Y\) and the restriction \(A(u)|_{Y}\) belongs to \(\mathcal{L}(Y,X)\) for any \(u\in Y\). Furthermore \(A\) is Lipschitz continuous in the sense that for all \(r>0\) there exists a constant \(C_{1}\) which only depends on \(r\) such that \[\left\|A(u)-A(v)\right\|_{\mathcal{L}(Y,X)}\leq C_{1}\left\|u-v\right\|_{X}\] for all \(u,\ v\) inside \(\mathrm{B}_{r}(0)\subseteq Y\).
3. For any \(u\in Y\) there exists a bounded linear operator \(B(u)\in\mathcal{L}(X)\) satisfying \(B(u)=SA(u)S^{-1}-A(u)\) and \(B\colon Y\to\mathcal{L}(X)\) is uniformly bounded on bounded sets in \(Y\). Furthermore for all \(r>0\) there exists a constant \(C_{2}\) which depends only on \(r\) such that \[\left\|B(u)-B(v)\right\|_{\mathcal{L}(X)}\leq C_{2}\left\|u-v\right\|_{Y}\] for all \(u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\).
* For all \(t\in[0,\infty)\), \(f\) is uniformly bounded on bounded sets in \(Y\). Moreover, the map \(f\colon Y\to Y\) is locally \(X\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{3}>0\), depending only on \(r\), such that \[\|f(u)-f(v)\|_{X}\leq C_{3}\,\|u-v\|_{X}\quad\text{for all }u,\ v\in\mathrm{B}_{r}(0)\subseteq Y\] and locally \(Y\)-Lipschitz continuous in the sense that for every \(r>0\) there exists a constant \(C_{4}>0\), depending only on \(r\), such that \[\|f(u)-f(v)\|_{Y}\leq C_{4}\,\|u-v\|_{Y}\quad\text{for all }u,\ v\in\mathrm{B}_{r}(0)\subseteq Y.\]
**Theorem 2.1**.: _[_11_]_ _Assume that (A1)-(A4) hold. Then for given \(u_{0}\in Y\), there is a maximal time of existence \(T>0\), depending on \(u_{0}\), and a unique solution \(u\) to (2.1) in \(X\) such that_
\[u=u(u_{0},.)\in C([0,T),Y)\cap C^{1}([0,T),X).\]
_Moreover, the solution depends continuously on the initial data, i.e. the map \(u_{0}\to u(u_{0},.)\) is continuous from \(Y\) to \(C([0,T),Y)\cap C^{1}([0,T),X).\)_
### Local Well-posedness Theorem
In this subsection, we apply Kato's semigroup approach to establish local well-posedness for the Cauchy problem associated to the generalized Camassa-Holm equation:
\[\left\{\begin{aligned} & u_{t}-L\partial_{x}^{2}u_{t}+(a+b)uu_{x}-au_{x }L\partial_{x}^{2}u-buL\partial_{x}^{2}u_{x}=0,& t>0,\ x\in\mathbb{R},\\ & u(x,0)=u_{0}(x),& x\in\mathbb{R},\end{aligned}\right. \tag{2.2}\]
where \(a\) and \(b\) are positive constants, \(L\) is a positive operator with even order \(p\). Note that we rewrite the equation (1.3) for which (1.4) is valid. To construct a non-local form of this equation, we use the usual commutator of two operators \([.,.]\):
\[[L\partial_{x}^{2},u]u_{x}=L\partial_{x}^{2}(uu_{x})-uL\partial_{x}^{2}u_{x}.\]
Then,
\[(a+b)[L\partial_{x}^{2},u]u_{x}=(a+b)L\partial_{x}^{2}(uu_{x})-(a+b)uL\partial_{x}^{2}u_{x}=(a+b)L\partial_{x}^{2}(uu_{x})-auL\partial_{x}^{2}u_{x}-buL\partial_{x}^{2}u_{x}.\]
So, we can write
\[(a+b)uu_{x}-buL\partial_{x}^{2}u_{x}=(a+b)(1-L\partial_{x}^{2})(uu_{x})+(a+b)[ L\partial_{x}^{2},u]u_{x}+auL\partial_{x}^{2}u_{x}.\]
Then, we have
\[(1-L\partial_{x}^{2})u_{t}+(a+b)(1-L\partial_{x}^{2})(uu_{x})+(a+b)[L\partial_{x}^ {2},u]u_{x}+auL\partial_{x}^{2}u_{x}-au_{x}L\partial_{x}^{2}u=0.\]
Also, it can be seen that
\[auL\partial_{x}^{2}u_{x}-au_{x}L\partial_{x}^{2}u =a(uL\partial_{x}^{2}u_{x}-u_{x}L\partial_{x}^{2}u)\] \[=a(uL\partial_{x}^{2}u_{x}-(L\partial_{x}^{2}u)u_{x})\] \[=a(uL\partial_{x}^{2}-(L\partial_{x}^{2}u))u_{x}\] \[=-a[L\partial_{x}^{2},u]u_{x}.\]
Then, the equation (2.2) becomes
\[(1-L\partial_{x}^{2})u_{t}+(a+b)(1-L\partial_{x}^{2})(uu_{x})+b[L\partial_{x} ^{2},u]u_{x}=0.\]
With \(\Gamma^{s}=(1-L\partial_{x}^{2})^{s/(p+2)}\), this equation takes the following quasi-linear form:
\[u_{t}+A(u)u=f(u),\]
where
\[A(u)=(a+b)u\partial_{x}+b\Gamma^{-(p+2)}[L\partial_{x}^{2},u]\partial_{x}=r( u)\partial_{x}, \tag{2.3}\]
and
\[f(u)=0. \tag{2.4}\]
Here, notice that the operator \(L\) is in a closed form, and even though there is more than one possible way of writing the non-local form of an equation, the form above, where we collect the nonlinear effects in the operator \(A(u)\), holds and serves our purpose. Furthermore, recalling the approach, we choose the spaces \(X:=(H^{s-1},||.||_{s-1})\), \(Y:=(H^{s},||.||_{s})\) and consider the topological isomorphism \(S:=\Gamma=(1-L\partial_{x}^{2})^{1/(p+2)}\colon Y\to X\) between these spaces, which defines an isometry, i.e. \(\|\Gamma u\|_{s-1}=\|u\|_{s}\) for all \(u\in H^{s}\).
Hence, the main result of this paper is the following:
**Theorem 2.2**.: _Assume that assumptions \((A1)-(A4)\) hold for (2.2)-(2.4). Let \(u_{0}\in H^{s}\), \(s>\frac{7}{2}+p\) be given. Then, there exists a maximal time of existence \(T>0\), depending on \(u_{0}\), such that there is a unique solution \(u\) satisfying_
\[u\in C([0,T),H^{s})\cap C^{1}([0,T),H^{s-1}).\]
_Moreover, the solution depends continuously on the initial data, i.e, the map \(u_{0}\to u(u_{0},.)\) is continuous from \(H^{s}\) to \(C([0,T),H^{s})\cap C^{1}([0,T),H^{s-1})\)._
In order to prove this result, we will apply Kato's semigroup approach. Since \(f(u)=0\) in our Cauchy problem, we only need to verify assumptions (A1)-(A3).
Note that if \(L\) is the identity operator, \(a=2\) and \(b=1\), we get the Camassa-Holm equation. Considering this, the steps in the proof can be followed clearly.
### Proof of Assumption (A1):
Below, you will find the lemmas to be used in the proof of assumption \((A1)\).
**Lemma 2.1**.: _The operator \(A(u)=r(u)\partial_{x}\) in \(L^{2}\), with \(u\in H^{s},s>\frac{7}{2}+p\) is quasi-m-accretive._
Proof.: Since \(L^{2}\) is a Hilbert space with standard inner product \((.,.)_{0}\), \(A(u)\) is quasi-m-accretive if and only if there is a real number \(\beta\) such that
* \((A(u)w,w)_{0}\geq-\beta||w||_{0}^{2}\),
* The range of \(A(u)+\lambda\) is all of \(X\) for some (or all) \(\lambda>\beta\).
First, we will prove part \((a)\). By using integration by parts, the left-hand side of the equality can be written as follows:
\[(A(u)w,w)_{0}=(r(u)\partial_{x}w,w)_{0}=\frac{-1}{2}((r(u))_{x}w,w)_{0}\]
since, letting \(K=(r(u)w_{x},\ w)_{0}\), integration by parts gives

\[K=-(r(u)w_{x},\ w)_{0}-((r(u))_{x}w,\ w)_{0}=-K-((r(u))_{x}w,\ w)_{0},\]

so that \(2K=-((r(u))_{x}w,\ w)_{0}\), i.e.

\[K=-\frac{1}{2}((r(u))_{x}w,\ w)_{0}.\]
Then, it follows that
\[|(r(u)\partial_{x}w,w)_{0}|=\Big{|}\frac{-1}{2}((r(u))_{x}w,w)_{0}\Big{|}\leq c\|(r(u))_{x}w\|_{0}\|w\|_{0}\leq c\|(r(u))_{x}\|_{L^{\infty}}\|w\|_{0}^{2}\leq c\|r(u)\|_{s}\|w\|_{0}^{2}\leq\beta\|w\|_{0}^{2},\]
where \(\beta\) is a constant depending on \(||u||_{s}\). Since \(u\in H^{s}\) with \(s>\frac{7}{2}+p\), it follows that \(||u_{x}||_{L^{\infty}}\leq||u||_{s}\). Hence \((A(u)w,w)_{0}\geq-\beta||w||_{0}^{2}\), i.e. the operator \(A(u)+\beta\) is accretive, as required.
Now, we will prove part \((b)\). Note that if \(A\) is a closed operator, then \(A(u)+\lambda\) has closed range in \(X\) for all \(\lambda>\beta\). So, it is enough to show that \(A(u)+\lambda\) has dense range in \(X\) for all \(\lambda>\beta\).
First, we will show that \(A\) is a closed operator in \(L^{2}\). Let \((v_{n})\) be a sequence in \(\mathcal{D}(A)\) which converges to \(v\in L^{2}\) and such that \(Av_{n}\) converges to \(w\in L^{2}\). Then, since \(v_{n}\in\mathcal{D}(A)\) and \(\mathcal{D}(A)=\{w\in L^{2}\ |\ r(u)w\in H^{1}\}\subset L^{2}\), we can conclude that \(rv_{n}\in H^{1}\). Also, by the continuity of the multiplication \(H^{r}\times L^{2}\to L^{2}\) for \(r>\frac{1}{2}\), both \(rv_{n}\to rv\) and \(r_{x}v_{n}\to r_{x}v\) in \(L^{2}\), which implies that \((rv_{n})_{x}\to w+r_{x}v\) in \(L^{2}\). Then, the sequences \((rv_{n})\) and \((rv_{n})_{x}\) both converge in \(L^{2}\), so \((rv_{n})\) converges to \(rv\) in \(H^{1}\), which implies that \(v\in\mathcal{D}(A)\). Moreover, the continuity of \(\partial_{x}:H^{1}\to L^{2}\) implies that \(\lim_{n\to\infty}(rv_{n})_{x}=(rv)_{x}\); thus we get \(w=(rv)_{x}-r_{x}v=Av\). Hence, by definition, \(A\) is a closed operator.
Now, we need to show that \((A(u)+\lambda)\) has dense range in \(L^{2}\) for all \(\lambda>\beta\). Note that the adjoint operator of the \(A(u)=r(u)\partial_{x}\) can be written as
\[A^{*}(u)=-r_{x}(u)-r(u)\partial_{x}.\]
Then,
\[A^{*}(u)w=-r_{x}(u)w-r(u)w_{x}=-(r(u)w)_{x},\]
where \(r_{x}(u)w\in L^{2}\) since \(u_{x}\in L^{\infty}\) and \(w\in L^{2}\), and \(r(u)w_{x}=A(u)\in L^{2}\) for \(w\in\mathcal{D}(A)\). Hence, we can obtain that
\[\mathcal{D}(A^{*})=\{w\in L^{2}\ |\ A^{*}(u)w\in L^{2}\}.\]
Suppose, to the contrary, that the range of \((A(u)+\lambda)\) is not all of \(L^{2}\). Then, there exists \(z\neq 0\in L^{2}\) such that

\[((A(u)+\lambda)w,z)_{0}=0,\ \ \forall w\in\mathcal{D}(A(u)).\]
Since \(H^{1}\subset\mathcal{D}(A)\), \(\mathcal{D}(A)\) is dense in \(L^{2}\). Then, since \(\mathcal{D}(A^{*})\) is closed, \(z\in\mathcal{D}(A^{*})\). Then, by using the fact that \(\mathcal{D}(A)=\mathcal{D}(A^{*})\), we can write
\[((A(u)+\lambda)w,z)_{0}=(w,(A(u)+\lambda)^{*}z)_{0}=0,\]
which implies that \((A(u)^{*}+\lambda)z=0\) in \(L^{2}\). After multiplying this equality by \(z\), we can rewrite it as
\[0=((A^{*}(u)+\lambda)z,z)_{0} =(A^{*}(u)z,z)_{0}+(\lambda z,z)_{0}\] \[=(z,A(u)z)_{0}+(\lambda z,z)_{0}\] \[\geq(\lambda-\beta)||z||_{0}^{2},\quad\forall\lambda>\beta.\]
Since for all \(\lambda>\beta\), the term \((\lambda-\beta)>0\). Therefore, \(z=0\). However, it contradicts with the assumption \(z\neq 0\), which completes the proof of Lemma 2.1.
Now, we give the commutator estimate needed for the upcoming lemma:
**Proposition 2.1** ([13]).: _Let \(n>0\), \(s\geq 0\), and \(3/2<s+n\leq\sigma\). Then, for all \(f\in H^{\sigma}\) and \(g\in H^{s+n-1}\), one has_
\[||[\Lambda^{n},\ f]g||_{s}\leq c||f||_{\sigma}||g||_{s+n-1},\]
_where \(c\) is a constant which is independent of \(f\) and \(g\)._
**Lemma 2.2**.: _The operator \(A(u)=r(u)\partial_{x}\) in \(H^{s-1}\), with \(u\in H^{s},s>\frac{7}{2}+p\) is quasi-m-accretive._
Proof.: Since \(H^{s-1}\) is a Hilbert space, \(A(u)=r(u)\partial_{x}\) is quasi-m-accretive if and only if there is a real number \(\beta\) such that
* \((A(u)w,w)_{s-1}\geq-\beta||w||_{s-1}^{2}\),
* -\(A(u)\) is the infinitesimal generator of a \(C_{0}\)-semigroup on \(H^{s-1}\), for some (or all) \(\lambda>\beta\).
First, we will prove part \((a)\). Since \(u\in H^{s}\), with \(s>\frac{7}{2}\), we can say that \(u\) and \(u_{x}\) belong to \(L^{\infty}\). Then, it follows that \(||u_{x}||_{L^{\infty}}\leq||u||_{s}\). Note that
\[\Gamma^{s-1}(r(u)\partial_{x}w) =[\Gamma^{s-1},r(u)]\partial_{x}w+r(u)\Gamma^{s-1}(\partial_{x}w)\] \[=\ [\Gamma^{s-1},r(u)]\partial_{x}w+r(u)\partial_{x}\Gamma^{s-1}w.\]
Then, we have
\[(A(u)w,w)_{s-1}=(r(u)\partial_{x}w,\ w)_{s-1}=(\Gamma^{s-1}r(u)\partial_{x}w,\ \Gamma^{s-1}w)_{0}=([\Gamma^{s-1},r(u)]\partial_{x}w,\ \Gamma^{s-1}w)_{0}+(r(u)\partial_{x}\Gamma^{s-1}w,\ \Gamma^{s-1}w)_{0}.\]
For the first term \(([\Gamma^{s-1},r(u)]\partial_{x}w,\ \Gamma^{s-1}w)_{0}\), use the commutator estimate (Proposition 2.1) with \(n=s-1\), \(s=0\), and \(\sigma=s\). Then, we get

\[|([\Gamma^{s-1},r(u)]\partial_{x}w,\ \Gamma^{s-1}w)_{0}|\leq c\|r(u)\|_{s}\|\partial_{x}w\|_{s-2}\|w\|_{s-1}\leq\tilde{c}\|w\|_{s-1}^{2},\]
where \(\tilde{c}\) is a constant depending on \(||u||_{s}\).
For the second term \((r(u)\partial_{x}\Gamma^{s-1}w,\ \Gamma^{s-1}w)_{0}\), use integration by parts to get

\[|(r(u)\partial_{x}\Gamma^{s-1}w,\ \Gamma^{s-1}w)_{0}|=\Big{|}-\frac{1}{2}\big{(}(r(u))_{x},\ (\Gamma^{s-1}w)^{2}\big{)}_{0}\Big{|}\leq c\|(r(u))_{x}\|_{L^{\infty}}\|\Gamma^{s-1}w\|_{0}^{2}=c\|(r(u))_{x}\|_{L^{\infty}}\|w\|_{s-1}^{2}\leq\tilde{c}\|w\|_{s-1}^{2},\]
where \(\tilde{c}\) is a constant depending on \(||u||_{s}\).
Set \(\beta=\tilde{c}||u||_{s}\). Then, we get \((A(u)w,w)_{s-1}\geq-\beta||w||_{s-1}^{2}\), as required.
Now, we will prove part \((b)\). Let \(Q=\Gamma^{s-1}\), note that \(Q\) is an isomorphism of \(H^{s-1}\) to \(L^{2}\), and \(H^{s-1}\) is continuously and densely embedded into \(L^{2}\) as \(s>\frac{3}{2}\). Define
\[A_{1}(u):=QA(u)Q^{-1}=\Gamma^{s-1}A(u)\Gamma^{1-s}=\Gamma^{s-1}r(u)\partial_{x}\Gamma^{1-s}=\Gamma^{s-1}r(u)\Gamma^{1-s}\partial_{x},\]

and \(B_{1}(u)=A_{1}(u)-A(u)\).
Let \(w\in L^{2}\) and \(u\in H^{s}\), where \(s>\frac{7}{2}+p\). Then, we have
\[\|B_{1}(u)w\|_{0}=\|[\Gamma^{s-1},\ A(u)]\Gamma^{1-s}w\|_{0}=\|[\Gamma^{s-1},\ r(u)]\Gamma^{1-s}\partial_{x}w\|_{0}\leq c\|r(u)\|_{s}\|\Gamma^{1-s}\partial_{x}w\|_{s-2}\leq c\|r(u)\|_{s}\|w\|_{0}.\]
Then, we have
\[||(A(u)-A(v))w||_{s-1} =||((a+b)(u-v)\partial_{x}+b\Gamma^{-(p+2)}[L\partial_{x}^{2},(u-v)] \partial_{x})w||_{s-1}\] \[\leq||(a+b)(u-v)\partial_{x}w||_{s-1}+||b\Gamma^{-(p+2)}[L\partial_ {x}^{2},(u-v)]\partial_{x}w||_{s-1}\] \[\leq||(u-v)||_{s-1}||\partial_{x}w||_{s-1}+||[L\partial_{x}^{2},(u -v)]\partial_{x}w||_{s-p-3}\]
We will use the commutator estimate (Proposition 2.1) with \(n=p+2\), \(s=s-p-3\), and \(\sigma=s-1\), which implies \(s+n-1=s-2\). Then, for \(f=u-v\) and \(g=\partial_{x}w\), we get
\[||[L\partial_{x}^{2},(u-v)]\partial_{x}w||_{s-p-3} \leq c||(u-v)||_{s-1}||\partial_{x}w||_{s-2}\] \[\leq c||u-v||_{s-1}||w||_{s-1}\] \[\leq\lambda_{1}||u-v||_{s-1}||w||_{s},\]
where \(\lambda_{1}\) is a constant. Then, we get
\[||(A(u)-A(v))w||_{s-1}\leq\lambda_{1}||u-v||_{s-1}||w||_{s}.\]
Take \(v=0\) in the above inequality to obtain \(A(u)\in\mathcal{L}(H^{s},\ H^{s-1})\). This completes the proof of Lemma 2.4, and thus assumption \((A2)\).
### Proof of Assumption (A3):
Below, you will find the needed lemmas to be used in the proof of assumption \((A3)\).
**Lemma 2.5**.: _For any \(u\in H^{s}\), there exists a bounded linear operator \(B(u)\in\mathcal{L}(H^{s-1})\) satisfying \(B(u)=\Gamma A(u)\Gamma^{-1}-A(u)\), where \(B:H^{s}\to\mathcal{L}(H^{s-1})\) is uniformly bounded on bounded sets in \(H^{s}\). Moreover,_
\[||(B(u)-B(v))w||_{s-1}\leq\lambda_{2}||u-v||_{s}||w||_{s-1}\text{, \ \ \ }u,v\in H^{s}\text{, }w\in H^{s-1}\text{.}\]
Proof.: Note that since \(\partial_{x}\) and \(\Gamma\) commute, we can rewrite \(B(u)\) as
\[B(u)=\Gamma A(u)\Gamma^{-1}-A(u)=\Gamma r(u)\partial_{x}\Gamma^{-1}-r(u) \partial_{x}=[\Gamma,r(u)]\Gamma^{-1}\partial_{x}.\]
First, we will show that \(B(u)\) is bounded. To do that, we again use the commutator estimate (Proposition 2.1) with \(n=1\), \(s=s-1\), and \(\sigma=s\), which implies \(s+n-1=s-1\). Then, for \(f=r(u)\) and \(g=\Gamma^{-1}\partial_{x}w\), where \(w\in H^{s-1}\), we can write
\[||B(u)w||_{s-1} =||[\Gamma,r(u)]\Gamma^{-1}\partial_{x}w||_{s-1}\] \[\leq||r(u)||_{s}||\Gamma^{-1}\partial_{x}w||_{s-1}\] \[\leq||r(u)||_{s}||w||_{s-1}\] \[\leq c||w||_{s-1},\]
where \(c\) depends on \(||u||_{s}\).
Moreover,
\[\|(B(u)-B(v))w\|_{s-1}=\|[\Gamma,r(u)-r(v)]\Gamma^{-1}\partial_{x}w\|_{s-1}\leq\|r(u)-r(v)\|_{s}\|\Gamma^{-1}\partial_{x}w\|_{s-1}\leq\|r(u)-r(v)\|_{s}\|w\|_{s-1}\]
\[\leq\|(a+b)(u-v)+b\Gamma^{-(p+2)}[L\partial_{x}^{2},u-v]\|_{s}\|w\|_{s-1}\leq c\big{(}\|u-v\|_{s}+\|u-v\|_{s}\big{)}\|w\|_{s-1}\leq\lambda_{2}\|u-v\|_{s}\|w\|_{s-1},\]
where \(\lambda_{2}\) is a constant depending on \(||u||_{s}\) and \(||v||_{s}\). Take \(v=0\) in the above inequality to obtain \(B(u)\in\mathcal{L}(H^{s-1})\). This completes the proof of Lemma 2.5, and thus assumption \((A3)\).
### Proof of Assumption (A4):
Since \(f(u)=0\), it is trivial.
As we verify all the assumptions \((A1)-(A4)\) in Theorem 2.2, local well-posedness for the dispersion generalized Camassa-Holm equation is established.
## 3 Comparison of Regularity
As we mentioned before, there are numerous studies in which different forms of momentum density appear. Starting from the Camassa-Holm equation, we will refer to some of the regularity results for the solutions of Cauchy problems associated with these different types of evolution equations. We will verify that the result we provide in this paper is consistent with the ones appeared in the literature. Generalizing the dispersive effect also generalizes the degree of the regularity obtained for the solutions.
Below, we give sample cases:
* **Case 1:**[18] \(m=(1-\partial_{x}^{2})u\), which leads to the classical Camassa-Holm equation, for which \(L\) is the identity operator; local well-posedness is proved for \(s>3/2\).
* **Case 2:**[15]\(m=(1-\partial_{x}^{2})^{2}u\), which is a special case of L with \(p=2\), local well-posedness is proved for \(s>7/2\).
* **Case 3:**[20]\(m=(\mu-\partial_{x}^{2}+\partial_{x}^{4})u\), which is a special case of L with \(p=2\), local well-posedness is proved for \(s>7/2\).
Checking these results, we can confirm that the result found in this paper provides the sufficient regularity of the solution of the Cauchy problem with initial data \(u_{0}\in H^{s}\), and it generalizes the results previously reported in the literature.
## 4 Discussion
Having worked on one of the main qualitative analysis questions related to the dispersive equation proposed in the present work, we can state the following questions as future work:
1. Can we establish global well-posedness of the solutions?
2. Are there any conserved quantities? Is the energy conserved as it is for the Camassa-Holm equation?
3. Is there a finite time at which blow-up occurs? If there is finite-time blow up, which norm of \(u\) becomes unbounded? Is it in the form of wave breaking as it is for the Camassa-Holm equation? How does the change in the dispersive effect change the blow-up time?
## 5 Conclusion
In this study, we qualitatively generalize the dispersive effect through the generalized Camassa-Holm equation and establish the local well-posedness of the corresponding Cauchy problem by using Kato's semigroup approach. There
are various studies in the literature on the Camassa-Holm equation, since it is a dispersive equation which can model wave breaking in shallow water wave theory. The 2:1 ratio of the coefficients of the nonlinear terms makes it possible to write the non-local form of the CH equation in a simple manner. In our case, having the operator \(L\) in a closed form that also acts on the nonlinear terms makes things more difficult. The operator \(L\) with even order \(p\) represents the generalization of the dispersive effect. Choosing \(L\) as the identity operator and the constants \(a=2\), \(b=1\), the equation reduces to the Camassa-Holm equation. We obtain the results by making assumptions only on the order of \(L\). Thus, obtaining these results without writing the operator \(L\) explicitly makes it possible to evaluate the equation within a very wide class. After a large number of trials, we observed that the quasi-linear form of the equation must be written by collecting all the nonlinear terms in the operator \(A(u)\). Therefore, in this case, \(f\) becomes zero, and it is enough to verify the assumptions \((A1)-(A3)\) of the theorem. In the end, we see that choosing the initial data \(u_{0}\) from \(H^{s}\), where \(s>\frac{7}{2}+p\), establishes local well-posedness. One can observe that the initial data class needs to be more regular compared to the Camassa-Holm equation.
## 6 Acknowledgement
N. D. Mutlubas is supported by the Turkish Academy of Sciences within the framework of the Outstanding Young Scientists Awards Program (TUBA-GEBIP-2022).
|
2309.05650 | Potentials of Deterministic Radio Propagation Simulation for AI-Enabled
Localization and Sensing | Machine leaning (ML) and artificial intelligence (AI) enable new methods for
localization and sensing in next-generation networks to fulfill a wide range of
use cases. These approaches rely on learning approaches that require large
amounts of training and validation data. This paper addresses the data
generation bottleneck to develop and validate such methods by proposing an
integrated toolchain based on deterministic channel modeling and radio
propagation simulation. The toolchain is demonstrated exemplary for scenario
classification to obtain localization-related channel parameters within an
aircraft cabin environment. | Albrecht Michler, Jonas Ninnemann, Jakob Krauthäuser, Paul Schwarzbach, Oliver Michler | 2023-09-11T17:40:43Z | http://arxiv.org/abs/2309.05650v1 | # Potentials of Deterministic Radio Propagation Simulation for AI-Enabled Localization and Sensing
###### Abstract
Machine leaning (ML) and artificial intelligence (AI) enable new methods for localization and sensing in next-generation networks to fulfill a wide range of use cases. These approaches rely on learning approaches that require large amounts of training and validation data. This paper addresses the data generation bottleneck to develop and validate such methods by proposing an integrated toolchain based on deterministic channel modeling and radio propagation simulation. The toolchain is demonstrated exemplary for scenario classification to obtain localization-related channel parameters within an aircraft cabin environment.
Deterministic Radio Propagation Simulation, Localization, Sensing, AI-Enabled, Scenario Classification.
## I Introduction
Location-based services (LBS) significantly increase the potential and application scenarios of wireless systems by extracting geometric information from radio signals to determine the location of a user or object. LBS are powered by global navigation satellite systems (GNSS) or indoor positioning systems (IPS) [1, 2] to provide solutions for various tasks, such as localization, tracking, and counting of objects and people. In addition to device-based active localization, the integrated use of wireless sensor networks (WSNs) also enables device-free radio sensing functionalities for radar-like imaging and object detection [3]. Concurrently, techniques for machine learning (ML), as a branch of artificial intelligence (AI), have become crucial in next-generation wireless communication systems such as 6G [4]. ML techniques can improve conventional methods relating to channel modeling [5], channel measurements [4], antenna-channel optimization [6], wireless networking [7], and fingerprinting [8]. But as stated in [6], one key challenge remains: the generation of massive amounts of data for learning and training. This necessitates specific hardware and expensive, time-consuming measurement campaigns.
This paper addresses the data generation bottleneck for AI-enabled localization by proposing a toolchain based on deterministic channel modeling and radio propagation simulation to generate channel state information (CSI). This approach is particularly useful for high-mobility scenarios and specific environments, as it facilitates the development and initial validation of AI-enabled localization and sensing methods. These methods can be applied in various applications, such as intelligent transportation system (ITS), logistics, localization, object detection and counting. The overall structure of the paper is depicted in fig. 1.
## II Problem outline
### _Data generation for AI-based localization_
AI-based approaches for localization and sensing employ machine-learning and artificial intelligence methods to improve accuracy, efficiency and integrity in determining the location of devices or sensing the environment. Localization algorithms utilize various approaches that can operate on raw channel state information data (e.g. channel impulse response (CIR)), derived data (e.g. range) or during the sensor fusion stage (e.g. integrating radio-based localization and inertial navigation). AI-based channel models offer a key advantage over conventional models by exhibiting high adaptability to different environments, enhancing the overall robustness of the localization process. However, in order to apply such models within the radio localization domain, a learning procedure
Fig. 1: Structure of the paper to enable data generation by using deterministic radio propagation simulation for AI-enabled localization and sensing.
must be performed first. This generally involves the following steps:
1. Data collection or generation
2. Data annotation
3. Training
4. Validation
5. Deployment
Within this sequence, data generation or collection plays a critical role. Existing literature indicates that low-quality training data limits the overall performance of the system [6]. Multiple approaches can be employed for data collection or generation. The most straightforward is to gather sensor data within the designated deployment area. This method captures the full complexity of physical effects within the data. On the contrary, live measurement campaigns can be time consuming and the data is scenario-specific and therefore may not generalize well. Furthermore, calculating associated ground truth values can be challenging, making it harder to label the data correctly.
To address these challenges, simulation methods can be utilized. Such methods provide synthetic data that closely resemble real-world data while offering certain advantages. The primary advantage lies in the ability to generate an arbitrary number of data samples in a short time, making this approach more efficient than measurement-based data generation. Furthermore, data can be labeled and assigned ground-truth values directly.
### _Sensor specifics for radio based localization_
There is a variety of sensors that are generally used for localization and sensing. These include radio-based systems, optical camera systems, laser-based systems, and inertial measurement systems. When using one or multiple sensor systems in a single-sensor or data-fusion localization system, it is essential to generate matching training data to establish a consistent AI-based localization system.
For optical systems, generative adversarial networks (GAN) and data augmentation on existing data sets can be applied to generate new training data [9]. Inertial measurements can be generated by calculating motion forces along a predefined trajectory and adding sensor noise according to a sensor specification [10]. Laser-based or quasi-optical radio localization, which relies on distance or angle estimation, can be simulated using simple ray-tracing approaches, where non line-of-sight (NLOS) paths can be omitted due to shading.
In contrast, for radio-based localization, a variety of primary measures based on CSI can be used. These include time of flight (ToF), received signal strength indicator (RSSI), signal-to-noise ratio (SNR), or angle of arrival (AoA) estimates. Especially in high-dynamic scenarios, models need to be able to generate spatially consistent signal parameters, since real channel information exhibits sensitivity to spatial and temporal correlations [11]. As a result, solely applying stochastic channel models may lead to unrealistic and overly simplistic data, which in turn leads to mis-trained AI-based models. Additionally, running real-world measurement campaigns poses challenges, as it requires the availability of hardware samples. For newly specified radio technologies, off-the-shelf options are often unavailable and need to be specifically designed and manufactured.
Deterministic radio propagation simulation addresses these challenges by providing the necessary channel and signal parameters for a desired environment, radio properties and antenna parameters.
## III Fundamentals and Modeling
### _Signal Propagation_
The radio channel consists of the transmitting antenna, the propagation channel, and the receiving antenna. During propagation, the electromagnetic wave encounters different objects in the environment, causing the three basic propagation phenomena: reflection, diffraction, and scattering. This leads to various channel characteristics, which can be categorized into large-scale and small-scale fading effects. Large-scale fading is caused by the change in signal strength over distance due to path loss and shadowing by obstacles. Small-scale fading, on the other hand, is caused by multipath propagation and the constructive or destructive interference of the electromagnetic wave during propagation.
In the case of multipath propagation, the signal reaches the receiver via multiple paths. When the direct line-of-sight (LOS) path between transmitter and receiver is obstructed, non-line-of-sight (NLOS) propagation occurs. In this case, the received signal is composed of differently delayed, attenuated, and phase-shifted waves from various directions. This results in various multipath components (MPCs) in the CIR. Multipath fading and NLOS propagation have severe effects on the localization performance, because the ToF and range are determined incorrectly. To address this source of error, various ML algorithms for scenario classification of LOS/NLOS are applied (cf. section V).
### _Spatial Consistency_
Spatial consistency enables channel models to provide spatially consistent and time-evolving CIRs for different sensor locations and environments. Therefore, spatial consistency is crucial for the evaluation of localization and sensing in specific use cases. However, most current statistical channel models are drop-based, i.e. they are only able to generate CIRs for a particular user at a randomly chosen location and provide no spatial correlation between consecutive simulation runs [12]. This is a limitation for AI-based localization approaches, which rely on full CSI in order to infer geometric relations of the scenario. Therefore, the goal is to generate smoothly time-evolving CIRs based on the user movement in high-mobility scenarios, such as vehicle-to-everything (V2X) communications. This way, AI-enabled localization and sensing methods can be trained and validated using the desired user motion and environment. Additionally, in the millimeter-wave (mmWave) and terahertz (THz) bands, the narrow antenna beams result in highly correlated channel characteristics [3].
### _Channel Models_
In principle, three different types of channel models can be distinguished: deterministic, stochastic, and hybrid models. Table I compares these type of channel models in terms of their properties, requirements, and available simulators.
**Deterministic channel models** solve Maxwell's equations numerically in a given geometric environment. They describe the channel and temporal variations due to the number, position, and characteristics of reflectors in the environment. One example of a deterministic algorithm is Ray Tracing (RT), where the individual propagation paths or rays are calculated individually based on the channel characteristics. [4, 13]
**Stochastic channel models** use statistical approximations, employing random distributions to characterize the received signal and channel parameters such as path loss, delay, number of paths, and fading. These models are primarily based on measurements and empirical observations in specific types of environments (rural, urban, indoor, micro, macro, etc.), rather than on the position of the sensors. Therefore, the channel is described at a random location of the sensor in a defined type of environment, leading to a lack of spatial consistency between multiple simulation runs and different sensor locations [4, 12, 13].
**Hybrid channel models** combine deterministic and stochastic approaches and therefore offer a balance between accuracy and complexity. Geometry-based stochastic models (GBSMs), such as the 3GPP TR 38.901 model [14] and the WINNER II model [15], incorporate the geometry between transmitter and receiver through the length and angle of the LOS signal component. However, they still use stochastic models and distributions to describe the various signal parameters. Quasi-deterministic channel models compute the dominant propagation path with a highly simplified environment map and add clusters of stochastically modeled MPCs to the model. Such models are only partially spatially consistent and therefore not suitable for the evaluation of localization and sensing functionalities in a specific environment and use case. [13]
## IV Toolchain
The overall toolchain describes the process from scenario definition and modeling, to running the radio propagation simulation, extracting relevant parameters, generating signal data and finally training and evaluating the AI model. A schema of the toolchain is depicted in fig. 2.
### _Scenario Modeling_
In order to generate data for AI applications, it is necessary to define the use case and scenario. To facilitate data generation using deterministic simulation, it is important to model the properties of the radio channel and the environment in as much detail as possible. This involves two main aspects. Firstly, the geometry of the physical surroundings needs to be accurately represented within the simulation framework, for example in the form of a 3D computer-aided design (CAD) model. This entails that electromagnetic material constants (permittivity, permeability) are provided for all elements in order to properly simulate all propagation effects. Secondly, the radio channel and the simulated hardware need to be parameterized. This includes the center frequency, transmit power, antenna gain, 3D antenna pattern, and other simulation parameters.
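For illustration, these two modeling aspects can be collected in a scenario description such as the following sketch; the field names and values are hypothetical and do not correspond to any particular simulator's API.

```python
# Hypothetical scenario description covering both modeling aspects:
# the environment geometry with material constants, and the radio channel
# and hardware parameterization. All values are illustrative placeholders.
scenario = {
    "environment": {
        "cad_model": "cabin_a340.stl",            # 3D geometry (placeholder path)
        "materials": {                             # electromagnetic constants
            "seat_fabric": {"eps_r": 1.8, "mu_r": 1.0},
            "aluminum_hull": {"eps_r": 1.0, "mu_r": 1.0, "conductor": True},
        },
    },
    "radio": {
        "center_frequency_hz": 3.5e9,              # UWB channel, cf. Section V
        "bandwidth_hz": 500e6,
        "tx_power_dbm": -16,
        "antenna_gain_dbi": 0,
        "antenna_pattern": "omnidirectional",
    },
}
```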
### _Deterministic Radio Propagation Simulation_
The environmental model and radio channel model are used for the 3D deterministic radio propagation simulation to compute all propagation paths, or rays, of the radio wave between the transmitter and the desired receiver locations. This deterministic approach accurately takes the effect of the environment on the propagation into account. In addition, multiple parameters, such as path loss, RSSI, CIR, AoA, and ToF, can be predicted in one simulation run.

Fig. 2: Toolchain for data generation with deterministic radio propagation simulation to support AI-enabled localization and sensing.
The radio rays are computed considering refraction, reflection, diffraction, or scattering by either ray tracing or ray launching. Ray tracing determines the individual paths backwards from the receiver to the transmitter, whereas ray launching launches a number of rays from the transmitter and calculates their paths from there. With a time-variant simulation, it is also possible to compute the signal propagation in a dynamic scenario based on a trajectory.
Various simulators exist for 3D ray tracing and ray launching, such as NYURay for mmWave [12], CloudRT [16], MaxRay [17]. In this case, the standard ray tracing model of Altair WinProp 2022.2.2 is employed to simulate signal propagation and generate corresponding signal parameters [18].
### _Feature Extraction and Signal Parameters_
A variety of signal parameters can be leveraged for localization and sensing purposes, ranging from CSI to secondary parameters. Channel estimation is the method used to obtain the CSI from a wireless communication link. CSI represents the properties and parameters of the channel, describing how the signal propagates from the transmitter to the receiver. The CIR can indicate the instantaneous channel conditions and consists of Multipath Components (MPCs) resulting from the set of all propagation paths.
Up until this point, all simulation steps are deterministic and can be directly reproduced. However, one simulation step yields only one data sample, which does not address the need for a high number of samples for AI training. Furthermore, some physical parameters are presented in an idealized form. Therefore, it is necessary to reconstruct the physical channel properties.
### _Data Generation and Augmentation_
The CIR \(h(\tau)\) is obtained from the ray tracing simulation consisting of multiple Dirac pulses \(\delta\) at a certain propagation delay \(\tau_{i}\) and amplitude \(a_{i}\) of the \(i\)-th propagation path or MPC computed from the length of the ray path and the path loss. Mathematically, the CIR \(h(\tau)\) of the time-invariant propagation channel can be described as following:
\[h(\tau)=\sum_{i=1}^{N}a_{i}\delta(\tau-\tau_{i}) \tag{1}\]
The CIR obtained from the simulation assumes an unlimited bandwidth. However, in real wireless systems, the channel is band-limited by a bandwidth \(B\). Therefore, the CIR needs to be reconstructed under band-limited conditions by applying the Whittaker-Shannon interpolation formula (sinc interpolation). This results in MPCs with a certain width and a limited range resolution, which are important for localization and sensing purposes [19]. An exemplary reconstruction of a CIR obtained in NLOS conditions is shown in Fig. 3.
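A minimal Python sketch of this reconstruction step, assuming illustrative MPC delays and amplitudes, is given below; it shows how the achievable delay resolution depends on the bandwidth \(B\).

```python
import numpy as np

def bandlimited_cir(delays, amps, bandwidth, t):
    """Band-limited CIR per Eq. (1): each ideal Dirac MPC becomes a sinc
    pulse of width ~1/B (Whittaker-Shannon / sinc interpolation)."""
    h = np.zeros_like(t)
    for tau_i, a_i in zip(delays, amps):
        h += a_i * np.sinc(bandwidth * (t - tau_i))
    return h

# Two MPCs 1.5 ns apart: resolvable at 500 MHz (UWB), merged at 20 MHz.
delays, amps = np.array([10e-9, 11.5e-9]), np.array([1.0, 0.6])
t = np.linspace(0.0, 40e-9, 4001)
for bw in (500e6, 20e6):
    h = bandlimited_cir(delays, amps, bw, t)
    print(f"B = {bw / 1e6:5.0f} MHz -> peak |h| = {np.abs(h).max():.2f}")
```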
More signal parameters and features for ML methods can be derived from the CIR and the deterministic simulation. These features include RSSI, SNR, AoA, ToF, time difference of arrival (TDoA), mean excess delay, root mean square (RMS) delay spread, and kurtosis.
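A sketch of how such delay-domain features could be computed from a sampled CIR is shown below; the exact feature definitions used in a given toolchain may differ.

```python
import numpy as np
from scipy.stats import kurtosis

def cir_features(h, t):
    """Delay-domain features derived from a sampled (band-limited) CIR."""
    p = np.abs(h) ** 2                              # power delay profile
    p_norm = p / p.sum()
    med = np.sum(t * p_norm)                        # mean excess delay
    rms = np.sqrt(np.sum((t - med) ** 2 * p_norm))  # RMS delay spread
    return {
        "total_energy": p.sum() * (t[1] - t[0]),
        "max_amplitude": np.abs(h).max(),
        "mean_excess_delay": med,
        "rms_delay_spread": rms,
        "kurtosis": kurtosis(np.abs(h)),
    }
```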
To generate multiple data samples from a single simulation step, the interpolation filter properties, such as bandwidth, can be varied. Furthermore, convolution with a skewed noise function is possible to introduce additional variation. These techniques help prevent overfitting to a specific scenario.
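A possible augmentation step, reusing `bandlimited_cir` from the sketch above, is outlined below; for simplicity it uses additive Gaussian noise instead of the skewed-noise convolution mentioned above, and the bandwidth range and noise level are illustrative choices.

```python
import numpy as np

def augment(delays, amps, t, rng, n_samples=10):
    """Derive several training samples from one ray-tracing result by
    varying the reconstruction bandwidth and adding a noise floor."""
    samples = []
    for _ in range(n_samples):
        bw = rng.uniform(400e6, 600e6)                 # vary filter bandwidth
        h = bandlimited_cir(delays, amps, bw, t)       # cf. sketch above
        h = h + rng.normal(0.0, 0.01, size=t.shape)    # additive noise floor
        samples.append(h)
    return samples

rng = np.random.default_rng(seed=0)
```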
### _AI-Enabled Localization and Sensing_
Data-driven methods enable a variety of applications in modern localization and sensing models. They can be applied at different levels of signal properties and features. Table II lists several ML tasks and methods related to localization and sensing.
For each ML task (classification, clustering, detection) there are several algorithms available to solve the corresponding problem. In general, these algorithms can be divided into supervised, unsupervised, and reinforcement learning methods.
## V Example for Scenario (LOS/NLOS) Classification
The toolchain described in section IV is exemplarily applied to generate data for a scenario (LOS/NLOS) classification using supervised learning. For the purpose of active localization and evaluating scenario classification, a use case in the area of Intelligent Transportation Systems (ITS) is chosen.
Fig. 3: Band-limited reconstructed CIR.
Specifically, the toolchain is evaluated in the context of the connected aircraft cabin, considering the significant potential of LBS in this environment. Examples of potential applications include passenger boarding/deboarding, technology-based social distancing methods (COVID-19) [26] and object detection. Localization methods based on RSSI or ToF for distance estimation are heavily influenced by the environment due to the signal multipath reception in the aircraft cabin, where scenario classification yields potential for improving localization accuracy and robustness. Additionally, the limited accessibility of aircraft cabins can hinder extensive real-world measurement campaigns.
For the scenario, a 3D CAD model of the Airbus A340 cabin was utilized. The study aims at identifying areas with LOS/NLOS coverage given a fixed anchor position. Ultra-wideband (UWB) was chosen as the radio access technology, with a center frequency of \(3500\,\mathrm{MHz}\) and a maximum transmit power of -16 dBm. UWB is widely used as a radio technology for localization and tracking and achieves a very high ranging accuracy (\(10\,\mathrm{cm}\)) due to its high bandwidth (\(500\,\mathrm{MHz}\)). Such a real-time locating system (RTLS) is suitable for the aircraft cabin use case [27]. Both transmission and reception utilized omnidirectional antennas in the simulation.
For the scenario classification, various features are extracted from the reconstructed CIR. These features include RMS delay spread, amplitude, kurtosis, total received energy, mean excess delay, maximal amplitude, and RSSI (see Fig. 5). Recursive feature elimination was employed to select the most relevant and suitable features in the training dataset.
Data in the form of CIRs was generated for one transmit antenna position in the middle of the aircraft and a grid of possible receiver locations. A random forest classifier was used as the machine learning algorithm for scenario classification.
The classification results are presented and compared to the ground truth obtained from the radio propagation simulation with ray tracing (see fig. 4). The overall classification accuracy achieved with the random forest model was \(98.51\,\%\).
## VI Conclusion
This paper presents a toolchain for generating data for AI-driven localization and sensing. Compared to real-world data generation, the toolchain offers advantages in terms of accessibility, reproducibility, and the availability of ground truth for verification. The main advantage compared to stochastic channel modelling lies in the preservation of spatial consistency, which in turn leads to a more realistic and accurate channel simulation and enables the utilization of the full channel state information in localization and sensing algorithms. The toolchain's effectiveness is demonstrated through a UWB LOS/NLOS classification study conducted in an aircraft cabin. This approach can be extended and generalized to other environments and radio access technologies. To verify the models, real-world tests can be conducted. This is particularly important to identify any biases that may arise during the simulation-based learning phase. Furthermore, a correction framework can be developed to address such biases or unknown effects observed during the real-world validation process and feed the corrected parameters back into the original model.
There is potential for automating scenario design using machine learning techniques, although careful consideration must be given to bias propagation within such a setup. Additionally, the application layer of the toolchain can be expanded and validated with other machine learning/AI methods, such as deep learning methods.
Fig. 4: Scenario classification (LOS/NLOS) in the aircraft cabin. Ground truth obtained from the radio propagation simulation with ray tracing (top) and estimate by the random forest classifier (bottom). Sender position is marked with a red dot.
Fig. 5: Parameters obtained from a channel impulse response (CIR) sample.
## Acknowledgment
This work has been funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) following a resolution of the German Federal Parliament within the projects CANARIA (FKZ: 20D1931C) and INTACT (FKZ: 20D2128D).
|
2309.14985 | Types and Semantics for Extensible Data Types (Extended Version) | Developing and maintaining software commonly requires (1) adding new data
type constructors to existing applications, but also (2) adding new functions
that work on existing data. Most programming languages have native support for
defining data types and functions in a way that supports either (1) or (2), but
not both. This lack of native support makes it difficult to use and extend
libraries. A theoretically well-studied solution is to define data types and
functions using initial algebra semantics. While it is possible to encode this
solution in existing programming languages, such encodings add syntactic and
interpretive overhead, and commonly fail to take advantage of the map and fold
fusion laws of initial algebras which compilers could exploit to generate more
efficient code. A solution to these is to provide native support for initial
algebra semantics. In this paper, we develop such a solution and present a type
discipline and core calculus for a language with native support for initial
algebra semantics. | Cas van der Rest, Casper Bach Poulsen | 2023-09-26T14:57:02Z | http://arxiv.org/abs/2309.14985v1 | # Types and Semantics for Extensible Data Types (Extended Version)
###### Abstract
Developing and maintaining software commonly requires (1) adding new data type constructors to existing applications, but also (2) adding new functions that work on existing data. Most programming languages have native support for defining data types and functions in a way that supports either (1) or (2), but not both. This lack of native support makes it difficult to use and extend libraries. A theoretically well-studied solution is to define data types and functions using _initial algebra semantics_. While it is possible to encode this solution in existing programming languages, such encodings add syntactic and interpretive overhead, and commonly fail to take advantage of the map and fold fusion laws of initial algebras which compilers could exploit to generate more efficient code. A solution to these is to provide native support for initial algebra semantics. In this paper, we develop such a solution and present a type discipline and core calculus for a language with native support for initial algebra semantics.
Keywords: Type systems, Modularity, Programming Language Design, Categorical Semantics.
## 1 Introduction
A common litmus test for a programming language's capability for modularity is whether a programmer is able to extend existing data with new ways to construct it as well as to add new functionality for this data. All in a way that preserves static type safety; a conundrum which Wadler [37] dubbed the _expression problem_. When working in pure functional programming languages, another modularity question is how to model side effects modularly using, e.g., _monads_[28]. Ideally, we would keep the specific monad used to model the effects of a program abstract and program against an _interface_ of effectful operations instead, defining the syntax and implementation of such interfaces separately and in a modular fashion.
The traditional approach for tackling these modularity questions in pure functional programming languages is by embedding the _initial algebra semantics_[18] of inductive data types in the language's type system. By working with such embeddings in favor of the language's built-in data types we gain modularity without sacrificing type safety. This approach was popularized by Swierstra's
_Data Types a la Carte_ [35] as a solution to the expression problem, where it was used to derive modular interpreters for a small expression language. In later work, similar techniques were applied to define the syntax and implementation of a large class of monads using (algebraic) effects and handlers based on different flavors of inductively defined _free monads_. This was shown to be an effective technique for modularizing both first-order [23] and higher-order [39, 31, 7] effectful computations.
The key idea that unifies these techniques is the use of _signature functors_, which act as a de facto syntactic representation of an inductive data type or inductively defined free monad. Effectively, this defines a generic inductive data type or free monad that takes its constructors as a parameter. The crucial benefit of this setup is that we can compose data types and effects by taking the coproduct of signature functors, and we can compose function cases defined over these signature functors in a similarly modular way. Inductive data types and functions in mainstream functional programming languages generally do not support these kinds of composition.
While embedding signature functors has proven itself as a tremendously useful technique for enhancing functional languages with a higher degree of type safe modularity, the approach has some downsides:
* Encodings of a data type's initial algebra semantics lack the syntactic convenience of native data types, especially when it comes to constructing and pattern matching on values. Further overhead is introduced by their limited interoperability, which typically relies on user-defined isomorphisms.
* The connection between initial algebra semantics encodings of data types, and the mathematical concepts that motivate them remains implicit. This has two drawbacks: (1) the programmer has to write additional code witnessing that their definitions possess the required structure (e.g., by defining instances of the Functor typeclass), and (2) a compiler cannot leverage the properties of this structure, such as by implementing (provably correct) optimizations based on the well-known map and fold fusion laws.
In this paper, we explore an alternative perspective by making type-safe modularity part of the language's design, by including built-in primitives for the functional programmer's modularity toolkit--e.g., functors, folds, fixpoints, etc. We believe that this approach has the potential to present the programmer with more convenient syntax for working with extensible data types (see, for example, the language design proposed by Van der Rest and Bach Poulsen [32]). Furthermore, by supporting type-safe modularity through dedicated language primitives, we open the door for compilers to benefit from their properties, for example by applying fusion based optimizations.
### Contributions
The semantics of (nested) algebraic data types has been studied extensively in the literature (e.g., by Johann et al. [21, 22, 20], and Abel et al. [2, 3, 4]) resulting in the development of various calculi with the purpose of studying different
aspects of the semantics of programming with algebraic data types. In this paper, we build on these works to develop a core calculus that seeks to distill the essential language features needed for developing programming languages with built-in support for type-safe modularity while retaining the same formal foundations. Although the semantic ideas that we build on to develop our calculus are generally well-known, their application to improving the design of functional programming languages has yet to be explored in depth. It is still future work to leverage the insights gained by developing this calculus in the design of programming language that provide better ergonomics for working with extensible data types, but we believe the development of a core calculus capturing the essentials of programming with extensible data types to be a key step for achieving this goal. To bridge from the calculus presented in this paper to a practical language design, features such as _smart constructors_, _row types_, and _(functor) subtyping_ (as employed, for example, by Morris and McKinna [29] and Hubers and Morris [19]) would be essential. We make the following technical contributions:
* We show (in Section 2) how modular functions over algebraic data types in the style of Data Types a la Carte and modular definitions of first-order and higher-order (algebraic) effects and handlers based on inductively defined free monads can be captured in the calculus.
* We present (in Section 3) a formal definition of the syntax and type system.
* We give (in Section 4) a categorical semantics for our calculus.
* We present (in Section 5) an operational semantics for our calculus, and discuss how it relates to the categorical semantics.
Section 6 discusses related work, and Section 7 concludes.
## 2 Programming with Extensible Data Types, by Example
The basis of our calculus is the polymorphic \(\lambda\)-calculus extended with kinds and restricted to rank-1 polymorphism, allowing the definition of many familiar polymorphic functions, such as \((\mathit{id}:\forall\alpha.\alpha\Rightarrow\alpha)=\lambda x.x\) or \((\mathit{const}:\forall\alpha.\forall\beta.\alpha\Rightarrow\beta\Rightarrow \alpha)=\lambda x.\lambda y.x\). Types are closed under products and coproducts, with the unit type (1) and empty type (0) acting as their respective units. Furthermore, we include a type-level fixpoint (\(\mu\)), which can be used to encode many well-known algebraic data types. For example, the familiar type of lists is encoded as \(\mathit{List}\triangleq\lambda\alpha.\mu(\lambda X.\mathbb{1}+(\alpha\times X))\). A key feature of the calculus is that all higher-order types (i.e., that have one or more type argument) are, by construction, functorial in all their arguments. While this imposes some restrictions on the types we can define, it also means that the programmer gets access to primitive mapping and folding operations that they would otherwise have to define themselves. For the type \(\mathit{List}\), for example, this means that we get both the usual mapping operation transforming its elements, as well as an operation corresponding to Haskell's \(\mathit{foldr}\), for free.
Although the mapping and folding primitives for first-order type constructors (i.e., those taking arguments of kind \(\star\) and producing a type of kind \(\star\))
are already enough to solve the expression problem for regular algebraic data types (Section 2.1) and to encode modular algebraic effects (Section 2.2), they can readily be generalized to higher-order type constructors. That is, type constructors that construct higher-order types from higher-order types. The benefit of this generalization is that our calculus can also capture the definition of so-called _nested data types_[8], which arise as the fixpoint of a _higher-order functor_. We make essential use of the calculus' higher-order capabilities in Section 2.3 to define modular handlers for scoped effects [40] and modular elaborations for higher-order effects [31], as in both cases effect trees that represents monadic programs with higher-order operations is defined as a nested data type.
Notation. All code examples in this section directly correspond to programs in our calculus, but we take some notational liberty to simplify the exposition. Abstraction and application of type variables is left implicit. Similarly, we omit first-order universal quantifications. By convention, we denote type variables bound by type-level \(\lambda\)-abstraction using capital letters (e.g., \(X\)), and those bound by universal quantification using Greek letters (e.g., \(\alpha\), \(\beta\)).
### Modular Interpreters in the style of Data Types a la Carte
We consider how to define a modular interpreter for a small expression language of simple arithmetic operations. For starters, we just include literals and addition. The corresponding BNF equation and signature functor are given below:
\[e ::= \mathbb{N}\mid e+e\qquad\qquad\textit{Expr}\ \triangleq\ \lambda X.\,\mathbb{N}+(X\times X)\]
Now, we can define an _eval_ function that maps expressions--given by the fixpoint of _Expr_--to their result:

\[\textit{expr}:\mathbb{N}+(\mathbb{N}\times\mathbb{N})\Rightarrow\mathbb{N}\qquad\qquad\textit{eval}:\mu(\textit{Expr})\Rightarrow\mathbb{N}\]
\[\textit{expr}=(\lambda x.\,x)\ \blacktriangledown\ (\lambda x.\,\pi_{1}\ x+\pi_{2}\ x)\qquad\qquad\textit{eval}=\|\ \textit{expr}\ \|^{\mathbb{N}+(X\times X)}\]
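Rendered in Haskell on top of the `Fix`/`cata` sketch above (again our own illustration; the coproduct `(:+:)` follows Swierstra [35]), the interpreter looks as follows:

```haskell
{-# LANGUAGE DeriveFunctor, TypeOperators #-}

-- Coproduct of signature functors, used to compose data types
data (f :+: g) x = Inl (f x) | Inr (g x) deriving Functor
infixr 6 :+:

data LitF x = Lit Int deriving Functor
data AddF x = Add x x deriving Functor

type Expr = Fix (LitF :+: AddF)

-- The algebra corresponding to `expr`, eliminating the coproduct case-wise
evalAlg :: (LitF :+: AddF) Int -> Int
evalAlg (Inl (Lit n))   = n
evalAlg (Inr (Add x y)) = x + y

eval :: Expr -> Int
eval = cata evalAlg
```

Extending the language with, say, multiplication then amounts to adding a new signature functor to the coproduct and a new algebra case, without touching the existing ones.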
### Modular Algebraic Effects using the Free Monad
As our second example we consider how to define modular algebraic effects and handlers [30] in terms of the free monad following Swierstra [35]. First, we define the _Free_ type which constructs a free monad for a given signature functor \(f\). We can think of a term with type _Free_\(f\)\(\alpha\) as a syntactic representation of a monadic program producing a value of type \(\alpha\) with \(f\) describing the operations which we can use to interact with the monadic context.
\[\mbox{\it Free}:(\star\leadsto\star)\leadsto\star\leadsto\star\quad\triangleq \quad\lambda f.\lambda\alpha.\mu(\lambda X.\alpha+fX)\]
Note that the type _Free_ is actually a functor in both its arguments, and thus there are two ways to "map over" a value of type _Free_ \(f\) \(\alpha\); we can transform the values at the leaves using a function \(\alpha\Rightarrow\beta\), or the shape of the nodes using a natural transformation \(\forall\alpha.f\ \alpha\Rightarrow g\ \alpha\). The higher-order map can be used, for example, to define a function that reorders the operations of effect trees with a composite signature.
\[\mathit{reorder}:\mathit{Free}\ (f+g)\ \alpha\Rightarrow\mathit{Free}\ (g+f)\ \alpha\]
\[\mathit{reorder}=\mathsf{map}\langle\ \iota_{2}\ \blacktriangledown\ \iota_{1}\ \rangle^{\mathit{Free}}\]

Here, we use higher-order instances at kind \(\star\leadsto\star\) of the coproduct eliminator \(-\ \blacktriangledown\ -\), the coproduct injection functions \(\iota_{1}\), \(\iota_{2}\), and the functorial map operation \(\mathsf{map}\langle\ -\ \rangle^{-}\).
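A Haskell rendering of _Free_ and of this second, higher-order map (the name `hmap` is ours) makes the two functorial dimensions explicit; `reorder` then reuses the coproduct `(:+:)` from the earlier sketch:

```haskell
{-# LANGUAGE DeriveFunctor, RankNTypes, TypeOperators #-}

-- Free monad over a signature functor f: mu (\X. a + f X)
data Free f a = Pure a | Op (f (Free f a)) deriving Functor

-- Mapping a natural transformation over the nodes of the tree
hmap :: Functor f => (forall x. f x -> g x) -> Free f a -> Free g a
hmap _ (Pure a) = Pure a
hmap n (Op op)  = Op (n (fmap (hmap n) op))

-- Swapping the two components of a composite signature
reorder :: (Functor f, Functor g) => Free (f :+: g) a -> Free (g :+: f) a
reorder = hmap swap
  where
    swap (Inl x) = Inr x
    swap (Inr y) = Inl y
```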
_Effect handlers_ can straightforwardly be implemented as folds over _Free_. In fact, the behavior of a handler is entirely defined by the algebra that we use to fold over the effect tree, allowing us to write a generic _handle_ function:
\[\mbox{\it handle}:(\alpha\Rightarrow\beta)\Rightarrow(f\ (\mbox{\it Free }g\ \beta)\Rightarrow\mbox{\it Free }g\ \beta)\Rightarrow\mbox{\it Free }(f+g)\ \alpha\Rightarrow\mbox{\it Free }g\ \beta\] \[\mbox{\it handle}=\lambda h.\lambda i.((\mbox{\sf in}\circ\iota_{ \mbox{\tiny 1}}\circ h)\ \mbox{\tiny\mbox{\tiny\mbox{\tiny\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{ \mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{ \mbox{ \mbox{ \mboxmbox{ \mboxmbox { \mbox }}}}}}}}}}}}}}}} \ \ \mbox{}}}\mbox{i}\ \mbox{\tiny\mbox{\tiny\mbox{\tiny\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{ \mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{ \mbox{\mbox{\mbox{\mbox{\mbox{ \mboxmboxmbox{ \mboxmbox { \mboxmbox { \mbox \mbox { \mbox { \mboxmbox { }}}}}}}}}}}}}}} \mbox{}}}\mbox{\beta}\mbox{\mbox{\tiny\tiny \mbox{\tiny\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{ \mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mbox{\mboxmboxmboxmbox{\mboxmboxmboxmbox{ \mboxmboxmboxmbox { \mboxmboxmbox { \mboxmboxmbox { \mboxmboxmbox { \mboxmbox { \mboxmboxmbox { \mboxmbox { \mboxmboxmbox { \mboxmbox { \mboxmboxmbox { \mboxmboxmboxmbox { \mboxmboxmbox { \mboxmbox{ \mboxmboxmbox{ \mboxmbox{ \mboxmbox{ \mboxmbox{ \mboxmbox{ \mboxmboxmbox{ \mboxmbox{ \mboxmboxmbox{ \ \mboxmbox{ \ \mboxmbox{ \mboxmboxmbox{ \ \mboxmbox{ \ \mboxmbox{ \mboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox { \mboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ 
\mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmbox{\mboxmboxmboxmboxmboxmboxmbox{ \mboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmboxmbox{ \mbox
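The same fold, written as a recursive Haskell function over the `Free` and `(:+:)` sketches above (our rendering):

```haskell
-- h interprets pure values, i interprets the f-operations;
-- g-operations are forwarded unchanged into the residual tree.
handle :: (Functor f, Functor g)
       => (a -> b)
       -> (f (Free g b) -> Free g b)
       -> Free (f :+: g) a -> Free g b
handle h _ (Pure x)      = Pure (h x)
handle h i (Op (Inl op)) = i  (fmap (handle h i) op)
handle h i (Op (Inr op)) = Op (fmap (handle h i) op)
```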
### Modular Higher-Order Effects
To describe the syntax of computations that interact with their monadic context through higher-order operations--that is, operations whose arguments can themselves also be monadic computations--we need to generalize the free monad as follows.
\[\mathit{Prog}:((\star\leadsto\star)\leadsto\star\leadsto\star)\leadsto\star\leadsto\star\quad\triangleq\quad\lambda f.\,\mu(\lambda X.\lambda\alpha.\,\alpha+(f\ X\ \alpha))\]
Note that, unlike the _Free_ type, _Prog_ is defined as the fixpoint of a higher-order functor. This generalization allows for signature functors to freely choose the return type of continuations. Following Yang et al. [40], we use this additional expressivity to describe the syntax of higher-order operations by nesting continuations. For example, the following defines the syntax of an effect for exception catching, that we can interact with by either throwing an exception, or by declaring an exception handler that first executes its first argument, and only runs the second computation if an exception was thrown.
\[\mathit{Catch}:(\star\leadsto\star)\leadsto\star\leadsto\star\quad\triangleq\quad\lambda X.\lambda\alpha.\,\mathbb{1}+(X(X\,\alpha)\times X(X\,\alpha))\]
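In Haskell, these higher-order signatures and the generalized free monad can be sketched as follows (the constructor names `Return`, `Call`, `Throw`, and `Catch` are our own choices):

```haskell
-- Free monad over a higher-order signature h: mu (\X a. a + h X a)
data Prog h a = Return a | Call (h (Prog h) a)

-- Exception catching: 1 + (X (X a) * X (X a)); nesting the carrier X
-- marks the two scoped sub-computations of a catch block.
data Catch k a = Throw | Catch (k (k a)) (k (k a))
```

Here `h` takes the carrier functor `Prog h` itself as an argument, which is exactly the extra expressivity that lets signatures nest continuations.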
A value of type _Prog Catch_\(\alpha\) is then a syntactic representation of a monadic program that can both throw and catch exceptions. From this syntactic representation we can proceed in two different ways. The first option is to replace exception catching with an application of the _hAbort_ handler, in line with Plotkin and Pretnar's [30] original strategy for capturing higher-order operations. In recent work, Bach Poulsen and Van der Rest [31] demonstrated how such abbreviations can be made modular and reusable by implementing them as algebras over the _Prog_ type. Following their approach, we define the following elaboration of exception catching into a first-order effect tree.
\[\mathit{eCatch}:\mathit{Prog}\ \mathit{Catch}\ \alpha\Rightarrow\mathit{Free}\ \mathit{Abort}\ \alpha\]
\[\begin{array}{rl}\mathit{eCatch}=\|\ &(\mathsf{in}\circ\iota_{1})\\ \blacktriangledown\ &(\mathsf{in}\circ\iota_{2})\\ \blacktriangledown\ &(\lambda x.\,\mathit{hAbort}\ (\pi_{1}\ x)\gg\!\!=\ \mathit{maybe}\ (\mathit{join}\ (\pi_{2}\ x))\ \mathit{id})\ \|^{\alpha+\mathit{Catch}\ X\ \alpha}\end{array}\]
Here, the applications of monadic bind (\(\gg\)) and _join_ refer to the monadic structure of _Free_. Alternatively, we can define a handler for exception catching directly by folding over the _Prog_ type, following the _scoped effects_ approach by Wu et al. [39]:
\[\mathit{hCatch}:\mathit{Prog}\ (\mathit{Catch}+h)\ \alpha\Rightarrow\mathit{Prog}\ h\ (\mathit{Maybe}\ \alpha)\]
\[\begin{array}{rl}\mathit{hCatch}=\|\ &(\mathsf{in}\circ\iota_{1}\circ\mathit{Just})\\ \blacktriangledown\ &(\lambda x.\,\mathsf{in}\ (\iota_{1}\ \mathit{Nothing}))\\ \blacktriangledown\ &(\lambda x.\,\pi_{1}\ x\gg\!\!=\ \mathit{maybe}\ (\pi_{2}\ x\gg\!\!=\ \mathit{fwd})\ \mathit{id})\\ \blacktriangledown\ &(\mathsf{in}\circ\iota_{2})\ \|^{\alpha+(\mathit{Catch}\ X\ \alpha)+(h\ X\ \alpha)}\end{array}\]
\[\begin{array}{rcl}\alpha,\beta,\gamma,X,Y&\in&\textit{String}\\[4pt]\textit{Kind}\ni k&::=&\star\mid k\leadsto k\\ \textit{KindEnv}\ni\Delta,\Phi&::=&\emptyset\mid\Delta,\alpha:k\\ \textit{Type}\ni\tau&::=&\alpha\mid X\mid\tau\ \tau\mid\lambda X.\tau\mid\mu(\tau)\mid\tau\Rightarrow\tau\mid\mathbb{0}\mid\mathbb{1}\mid\tau\times\tau\mid\tau+\tau\\ \textit{Scheme}\ni\sigma&::=&\forall\alpha.\sigma\mid\tau\end{array}\]

Figure 1: Type syntax
Where the function \(\mathit{fwd}\) establishes that \(\mathit{Maybe}\) commutes with the \(\mathit{Prog}\) type in a suitable way:
\[\mathit{fwd}:\mathit{Maybe}\;(\mathit{Prog}\;h\;(\mathit{Maybe}\;\alpha)) \Rightarrow\mathit{Prog}\;h\;(\mathit{Maybe}\;\alpha)\]
That is, we show that \(\mathit{Prog}\;h\) is a _modular carrier_ for \(\mathit{Maybe}\)[34].
As demonstrated, our calculus supports defining higher-order effects and their interpretations. To conveniently sequence higher-order computations we typically also want to use a monadic bind function, such as \(\gg\!\!=\,:\mathit{Prog}\ h\ \alpha\to(\alpha\to\mathit{Prog}\ h\ \beta)\to\mathit{Prog}\ h\ \beta\). While it is possible to define monadic bind for \(\mathit{Free}\) from Section 2.2 in terms of a plain fold, defining the monadic bind for \(\mathit{Prog}\) generally requires a _generalized fold_ [9, 40]. Adding this and other recursion principles [27] to our calculus is future work.
## 3 The Calculus
The previous section demonstrated how a language with built-in support for functors, folds, and fixpoints provides support for defining and working with state-of-the-art techniques for type safe modular programming. In this section we present a core calculus for such a language. The basis of our calculus is the first-order fragment of System \(F^{\omega}\)--i.e., the polymorphic \(\lambda\)-calculus with kinds, where universal quantification is limited to prenex normal form a la Hindley-Milner. Additionally, the syntax of types, defined in Figure 1, includes primitives for constructing recursive types (\(\mu(-)\)), products (\(\times\)) and coproducts (\(+\)), as well as a unit type (\(\mathbb{1}\)) and empty type (\(\mathbb{0}\)). In the definition of the syntax of types, the use of \(\forall\)-types is restricted by stratifying the syntax into two layers, types and type schemes. Consequently, our calculus is, by design, _predicative_: \(\forall\)-types can quantify over types but not type schemes.
The motivation for this predicative design is that it permits a relatively straightforward categorical interpretation of \(\forall\)-types in terms of \(\mathit{ends}\) (see Section 4.2). Whereas the restriction of universal quantification to prenex normal form is usually imposed to facilitate type inference, our calculus does not support inference in its current form due to the structural treatment of data types.
In a structural setting, inference requires the reconstruction of (recursive) data type definitions from values, which is, in general, not possible.
We remark that the current presentation of the type system is _declarative_, meaning certain algorithmic aspects crucial for type checking, such as normalization and equality checking of types, are not covered in the current exposition. Regarding decidability of the type system: our system is a subset of System \(F_{\omega}\), whose Church-style formulation is decidable while its Curry-style formulation is not. As such, we expect our type system to inherit these properties. Since we are restricting ourselves to a predicative subset of \(F_{\omega}\), we are optimistic that the Curry-style formulation of our type system will be decidable too, but verifying this expectation is future work.
### Well-Formed Types
Types are well-formed with respect to a kind \(k\), describing the arity of a type's parameters, if it has any. Well-formedness of types is defined using the judgment \(\Delta\mid\Phi\vdash\tau:k\), stating that the type \(\tau\) has kind \(k\) under contexts \(\Delta\) and \(\Phi\). Similarly, well-formedness of type schemes is defined by the judgment \(\Delta\vdash\sigma\), stating that the type scheme \(\sigma\) is well-formed with respect to the context \(\Delta\).
Figure 2: Well-formedness rules for types and type schemes
Following Johann et al. [21], well-formedness of types is defined with respect to two contexts, one containing functorial variables (\(\Phi\)), and one containing variables with mixed variance (\(\Delta\)). Specifically, the variables in the context \(\Phi\) are restricted to occur only in _strictly positive_ [1, 13] positions (i.e., they can never appear to the left of a function arrow), while the variables in \(\Delta\) can have mixed variance. This restriction on the occurrence of the variables in \(\Phi\) is enforced in the well-formedness rule for function types, K-Fun, which requires that its domain is typed under an empty context of functorial variables, preventing the domain type from dereferencing any functorial variables bound in the surrounding context. While it may seem overly restrictive to require type expressions to be strictly positive--rather than merely positive--in \(\Phi\), this is necessary to ensure that \(\mu\)-types, as well as their introduction and elimination forms, have a well-defined semantics (see Section 4.2). Variables in \(\Phi\) are bound by type-level \(\lambda\)-abstraction, meaning that any type former with kind \(k_{1}\leadsto k_{2}\) is functorial in its argument. In contrast, the variables in \(\Delta\) are bound by \(\forall\)-quantification.
Products (\(\times\)), coproducts (\(+\)), units (1) and empty types (0) can be constructed at any kind, reflecting the fact that the corresponding categorical (co)limits can be lifted from Set to its functor categories by computing them pointwise. This pointwise lifting of these (co)limits to functor categories is reflected in the \(\beta\) equalities for these type formers (shown in Figure 5), which allow an instance at kind \(k_{1}\leadsto k_{2}\), when applied with a type argument, to be replaced with an instance at kind \(k_{2}\).
The well-formedness judgments for types effectively define a (simply typed) type-level \(\lambda\)-calculus with base "type" \(\star\). Consequently, the same type has multiple equivalent representations in the presence of \(\beta\)-redexes, raising the question of how we should deal with type normalization. The approach we adopt here is to add a non-syntactic conversion rule to the definition of our type system that permits any well-formed term to be typed under an equivalent type scheme. Section 3.3 discusses type equivalence in more detail.
### Well-Typed Terms
Figure 3 shows the term syntax of our calculus. Along with the standard syntactic forms of the polymorphic \(\lambda\)-calculus we include explicit type abstraction and application, as well as introduction and elimination forms for recursive types (in/unin), products (\(\pi_{1}/\pi_{2}/-\bigtriangleup\)\(-\)), coproducts (\(\iota_{1}/\iota_{2}/-\)\(\blacktriangledown\)\(-\)), and the unit (tt) and empty (absurd) types. Furthermore, the calculus includes dedicated primitives for mapping (map\(\langle\,-\,\rangle^{-}\)) and folding (\(\|\)\(-\)\(\|^{-}\)) over a type.
Figure 3 also includes the definition of _arrow types_. In spirit of the syntactic notion of natural transformations used by Abel et al. [2, 3, 4] to study generalized (Mendler) iteration, an arrow type of the form \(\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2}\) (where \(\tau_{1},\tau_{2}:k\)) defines the type of _morphisms_ between the objects that interpret \(\tau_{1}\) and \(\tau_{2}\). Arrow types are defined by induction over \(k\), since the precise meaning of morphism for any pair of types depends on their kind. If \(k=\star\), then a morphism between \(\tau_{1}\) and \(\tau_{2}\) is simply a function type. However, if \(\tau_{1}\) and \(\tau_{2}\) have one or more type argument,
they are to be interpreted as objects in a suitable functor category, meaning that their morphisms are natural transformations. This is reflected in the definition of arrow types, by unfolding an arrow \(\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2}\) to a \(\forall\)-type that closes over all type arguments of \(\tau_{1}\) and \(\tau_{2}\), capturing the intuition that polymorphic functions correspond to natural transformations.1 For instance, we would type the inorder traversal of binary trees as \(\mathit{inorder}:\mathit{Tree}\stackrel{{\star\leadsto\star}}{{\longrightarrow}}\mathit{List}\) (\(\triangleq\forall\alpha.\,\mathit{Tree}\ \alpha\Rightarrow\mathit{List}\ \alpha\)), describing a natural transformation between the _Tree_ and _List_ functors.
Footnote 1: This intuition is made formal by Theorem 1 in Section 4.4.
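Spelled out, the inductive definition described above can be rendered as follows (our reconstruction from the prose; the official definition appears in Figure 3):

\[\tau_{1}\stackrel{{\star}}{{\longrightarrow}}\tau_{2}\;\triangleq\;\tau_{1}\Rightarrow\tau_{2}\qquad\qquad\tau_{1}\stackrel{{ k_{1}\leadsto k_{2}}}{{\longrightarrow}}\tau_{2}\;\triangleq\;\forall\alpha.\,\tau_{1}\ \alpha\stackrel{{ k_{2}}}{{\longrightarrow}}\tau_{2}\ \alpha\]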
The typing rules are shown in Figure 4. The rules rely on arrow types for introduction and elimination forms. For example, products can be constructed at any kind (following rule K-Product in Figure 2), so the rules for terms that operate on them (i.e., T-Fst, T-Snd, and T-Fork) use arrow types at an arbitrary kind \(k\). Consequently, arrow types should correspond to morphisms in a suitable category, such that the semantics of a product type and its introduction/elimination forms can be expressed as morphisms in this category.
### Type Equivalence
In the presence of type level \(\lambda\)-abstraction and application, the same type can have multiple representations. For this reason, the type system defined in Figure 4 includes a non-syntactic conversion rule that allows a well-typed term to be re-typed under any equivalent type scheme. The relevant equational theory for types is defined in Figure 5, and includes the customary \(\beta\) and \(\eta\) equivalences for \(\lambda\)-terms, as well as \(\beta\) rules for product, sum, unit, and empty types. The equations shown in Figure 5 are motivated by the semantic model we discuss in Section 4, in the sense that equivalent types are interpreted to naturally isomorphic functors. The relation is also reflexive and transitive, motivated by respectively the identity and composition of natural isomorphisms. Viewing the equalities in Figure 5 left-to-right provides us with a basis for a normalization strategy for types, which would be required for implementing the type system.
Figure 3: Term syntax
## 4 Categorical Semantics
In this section, we consider how to define a categorical semantics for our calculus, drawing inspiration from the semantics defined by Johann and Polonsky [22] and Johann et al. [21, 20]. To define this semantics, we must show that each type in our calculus corresponds to a functor, and that all such functors have initial algebras. In Section 4.3 we discuss the requirements for these initial algebras to exist, and argue informally why they should exist for the functors interpreting our types. Although Johann and Polonsky [22] present a detailed argument for
Figure 4: Well-formed terms
the existence of initial algebras of the functors underlying nested data types, it is still future work to adapt this argument to our setting.
The general setup of our semantics is to interpret types of kind \(\star\) as objects in Set (the category of sets), higher-order types as functors on Set, and type schemes as objects in \(\textsc{Set}_{1}\) (the category of large sets). This size bump is necessary to model the universal quantification over types in type schemes. Crucially, Set is a _full subcategory_ of \(\textsc{Set}_{1}\), as witnessed by the existence of a fully faithful inclusion functor \(I\):
\[\textsc{Set}\stackrel{{ I}}{{\hookrightarrow}}\textsc{Set}_{1}\]
Assuming cumulative universes (i.e., the collection of all large sets also includes all small sets), \(I\) is just the identity functor. We remark that both Set and \(\textsc{Set}_{1}\) are _complete_, _cocomplete_, and _cartesian closed_. Importantly, since \(I\) is fully faithful, the cartesian closed structure of Set is reflected in \(\textsc{Set}_{1}\) for those objects that lie in the image of \(I\).
The subcategory relation between Set and \(\textsc{Set}_{1}\) reflects the syntactic restriction of types to rank-1 polymorphism: all objects in Set can also be found in \(\textsc{Set}_{1}\), but \(\textsc{Set}_{1}\) is sufficiently larger than Set that it also includes objects modelling quantification over objects in Set. This intuition is embodied by the fact that every functor \(F:\mathcal{C}^{\mathsf{op}}\times\mathcal{C}\to\textsc{Set}_{1}\), where \(\mathcal{C}\) is smaller than \(\textsc{Set}_{1}\) (which includes Set), has an end in \(\textsc{Set}_{1}\). This follows from completeness of \(\textsc{Set}_{1}\) [26, p. 224, corollary 2]. We discuss the use of ends for modelling universal quantification in more detail in Section 4.2.
### Interpreting Kinds and Kind Environments
We associate with each kind \(k\) a category whose objects interpret the types of that kind. The semantics of kinds is defined by induction over \(k\), where we map the base kind \(\star\) to Set, and kinds of the form \(k_{1}\leadsto k_{2}\) to the category of functors between their domain and codomain.2
Footnote 2: Here, **CAT** denotes the (very large) category of large categories. Although Set itself is locally small, its functor categories have a large set of morphisms.
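Written out, the interpretation described above is (our transcription of the prose):

\[\llbracket\,\star\,\rrbracket=\textsc{Set}\qquad\qquad\llbracket\,k_{1}\leadsto k_{2}\,\rrbracket=[\llbracket\,k_{1}\,\rrbracket,\llbracket\,k_{2}\,\rrbracket]\]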
Figure 5: Equational theory for types
By interpreting types of kind \(k_{1}\leadsto k_{2}\) as objects in a functor category, we formalize the intuition that higher-order types correspond to functors. The semantics of kind contexts is then defined on a per-entry basis, as a chain of products of the categories that interpret their elements.
\[\begin{array}{lcl}\llbracket\ -\ \rrbracket&:&\textit{Context}\to\mathbf{CAT}\\ \llbracket\ \emptyset\ \rrbracket&=&\bullet\\ \llbracket\ \Delta,\alpha:k\ \rrbracket&=&\llbracket\ \Delta\ \rrbracket\times\llbracket\ k\ \rrbracket\end{array}\]
Here, \(\bullet\) denotes the _trivial category_, which has a single object, \(*\), together with its identity morphism, \(id_{*}\). It is worth mentioning that \(\bullet\) and \(-\times-\), together with the operation of constructing a functor category, \([-,-]\), imply that \(\mathbf{CAT}\) is a cartesian closed category. We will use this cartesian closed structure to give a semantics to the fragment of well-formed types that corresponds to the simply-typed \(\lambda\)-calculus.
### Interpreting Types
Since a well-formed type \(\Delta\mid\Phi\vdash\tau:k\) is intended to be functorial in all variables in \(\Phi\), it is clear that its semantics should be a functor over the category associated with \(\Phi\) (i.e., \(\llbracket\Phi\rrbracket\)). But what about the variables in \(\Delta\), which can occur both in covariant and contravariant positions? For example, in the type of the identity function, \(\forall\alpha.\alpha\Rightarrow\alpha\), we cannot interpret the sub-derivation for \(\alpha\Rightarrow\alpha\) as a functor over the category interpreting its free variables, since there would not be a sensible way to define its action on morphisms due to the negative occurrence of \(\alpha\). To account for the mixed variance of universally quantified type variables, we instead adopt a _difunctorial semantics_, interpreting types as functors on the product category \(\llbracket\Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket\) (similar representations of type expressions with mixed variance appear, for example, when considering Mendler-style inductive types [36], or the object calculus semantics by Glimming and Ghani [17]). Well-formed types (left) and type schemes (right) are interpreted as functors over their contexts of the following form:
\[\llbracket\ \Delta\mid\Phi\vdash\tau:k\ \rrbracket:(\llbracket \Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket)\times \llbracket\Phi\rrbracket\to\llbracket k\rrbracket\qquad\llbracket\ \Delta\vdash\sigma\ \rrbracket:\llbracket \Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket\to\textsc{ Set}_{1}\]
Ultimately, the goal of this setup is to interpret \(\forall\)-types as _ends_ in \(\textsc{Set}_{1}\), which allows us to formally argue that terms that are well-formed with an arrow type of the form \(\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2}\) (which unfolds to \(\forall\bar{\alpha}.\tau_{1}\ \bar{\alpha}\Rightarrow\tau_{2}\ \bar{\alpha}\)) correspond, in a suitable sense, to the natural transformations between the functors interpreting \(\tau_{1}\) and \(\tau_{2}\). Or, put differently, terms with an arrow type define a morphism between the interpretation of their domain and codomain. We discuss the semantics of universal quantification further in Section 4.2, and give a more precise account of the relation between arrow types and natural transformations in Section 4.4.
Figure 6 defines the semantics of well-formed types and type schemes. The interpretation of the empty type, unit type, and (co)product types follow immediately from (co)completeness of Set. Since they can be constructed at any kind, the semantics of (co)product types depends crucially on the fact that functor
categories preserve all (co)limits of their codomain category, which implies that \(\llbracket k\rrbracket\) is (co)complete for any \(k\). To interpret variables, we utilize the cartesian closed structure of \(\mathbf{CAT}\) to compute an appropriate projection based on the position of the variable in the environment.
\[\begin{array}{l}\mathbf{lookup}_{\alpha}^{\Delta}\;\;\colon\;\llbracket\; \Delta\;\rrbracket\rightarrow\llbracket\;k\;\rrbracket\\ \mathbf{lookup}_{\alpha}^{\Delta,\alpha:k}\mapsto\pi_{2}\\ \mathbf{lookup}_{\alpha}^{\Delta,\beta:k}\mapsto\mathbf{lookup}_{\alpha}^{ \Delta}\,\circ\,\pi_{1}\;\;\;\;\;(\text{where }\alpha\neq\beta)\end{array}\]
Similarly, the cartesian closed structure of \(\mathbf{CAT}\) also implies the existence of functors \(\mathsf{eval}:[\mathcal{C},\mathcal{D}]\times\mathcal{C}\rightarrow\mathcal{D}\) and \(\mathsf{curry}(F):\mathcal{C}\rightarrow[\mathcal{D},\mathcal{E}]\), for any \(F:\mathcal{C}\times\mathcal{D}\rightarrow\mathcal{E}\), which immediately provide a semantics for type-level application and abstraction respectively. The remaining type and type scheme constructors are interpreted using specifically-defined functors. Although their definitions are typical examples of how (co)limits are lifted to functor categories by computing them pointwise, we discuss the definition of these functors separately and in more detail respectively in Section 4.2 (recursive types), Section 4.2 (function types), and Section 4.2 (\(\forall\)-types).
#### 4.2.1 Recursive Types
Following the usual categorical interpretation of inductive data types [18], the semantics of recursive types is given by _initial algebras_. We summarize the setup here. An _\(F\)-algebra_ for an endofunctor \(F:\mathcal{C}\to\mathcal{C}\) is defined as a tuple \((A,\alpha)\) of an object \(A\in\mathcal{C}\) (called the _carrier_), and a morphism \(\alpha:FA\to A\). An _algebra homomorphism_ between \(F\)-algebras \((A,\alpha)\) and \((B,\beta)\) is given by a morphism \(f:A\to B\) such that \(f\circ\alpha=\beta\circ F(f)\), i.e., the following diagram commutes.
Figure 6: Semantics of well-formed types and type schemes
\(F\)-algebras and their homomorphisms form a category. If \(F\) is an endofunctor, we denote the initial object of the category of \(F\)-algebras (which, if it exists, we refer to as the initial algebra) as \((\mu F,\mathsf{in})\). Initial algebras give a semantics to inductive data types, with their universal property providing an induction principle. Given an \(F\)-algebra \((A,\alpha)\), we denote the unique \(F\)-algebra homomorphism that factors through \(A\) by \(\mathsf{cata}(\alpha):\mu F\to A\). Instantiating the diagram above with \(\mathsf{cata}(\alpha)\) gives us the familiar universal property of folds, \(\mathsf{cata}(\alpha)\circ\mathsf{in}=\alpha\circ F(\mathsf{cata}(\alpha))\), which defines their computational behavior.
To interpret recursive types in our calculus, we construct the functor \(\boldsymbol{\mu}(F)\), which sends objects pointwise to the initial algebras of a functor \(F:\mathcal{C}\to[\mathcal{D},\mathcal{D}]\). For a morphism \(f:X\to Y\), the action of \(\boldsymbol{\mu}(F)\) on \(f\) is defined by factoring through the algebra defined by precomposing the initial algebra of \(F(Y)\) with the action of \(F\) on \(f\), which defines a natural transformation \(F(X)\stackrel{{\cdot}}{{\to}}F(Y)\), at component \(\mu(F(Y))\).
\[\begin{array}{l}\boldsymbol{\mu}(F)(-)\ :\ \mathcal{C}\to\mathcal{D}\\ \boldsymbol{\mu}(F)(x)\ \mapsto\mu(F(x))\\ \boldsymbol{\mu}(F)(f)\ \mapsto\mathsf{cata}(\mathsf{in}\circ F(f)_{\mu(F(Y))}) \end{array}\]
In general, it is not guaranteed that an initial algebra exists for an arbitrary endofunctor \(F:\mathcal{C}\to\mathcal{C}\). Typically, the existence of an initial algebra is shown by iterating \(F\) and showing that this iteration converges, applying the classic theorem by Adamek [5]. This approach imposes some additional requirements on the functor \(F\) and the underlying category \(\mathcal{C}\), which we discuss in more detail in Section 4.3.
#### 4.2.2 Function Types
The functor \(\mathbf{exp}(-)\) is defined by mapping onto exponential objects in Set, but we have to take some additional care to ensure that we can still define its action on morphisms, as the polarity of free variables is reversed in the domain of a function type. Indeed, when computed pointwise, exponential objects give rise to a bifunctor of the form \(\mathcal{C}^{\mathsf{op}}\to\mathcal{C}\to\mathcal{C}\), meaning that functors are not, in general, closed under exponentiation. To some extent we anticipated this situation already in the design of our type system by defining the well-formedness rule for function types such that the context of functorial variables, \(\Phi\), is discarded in its domain. Of course, the variables in \(\Delta\) can occur both in covariant and contravariant positions, but by adopting a difunctorial semantics we limit ourselves to a specific class of functors that is closed under exponentiation. The key observation is that constructing the opposite category of the product of a category and its opposite is an idempotent (up to isomorphism) operation. That is, we have the following equivalence of categories: \((\mathcal{C}^{\mathsf{op}}\times\mathcal{C})^{\mathsf{op}}\simeq\mathcal{C}^{\mathsf{op}}\times\mathcal{C}\). As a result, a pointwise mapping of difunctors to exponential objects does give rise to a new difunctor. We use this fact to our advantage to define the following
functor \(\mathbf{exp}(F,G)\) for functors \(F:\mathcal{C}^{\mathsf{op}}\times\mathcal{C}\to\mathcal{E}\) and \(G:(C^{\mathsf{op}}\times\mathcal{C})\times\mathcal{D}\to\mathcal{E}\), of which the interpretation of function types is an instance.
\[\begin{array}{ll}\mathbf{exp}(F,G)(-)&:\ (\mathcal{C}^{\mathsf{op}}\times\mathcal{C})\times\mathcal{D}\to\mathcal{E}\\ \mathbf{exp}(F,G)((x,y),z)&\mapsto G((x,y),z)^{F(y,x)}\\ \mathbf{exp}(F,G)((f,g),h)&\mapsto\mathsf{curry}(G((f,g),h)\circ\mathbf{eval}\circ(id_{\mathbf{exp}(F,G)((x,y),z)}\times F(g,f)))\end{array}\]
We remark that \(\mathbf{exp}(F,G)\) does not define an exponential object in the functor category \([(\mathcal{C}^{\mathsf{op}}\times\mathcal{C})\times\mathcal{D},\mathcal{E}]\). Fortunately, for defining the semantics of term-level \(\lambda\)-abstraction and application it is sufficient that the action on objects maps to exponentials in Set.
#### 4.2.3 Universal quantification
The semantics of universal quantification is expressed in terms of ends in the category \(\textsc{Set}_{1}\). If \(F:\mathcal{C}^{\mathsf{op}}\times\mathcal{C}\to\mathcal{D}\) is a functor, then an _end_ of \(F\) is an object \(\int_{x\in\mathcal{C}}F(x,x)\in\mathcal{D}\) equipped with a projection map given by an extranatural transformation \(\pi_{x}:\int_{c\in\mathcal{C}}F(c,c)\to F(x,x)\). Formally, an end of \(F\) is defined as the universal wedge of the following diagram:
\[F(x,x)\stackrel{{ F(id_{x};f)}}{{\longrightarrow}}F(x,y) \stackrel{{ F(f,id_{y})}}{{\longleftarrow}}F(y,y)\]
for all \(x,y\in\mathcal{C}\) and \(f:x\to y\). The universal property of ends then states that any other wedge \(W\in\mathcal{D}\), with maps \(i:W\to F(x,x)\) and \(j:W\to F(y,y)\), factors uniquely through \(\int_{c\in\mathcal{C}}F(c,c)\).
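A familiar special case (standard, see Mac Lane [26]) illustrates why ends are the right tool here: for functors \(F,G:\mathcal{C}\to\textsc{Set}\), the end of the hom-bifunctor is precisely the set of natural transformations,

\[\int_{x\in\mathcal{C}}\textsc{Set}(F(x),G(x))\;\cong\;\mathrm{Nat}(F,G),\]

which is the sense in which the \(\forall\)-types of the calculus internalize naturality.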
To model the more general situation where a \(\forall\)-quantified type can contain free variables that are bound by another quantifier above it in the lexical hierarchy, we define the semantics of universal quantification in terms of the _end functor_, \(\mathbf{end}(-)\), which for a functor \(G:\mathcal{C}\to[\mathcal{D}^{\mathsf{op}}\times\mathcal{D},\mathcal{E}]\) defines a functor \(\mathbf{end}(G):\mathcal{C}\to E\) whose object action is computed pointwise from ends in \(\mathcal{E}\). Its action on morphisms, \(\mathbf{end}(f):\int_{d\in\mathcal{D}}G(X)(d,d)\to\int_{d\in\mathcal{D}}G(Y)( d,d)\), follows from the universal property stated above. To define the action on morphisms, we observe that the object \(\int_{d\in\mathcal{D}}G(X)(d,d)\) is a wedge of the following diagram.
\[G(Y)(x,x)\stackrel{{ G(Y)(id_{x},f)}}{{\longrightarrow}}G(Y)(x,y) \stackrel{{ G(Y)(f,id_{y})}}{{\longleftarrow}}G(Y)(y,y)\]
Where the legs of the wedge are constructed by composing the projection map with the action of \(G\) on \(f\), i.e., \(G(f)(x,x)\circ\pi_{x}\). By universality, this wedge factors uniquely through the end \(\int_{d\in\mathcal{D}}G(Y)(d,d)\). This factorization defines the morphism action \(\mathbf{end}(f)\).
\[\begin{array}{ll}\mathbf{end}(G)(-)&:\ \mathcal{C}\to\mathcal{E}\\ \mathbf{end}(G)(x)&\mapsto\int_{d\in\mathcal{D}}G(x)(d,d)\\ \mathbf{end}(G)(f)&\mapsto\mathsf{factor}(\int_{d\in\mathcal{D}}G(x)(d,d)) \end{array}\]
An important subtlety here is that \(G(X)\) should have an end in \(\mathcal{E}\) for every \(X\). In our case, this is a consequence of completeness of \(\textsc{Set}_{1}\).3 To actually use the functor **end** to define the semantics of universal quantification, we need to precompose the semantics of its body with the **sift** functor to separate the quantified variable from the remainder of the context.
Footnote 3: See Mac Lane [26] chapter 9.5 corollary 2.
\[\mathbf{sift}:((\llbracket\Delta\rrbracket\times\llbracket k\rrbracket)^{\mathsf{op}}\times(\llbracket\Delta\rrbracket\times\llbracket k\rrbracket))\times\llbracket\Phi\rrbracket\rightarrow((\llbracket\Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket)\times\llbracket\Phi\rrbracket)\times(\llbracket k\rrbracket^{\mathsf{op}}\times\llbracket k\rrbracket)\]
We note that **sift** defines an isomorphism in **CAT**.
### On the Existence of Initial Algebras
In general, it is not the case that any endofunctor has an initial algebra. For certain classes of endofunctors, it can be shown that an initial algebra exists by means of Adamek's theorem [5]. Here, we present a condensed argument for why we expect that functors interpreting well-formed types of kind \(k\leadsto k\) (for any \(k\)) have initial algebras; a more thorough formal treatment of the construction of initial algebras is a subject of further study.
The intuition behind Adamek's construction is that repeated applications of an endofunctor \(F:\mathcal{C}\rightarrow\mathcal{C}\) converge after infinite iterations, reaching a fixpoint. If \(\mathcal{C}\) has an initial object and _\(\omega\)-colimits_,4 we can define the initial algebra of \(F\) as the \(\omega\)-colimit of the following chain:
Footnote 4: That is, colimits over diagrams defined as a functor on the thin category generated from the poset of natural numbers.
\[\bot\stackrel{{!}}{{\longrightarrow}}F\bot\stackrel{{ F!}}{{\longrightarrow}}FF\bot\stackrel{{ FF!}}{{\longrightarrow}}FFF\bot\stackrel{{ FFF!}}{{\longrightarrow}}\ldots\]
Where \(\bot\) is the initial object in \(\mathcal{C}\) and \(!_{X}:\bot\to X\) the unique map from \(\bot\) to \(X\). A crucial stipulation is that \(F\) should be _\(\omega\)-cocontinuous_, meaning that it preserves \(\omega\)-colimits.
Thus, for the functors interpreting higher-order types to have an initial algebra, we must argue that all higher-order types are interpreted as \(\omega\)-cocontinuous functors. This prompts a refinement of the semantics of kinds discussed in Section 4.1, where we impose the additional restriction that the interpretation of a kind of the form \(k_{1}\leadsto k_{2}\) is an \(\omega\)-cocontinuous functor from \(\llbracket k_{1}\rrbracket\) to \(\llbracket k_{2}\rrbracket\). Subsequently, we must show that the semantics in Figure 6 actually inhabits this refined semantics.
Johann and Polonsky [22] present an inductive argument showing the existence of initial algebras for a universe of higher-kinded data types that is similar to our definition of well-formed types in Figure 2. While their proof establishes the more general property of \(\lambda\)-cocontinuity (for an arbitrary limit ordinal \(\lambda\)) for the functors interpreting higher-kinded types, we expect that the relevant cases of their inductive proof--specifically the cases for products, coproducts, type application, and the \(\boldsymbol{\mu}\) functor--can be adapted to our setting. What remains is to show that the semantics of type-level \(\lambda\)-abstraction and function types is
an \(\omega\)-cocontinuous functor. For \(\lambda\)-abstraction, we transport along the currying isomorphism, which should preserve \(\omega\)-cocontinuity. For function types, we require that the functor \((-)^{X}:\textsc{Set}\to\textsc{Set}\) is \(\omega\)-cocontinuous for all \(X\), which, as Johann and Polonsky [22] point out, is indeed the case. Expanding this proof sketch into a full proof of the existence of initial algebras is future work.
### Arrow Types Correspond to Morphisms
To define the semantics of well-typed terms, it is crucial that we can relate arrow types--i.e., of the form \(\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2}\)--to morphisms in the category \(\llbracket k\rrbracket\). To make this more precise, consider the typing rule for left projections. To define its semantics, we would like to use the cartesian structure of the category \(\llbracket k\rrbracket\), which implies the existence of a _morphism_ \(\pi_{1}:\llbracket k\rrbracket(x\times y,x)\) for \(x,y\in\llbracket k\rrbracket\). However, the rule T-Fst implies that \(\boldsymbol{\pi_{1}}\) should be related to an _object_ in \(\textsc{Set}_{1}\), i.e., \(\llbracket\tau_{1}\times\tau_{2}\stackrel{{ k}}{{\longrightarrow}}\tau_{1}\rrbracket\). Mediating between morphisms in \(\llbracket k\rrbracket\) and objects in \(\textsc{Set}_{1}\) calls for a suitable currying/uncurrying isomorphism for arrow types, though we highlight that the required isomorphism is different from the usual currying isomorphism arising from the existence of right adjoints for the tensor product in closed monoidal categories, in the sense that \(\llbracket\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2}\rrbracket\) does not define an internal hom for the objects \(\llbracket\tau_{1}\rrbracket,\llbracket\tau_{2}\rrbracket\) but rather internalizes the morphisms between these objects in a _different_ category.
Theorem 4.1: _Given a kind \(k\), morphisms of the category \(\llbracket k\rrbracket\) are internalized as objects in \(\textsc{Set}_{1}\) through the following bijection between hom-sets:_
\[\llbracket k\rrbracket(F(\boldsymbol{\delta})\times\llbracket\tau_{1} \rrbracket(\boldsymbol{\delta}^{\circ}),\llbracket\tau_{2}\rrbracket( \boldsymbol{\delta}))\quad\simeq\quad\textsc{Set}_{1}(F(\boldsymbol{\delta}),\llbracket\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{2} \rrbracket(\boldsymbol{\delta})) \tag{1}\]
_Where \(\boldsymbol{\delta}\in\llbracket\Delta\rrbracket^{\mathsf{op}}\times \llbracket\Delta\rrbracket\) and \(\boldsymbol{\delta}^{\circ}\in(\llbracket\Delta\rrbracket^{\mathsf{op}}\times \llbracket\Delta\rrbracket)^{\mathsf{op}}\) its complement, which is defined by swapping the objects representing contravariant respectively covariant occurrences of the variables in \(\Delta\). Let \(F:\llbracket\Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket \to\textsc{Set}_{1}\) be a functor. In a slight abuse of notation, we also write \(F(\boldsymbol{\delta})\) for the "lifting" of \(F\) to an object in the (functor) category \(\llbracket k\rrbracket\) that ignores all the additional variables on which \(\llbracket\tau_{1}\rrbracket\) and \(\llbracket\tau_{2}\rrbracket\) depend._
Proof: We compute the isomorphism as follows, where \(k=k_{1}\leadsto\dots\leadsto k_{n}\leadsto\star\):
\[\begin{array}{rl} &\llbracket k\rrbracket(F(\boldsymbol{\delta})\times\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ}),\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta}))\\ =&\int_{x_{1}\in\llbracket k_{1}\rrbracket}\dots\int_{x_{n}\in\llbracket k_{n}\rrbracket}\textsc{Set}_{1}(F(\boldsymbol{\delta})\times\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ})(x_{1})\dots(x_{n}),\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta})(x_{1})\dots(x_{n}))\\ \simeq&\int_{x_{1}\in\llbracket k_{1}\rrbracket}\dots\int_{x_{n}\in\llbracket k_{n}\rrbracket}\textsc{Set}_{1}(F(\boldsymbol{\delta}),\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta})(x_{1})\dots(x_{n})^{\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ})(x_{1})\dots(x_{n})})\\ \simeq&\textsc{Set}_{1}(F(\boldsymbol{\delta}),\int_{x_{1}\in\llbracket k_{1}\rrbracket}\dots\int_{x_{n}\in\llbracket k_{n}\rrbracket}\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta})(x_{1})\dots(x_{n})^{\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ})(x_{1})\dots(x_{n})})\\ \simeq&\textsc{Set}_{1}(F(\boldsymbol{\delta}),\llbracket\tau_{1}\stackrel{{k}}{{\longrightarrow}}\tau_{2}\rrbracket(\boldsymbol{\delta}))\end{array}\]
The first step of the derivation rewrites the left-hand side of the isomorphism to a sequence of zero or more ends in the category of very large sets, allowing us to apply currying for exponentials in \(\textsc{Set}_{1}\) in the subsequent step. This is justified by cartesian closedness of Set, because the objects \(\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ})(x_{1})\cdots(x_{n})\) and \(\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta})(x_{1})\cdots(x_{n})\) are included in the image of the fully faithful inclusion functor \(I\). Next, we use the fact that the covariant hom-functor \(\textsc{Set}_{1}(x,-)\) is continuous and thus preserves ends:5
Footnote 5: See Mac Lane [26], page 225 Equation 4.
\[\int_{y\in\mathcal{C}}\textsc{Set}_{1}(x,G(y,y))\quad\simeq\quad\textsc{Set}_ {1}(x,\int_{y\in\mathcal{C}}G(y,y)) \tag{2}\]
By repeatedly applying the identity above, we can distribute the aforementioned sequence of ends over the functor \(\textsc{Set}_{1}(F(\boldsymbol{\delta}),-)\). Intuitively, this corresponds to distributing universal quantification over logical implication in the scenario that the quantified variable does not occur freely in the antecedent, which is axiomatized in some flavors of first-order logic, though we apply a much more general instance of the same principle here. The final step then follows from the standard definition of \(\eta\)-equivalence implied by cartesian closedness of \(\mathbf{CAT}\).
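As a sanity check, instantiating the derivation above at the base kind \(k=\star\) (so \(n=0\) and the sequence of ends is empty) collapses Equation (1) to the ordinary currying isomorphism for exponentials in \(\textsc{Set}_{1}\):

\[\textsc{Set}_{1}(F(\boldsymbol{\delta})\times\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ}),\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta}))\quad\simeq\quad\textsc{Set}_{1}(F(\boldsymbol{\delta}),\llbracket\tau_{2}\rrbracket(\boldsymbol{\delta})^{\llbracket\tau_{1}\rrbracket(\boldsymbol{\delta}^{\circ})})\]

In this degenerate case \(\llbracket\tau_{1}\stackrel{{k}}{{\longrightarrow}}\tau_{2}\rrbracket(\boldsymbol{\delta})\) is just the exponential object; it is only at higher kinds, where the ends over the kind arguments appear, that the isomorphism genuinely differs from the monoidal one.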
We write \(\boldsymbol{\uparrow}(-)/\boldsymbol{\downarrow}(-)\) for the functions that transport along the isomorphism defined in Equation (1).
### Interpreting Terms
Well-typed terms, of the form \(\Gamma\vdash M:\sigma\), are interpreted as natural transformations from the interpretation of their context, \(\llbracket\Gamma\rrbracket\), to the interpretation of their type, \(\llbracket\sigma\rrbracket\). At component \(\boldsymbol{\delta}\in\llbracket\Delta\rrbracket^{\mathsf{op}}\times\llbracket\Delta\rrbracket\) this transformation is given by a function with the following type:
\[\llbracket\Gamma\rrbracket(\boldsymbol{\delta})\to\llbracket\sigma\rrbracket(\boldsymbol{\delta})\]

Figure 7 defines this interpretation. The semantics of a type application, for example, instantiates the interpretation of the polymorphic term
at component \(\llbracket\tau\rrbracket\) of the end interpreting the type of \(M\). For the introduction and elimination forms of (co)product types, and the unit and empty type, we define the semantics in terms of the corresponding (co)limits in \(\textsc{Set}_{1}\), applying the currying isomorphism defined in Equation (1) to mediate with arrow types. Similarly, a semantics for the mapping and folding primitives also follows from the currying isomorphism defined in Equation (1).
Both the denotation function \(\llbracket-\rrbracket\) and the function it computes are total. Consequently, a well-typed value can be computed from every well-typed term. In this sense, the categorical model provides us with a sound computational model of the calculus, which we could implement by writing a definitional interpreter [33]. In the next section, we will discuss how a more traditional small-step operational semantics can be derived from the same categorical model.
Figure 7: Semantics of Well-Typed Terms.
## 5 Operational Semantics
The previous section gave an overview of a categorical semantics of our calculus. In this section, we define a small-step operational semantics for our calculus, and discuss how it relates to the categorical model.
### Reduction Rules
Figure 8: Definition of values \(v\) and evaluation contexts \(E\).
We define our operational semantics as a reduction semantics in the style of Felleisen and Hieb [16]. Figure 8 shows the definition of values and evaluation contexts. In our definition of values, we must account for the fact that language primitives can exist at any kind. For example, the primitive \(\iota_{1}\) by itself is a value of type \(\tau_{1}\stackrel{{ k}}{{\longrightarrow}}\tau_{1}+\tau_{2}\). Simultaneously, applying \(\iota_{1}\) with a value and/or a sequence of type arguments (the number of which depends on the kind of its arrow type), also yields a value. In fact, all the _partial applications_ of \(\iota_{1}\) with only some of its type arguments, or all type arguments but no value argument, are also values. We use gray highlights to indicate such an optional application with type and/or value arguments in the definition of values.
Figure 9 defines the reduction rules. We split the rules in two categories: the first set describes \(\beta\)-reduction6 for the various type formers, while the second set determines how the \(\mbox{\sf{map}}\langle\,-\,\rangle^{-}\) primitive computes. Similar to the definition of values and contexts in Figure 8, we use the notation \(\overline{\tau}\) to depict a sequence of zero or more type applications. Unlike for values, these type arguments are not optional; terms typed by an arrow type must be fully applied to all their type arguments before they reduce. The notation \(N\bullet M\) is used as a syntactic shorthand for the composition of two terms of arrow type, which is defined through \(\eta\)-expansion of all its type arguments and the term argument. The reduction rules for the \(\mbox{\sf{map}}\langle\tau\rangle^{M}\) primitive are type directed, in the sense that the selected reduction depends on \(\tau\). This is necessary, because in an application of \(\mbox{\sf{map}}\langle\tau\rangle^{M}\)
to a value, there is no way to decide whether to apply the function or to push the \(\mbox{\boldmath{map}}\langle-\,\rangle^{-}\) further inwards by only looking at the value.
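To make the type-directed dispatch concrete, the following is a minimal sketch in Python (not the calculus itself) of how an interpreter might select a reduction by inspecting the head of the type; the constructors `Var`, `Const`, `Sum`, and `Prod` are hypothetical stand-ins for a fragment of our type syntax:

```python
from dataclasses import dataclass

@dataclass
class Var:    # the position being mapped over
    pass

@dataclass
class Const:  # a constant type: map is the identity here
    pass

@dataclass
class Sum:
    left: object
    right: object

@dataclass
class Prod:
    left: object
    right: object

def map_(ty, f, v):
    """Push a map over type `ty` into value `v`, applying `f` at the hole."""
    if isinstance(ty, Var):
        return f(v)
    if isinstance(ty, Const):
        return v
    if isinstance(ty, Sum):
        tag, w = v  # sum values are tagged: (1, w) or (2, w)
        return (tag, map_(ty.left if tag == 1 else ty.right, f, w))
    if isinstance(ty, Prod):
        return (map_(ty.left, f, v[0]), map_(ty.right, f, v[1]))
    raise TypeError("unknown type former")

# The same value reduces differently under different type annotations:
assert map_(Var(), lambda n: n + 1, 3) == 4    # apply the function
assert map_(Const(), lambda n: n + 1, 3) == 3  # pass through unchanged
```

The two assertions at the end illustrate the point made above: the value `3` alone does not determine the reduction; only the type annotation does.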
### Relation to the Denotational Model
The reduction rules shown in Figure 9 define a computational model for our calculus. We now discuss how this model arises from the denotational model discussed in Section 4. Informally speaking, reducing a term should not change its meaning. This intuition is reflected by the following implication, which states that if \(M\) reduces to \(N\), their semantics should be equal.7
Footnote 7: This property implies what Devesas Campos and Levy [15] call _soundness_ of the denotational model with respect to the operational model. Their soundness property is about a big-step relation; ours is small-step.
\[M\longrightarrow N\implies\llbracket M\rrbracket=\llbracket N\rrbracket \tag{3}\]
While we do not give a formal proof of the implication above, by relying on the categorical model to inform how terms compute we can be reasonably confident that our semantics does not contain any reductions that violate this property. That is, all the reductions shown in Figure 9 are supported by an equality of morphisms in the categorical model.
Figure 9: Reduction rules.

What does this mean, specifically? The semantics of well-typed terms is given by a natural transformation, so if \(M\longrightarrow N\), \(M\) and \(N\) should be interpreted as
the same natural transformation. Equivalence of natural transformations is defined pointwise in terms of the equality relation for morphisms in the underlying category. In our case, this is the category Set, as terms are interpreted as natural transformations between functors into Set. By studying the properties--expressed as equalities between morphisms--of the constructions that give a semantics to the different type formers, and reifying these equalities as syntactic reduction rules, we obtain an operational model that we conjecture respects the denotational model by construction.
Let us illustrate this principle with a concrete example. The semantics of a sum type \(\tau_{1}+\tau_{2}:k\) is given by a coproduct in the category \(\llbracket k\rrbracket\). The universal property of coproducts tells us that \([f,g]\circ\iota_{1}=f\) and \([f,g]\circ\iota_{2}=g\), or in other words, constructing and then immediately deconstructing a coproduct is the same as doing nothing. Rules (8) and (9) in Figure 9 reflect these equations: since \(\iota_{1}\), \(\iota_{2}\), and the case construct are interpreted by the injections and copairing of this coproduct, the corresponding reductions are justified by exactly these equalities of morphisms.
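Read operationally, these equalities are the familiar \(\beta\)-rules for sums. A minimal Python sketch of the same principle, with hypothetical helpers `inj1`, `inj2`, and `case_` standing in for \(\iota_{1}\), \(\iota_{2}\), and copairing:

```python
def inj1(v):
    return ("inl", v)  # left injection, tagged

def inj2(v):
    return ("inr", v)  # right injection, tagged

def case_(f, g, v):
    """Copairing [f, g]: deconstruct a tagged sum value."""
    tag, w = v
    return f(w) if tag == "inl" else g(w)

f = lambda n: n * 2
g = lambda s: len(s)

# [f, g] . inj1 = f  and  [f, g] . inj2 = g, checked pointwise:
assert case_(f, g, inj1(21)) == f(21)
assert case_(f, g, inj2("abc")) == g("abc")
```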
## 6 Related Work

but lacks support for defining nested data types. Zhang et al. [41] recently proposed a calculus and language for _compositional programming_, called CP. Their language design is inspired by _object algebras_, which in turn is based on the _tagless final_ approach [11, 25] and _final algebra semantics_ [38], which, according to Wand [38, §7], is an extension of _initial algebra semantics_. These lines of work thus provide similar modularity as initial algebra semantics, but in a way that does not require _tagged values_. While the categorical foundations of Zhang et al.'s CP language seem to be an open question, the language provides flexible support for modular programming, in part due to its powerful notion of subtyping. We are not aware of attempts to model (higher-order) effects and handlers using CP. In contrast, our calculus is designed to have a clear categorical semantics. This semantics makes it straightforward to define state-of-the-art type-safe modular (higher-order) effects and handlers. Morris and McKinna [29] define a language that has built-in support for _row types_, which supports both extensible records and variants. While their language captures many known flavors of extensibility, due to parameterizing the type system over a so-called _row theory_ describing how row types behave under composition, rows are restricted to first-order types. Consequently, they cannot describe any modularity that hinges on the composition of (higher-order) signature functors.
The question of including nested data types in a language's support for modularity has received some attention as well. For example, Cai et al. [10] develop an extension of \(F_{\omega}\) with equirecursive types tailored to describe patterns from datatype generic programming. Their calculus is expressive enough to capture the modularity abstractions discussed in this paper, including those requiring nested data types, but lacks a denotational model; a correspondence between a subset of types in their calculus and (traversable) functors is discussed informally. Similarly, Abel et al. [4] consider an operational perspective of traversals over nested datatypes by studying several extensions of \(F_{\omega}\) with primitives for _(generalized) Mendler iteration and coiteration_. Although these are expressive enough to describe modular higher-order effects and handlers, their semantic foundation is very different from the semantics of the primitive fold operation in our calculus. It is future work to investigate how our calculus can be extended with support for codata.
A major source of inspiration for the work in this paper are recent works by Johann and Polonsky [22], Johann et al. [21], and Johann and Ghiorzi [20], which respectively study the semantics and parametricity of nested data types and GADTs. For the latter, the authors develop a dedicated calculus with a design and semantics that is very similar to ours. Still, there are some subtle but key differences between the designs; for example, their calculus does not include general notions of \(\forall\)-types and function types, but rather integrates these into a single type representing natural transformations between type constructors. While their setup does not require the same stratification of the type syntax we adopt here, it is also slightly less expressive, as the built-in type of transformations is restricted to closing over 0-arity arguments.
_Data type generic programming_ commonly uses a _universe of descriptions_[6], which is a data type whose inhabitants correspond to a signature functor. Generic functions are commonly defined by induction over these descriptions, ranging over a semantic reflection of the input description in the type system of a dependently-typed host language [14]. In fact, Chapman et al. [12] considered the integration of descriptions in a language's design by developing a type theory with native support for generic programming. We are, however, not aware of any notion of descriptions that corresponds to our syntax of well-formed types.
## 7 Conclusion and Future work
In this paper, we presented the design and semantics of a calculus with support for modularity. We demonstrated that it can serve as a basis for capturing several well-known programming patterns for retrofitting type-safe modularity to functional languages, such as modular interpreters in the style of Data Types a la Carte, and modular (higher-order) algebraic effects. The formal semantics associates these patterns with their motivating concepts, creating the possibility for a compiler to benefit from their properties, for example by performing fusion-based optimizations.
#### Acknowledgements.
This research was partially funded by the NWO VENI Composable and Safe-by-Construction Programming Language Definitions project (VI.Veni.192.259).
|
2309.06359 | Using Reed-Muller Codes for Classification with Rejection and Recovery | When deploying classifiers in the real world, users expect them to respond to
inputs appropriately. However, traditional classifiers are not equipped to
handle inputs which lie far from the distribution they were trained on.
Malicious actors can exploit this defect by making adversarial perturbations
designed to cause the classifier to give an incorrect output.
Classification-with-rejection methods attempt to solve this problem by allowing
networks to refuse to classify an input in which they have low confidence. This
works well for strongly adversarial examples, but also leads to the rejection
of weakly perturbed images, which intuitively could be correctly classified. To
address these issues, we propose Reed-Muller Aggregation Networks (RMAggNet), a
classifier inspired by Reed-Muller error-correction codes which can correct and
reject inputs. This paper shows that RMAggNet can minimise incorrectness while
maintaining good correctness over multiple adversarial attacks at different
perturbation budgets by leveraging the ability to correct errors in the
classification process. This provides an alternative
classification-with-rejection method which can reduce the amount of additional
processing in situations where a small number of incorrect classifications are
permissible. | Daniel Fentham, David Parker, Mark Ryan | 2023-09-12T16:20:20Z | http://arxiv.org/abs/2309.06359v1 | # Using Reed-Muller Codes for
###### Abstract
When deploying classifiers in the real world, users expect them to respond to inputs appropriately. However, traditional classifiers are not equipped to handle inputs which lie far from the distribution they were trained on. Malicious actors can exploit this defect by making adversarial perturbations designed to cause the classifier to give an incorrect output. Classification-with-rejection methods attempt to solve this problem by allowing networks to refuse to classify an input in which they have low confidence. This works well for strongly adversarial examples, but also leads to the rejection of weakly perturbed images, which intuitively could be correctly classified. To address these issues, we propose Reed-Muller Aggregation Networks (RMAggNet), a classifier inspired by Reed-Muller error-correction codes which can correct and reject inputs. This paper shows that RMAggNet can minimise incorrectness while maintaining good correctness over multiple adversarial attacks at different perturbation budgets by leveraging the ability to correct errors in the classification process. This provides an alternative classification-with-rejection method which can reduce the amount of additional processing in situations where a small number of incorrect classifications are permissible.
Keywords: Deep Neural Networks · Adversarial Examples · Classification-with-rejection · Error-correction codes · ML Security
## 1 Introduction
Deep Neural Networks (DNNs) have shown incredible performance in numerous classification tasks, including image classification [1], medical diagnosis [2] and malware detection [3]. However, a fundamental shortcoming is that they pass judgement beyond their expertise. When presented with data outside of the distribution they were trained on, DNNs will attempt to classify that data by selecting from one of the finite labels available, often reporting high confidence in the classifications they have made. The most egregious examples of this occur when a DNN is presented with an input which is far outside of the domain it has been trained on (for example, presenting an image of a cat to a text classification
model), to which it will confidently assign a class. This behaviour is also present in adversarial examples, which were introduced in the seminal 2014 paper by Szegedy et al. [4], where an (almost) invisible perturbation pushes an input far from the training distribution, leading to a confident misclassification [5]. Since then, extensive research has been conducted exploring new, more sophisticated, attacks on networks of different architectures [6, 7, 8] and defences that attempt to mitigate their effectiveness [9, 10, 11]. This results in hesitation when applying DNN models to safety- and security-critical applications where there is a high cost of misclassification.
Classification-with-rejection (CWR) methods [11, 12, 13, 14] attempt to address this limitation by refusing to assign a label to an input when the confidence in the classification is low. In this paper, we present an approach to CWR that parallels ideas from Error-Correcting Output Codes (ECOCs) [11, 14], where an ensemble of networks performs classification by generating binary strings, and extends them with a reject option. ECOC methods have received little attention as a defence mechanism against adversarial attacks, even though the independence of the individual networks offers a natural defence. A notable property of adversarial attacks is that they are highly transferable between models, meaning an adversarial attack crafted for one network will deceive another with high probability, provided the networks perform a similar task [15]. Since ECOC methods promote diversity in the constituent networks' tasks, an adversarial attack crafted to change the output reported by one network is less likely to fool another in a way which would result in further misclassification. Moreover, due to the aggregated nature of the resulting classification, an adversary would need to create a perturbation which can fool multiple networks simultaneously, necessitating precise bit-flipping strategies which lead to a valid target class.
In this paper, we introduce Reed-Muller Aggregation Networks (RMAggNet), which apply error-correcting codes to the correction and rejection of classifications, ultimately producing a new kind of ECOC classifier. Similar to existing ECOCs, these consist of multiple DNNs, each performing a simple classification task which determines if an input belongs to a defined subset of the classes, resulting in a binary answer. The results of these networks are aggregated together into a binary string which we compare to class binary strings which represent each of the classes from the dataset. If the resulting binary string is the same as a class binary string, we return the associated label as a result; otherwise, we attempt to correct the result (if we have a small enough Hamming distance), or reject the result and refuse to classify the input. Thus, unlike existing CWR methods, our approach has the ability to both correct and reject inputs.
We evaluate the effectiveness of RMAggNet by comparing it to two other CWR approaches: an ensemble of networks (with a voting-based rejection process) and Confidence Calibrated Adversarial Training (CCAT) [16]. After performing tests using the EMNIST and CIFAR-10 datasets with open-box PGD \(L_{\infty}\) and PGD \(L_{2}\) adversarial attacks, we conclude that RMAggNet can greatly reduce the amount of rejected inputs in certain circumstances, making it a viable alternative to methods such as CCAT if some incorrectness is acceptable.
In summary, this paper makes the following contributions:
* We introduce RMAggNet, a novel ECOC classification method which leverages the power of Reed-Muller codes to create a classifier which can both correct and reject inputs (Section 3).1 Footnote 1: Code available at: [https://github.com/dfenth/RMAggNet](https://github.com/dfenth/RMAggNet)
* We show the effectiveness of RMAggNet on the MNIST, EMNIST and CIFAR-10 datasets with open-box and closed-box gradient and gradient-free adversarial attacks (Sections 4 and 5).
* We discuss the application of RMAggNet to classification tasks, providing guidance on when it may be a strong alternative to other CWR methods (Section 6).
## 2 Related work
### Error correcting output codes
Verma and Swami defined an error-correcting output code (ECOC) classification method which uses an ensemble of models, each trained to perform a subset of the classification [11]. Their model uses Hadamard matrices to construct binary codes, which are assigned to classes from the dataset. Multiple DNNs are then defined to generate a set amount of bits from each code for each class, essentially following a set membership classification approach. When an input is passed to the multiple networks, a vector of real numbers is generated, and the similarity between this vector and the class vectors is calculated, with the most similar vector being returned as the final classification.
The authors argue that this classification method has greater resilience to adversarial attacks than traditional ensemble methods due to the independence of the models, encouraged by the diverse classification tasks. This reduces the chance of multiple coordinated bit-flips occurring due to a single perturbation as a result of the transferability of adversarial attacks. Verma and Swami focus their attention on multi-bit outputs, where four networks produce a combined total of 16, 32, or 64 bits, encoding the input into 4, 8 or 16 bits, respectively. This approach to ECOC classification leads to similar networks being trained, where, in many cases, the entire set of classes is being used by all networks. This results in each network learning similar features, reducing independence and lowering resilience to transfer attacks.
Song et al. proposed a method which extends the work by Verma and Swami, introducing Error Correcting Neural Networks (ECNN) [14]. This paper improves ECOCs by increasing the number of networks to one per output bit and optimising the codeword matrix using simulated annealing [17], which encourages each classifier to learn unique features, enhancing robustness against direct and transfer adversarial attacks. However, in practice, the ECNN implementation trains a single network with each classifier having a unique top layer. This reduces the independence between networks, since each of them shares the same low-level features, which can be used by adversaries.
These approaches are similar to the method proposed in this paper; however, there are a few key differences. While Verma and Swami [11] and Song et al. [14] discuss the use of error correction, it is not actively utilised in the classification process. In addition, error correction provides a natural implementation of CWR where outputs which deviate significantly from existing classes can trigger a _reject_ option where the classifier refuses to return a result. This paper aims to address these gaps by exploring the application of error correction and classification-with-rejection approaches to ECOC methods. We hope to provide insights into the effectiveness, practicality and benefits of these strategies.
### Confidence Calibrated Adversarial Training
Many CWR methods have been proposed over the years [12, 13, 16]. We focus on the Confidence Calibrated Adversarial Training (CCAT) CWR method, which was introduced by Stutz et al. in 2020 [16]. CCAT attempts to produce a model which is robust to unseen threat models that use different \(L_{p}\) norms or larger perturbations when generating adversarial examples. CCAT achieves good rejection performance through adversarial training, where the model is trained to predict the classes of clean data with high confidence and to produce a uniform distribution for adversarial examples within an \(\epsilon\)-ball of the true image. This is extrapolated beyond the \(\epsilon\)-ball to account for larger perturbations. The label \(\tilde{y}\) is generated for adversarial training using the equation:
\[\tilde{y}=\lambda(\delta)\text{ one-hot}(y)+(1-\lambda(\delta))\frac{1}{|C|}\]
with adversarial perturbation \(\delta\) and a function \(\lambda\), where \(\lambda(\delta)\in[0,1]\) equals 0 for \(\|\delta\|_{\infty}\geq\epsilon\), leading to a uniform distribution, and 1 for \(\|\delta\|_{\infty}=0\). Here \(C\) is the set of all classes in the dataset, with \(|C|\) being its cardinality. This encourages the labels of adversarial examples in the training set to interpolate towards the uniform distribution, becoming more uniform as \(\delta\) increases. This approach to adversarial training leads to strong generalisation for large \(\delta\), which means that perturbed inputs to CCAT are reliably rejected. We produce results using CCAT to act as a comparison with our own method when comparing CWR performance.
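For illustration, a minimal NumPy sketch of this label construction; the linear \(\lambda\) used here is an assumption for demonstration purposes, as any \(\lambda(\delta)\in[0,1]\) with \(\lambda(0)=1\) and \(\lambda(\delta)=0\) for \(\|\delta\|_{\infty}\geq\epsilon\) fits the definition above:

```python
import numpy as np

def ccat_target(y, delta, eps, num_classes):
    """Soft CCAT training label: interpolate between the one-hot label
    and the uniform distribution as the perturbation grows."""
    lam = max(0.0, 1.0 - np.abs(delta).max() / eps)  # illustrative linear lambda
    one_hot = np.eye(num_classes)[y]
    uniform = np.full(num_classes, 1.0 / num_classes)
    return lam * one_hot + (1.0 - lam) * uniform

# Unperturbed input: exactly the one-hot label.
print(ccat_target(y=2, delta=np.zeros((3, 3)), eps=0.3, num_classes=4))
# Perturbation at the budget: exactly the uniform distribution.
print(ccat_target(y=2, delta=np.full((3, 3), 0.3), eps=0.3, num_classes=4))
```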
## 3 Reed-Muller Aggregation Networks (RMAggNet)
### Reed-Muller codes
We begin with some brief background on Reed-Muller codes, which are multi-error detecting and correcting codes [18, 19]. They extend earlier work on Hamming codes [20] and generalise many other error-correction methods.
Reed-Muller is often represented with the notation \([2^{m},k,2^{m-r}]_{q}\). We set \(q\), the number of elements in the finite field, to 2, meaning any codes we create will
be binary. In the low-degree polynomial interpretation, \(m\) denotes the number of variables and \(r\) denotes the highest degree of the polynomial both of which influence the properties of the Reed-Muller code. The first element of the tuple \((2^{m})\) represents the length of the codewords we will use. The second element \((k)\) represents the length of the message we can encode and is calculated as
\[k=\sum_{i=0}^{r}\binom{m}{i} \tag{1}\]
The final element \((2^{m-r})\) is the minimum Hamming distance between any two codes we generate, and influences the amount of error correction that can be applied. For simplicity, we set \(n=2^{m}\) and \(d=2^{m-r}\) condensing the notation to \([n,k,d]_{2}\).
A key advantage of Reed-Muller codes is that they allow us to unambiguously correct a number of bits equal to the Hamming bound \(t\):
\[t=\left\lfloor\frac{(d-1)}{2}\right\rfloor \tag{2}\]
due to the guaranteed Hamming distance between any two codewords. This can be thought of as an open Hamming sphere around each codeword with a radius of \(2^{m-r-1}\) which does not intersect any other sphere. For instance, a code with minimum distance \(d=8\) can unambiguously correct up to \(t=\lfloor(8-1)/2\rfloor=3\) flipped bits.
To build Reed-Muller codes with pre-determined Hamming distances, we start by selecting values for \(m\) and \(r\) which fit our use case, i.e., we can generate codewords of appropriate length with a desired amount of correction. Once we have chosen \(m\) and \(r\), we can calculate \(k\) (see equation 1) and we can define the low-degree polynomial, which will have \(k\) coefficients, \(m\) variables and a maximum degree of \(r\). This allows us to generate the codewords with a minimal Hamming distance of \(d\) between any two of the codes.
We specify the coefficients of the polynomial in \(k\) different ways where a single coefficient is set to one, and all others are zero. This gives us \(k\) polynomials with fixed coefficients and \(m\) free variables. We can then define the basis vectors of the space by instantiating every possible combination of variables for each of the fixed coefficient polynomials. This creates \(k\) codewords of length \(2^{m}\) all of which have a guaranteed Hamming distance of at least \(d\). We can have up to \(2^{k}\) valid codewords which satisfy the Hamming distance guarantee. These additional codewords can be generated by performing an XOR operation on all possible combinations of the basis vectors generating a closed set. We refer to these binary vectors as _codewords_.
For example, if we set \(m=3\) and \(r=2\), then we can calculate \(k=\binom{3}{0}+\binom{3}{1}+\binom{3}{2}=7\). This means we can generate a polynomial:
\[p(z_{1},z_{2},z_{3})=c_{0}+c_{1}z_{1}+c_{2}z_{2}+c_{3}z_{3}+c_{4}z_{1}z_{2}+c_ {5}z_{1}z_{3}+c_{6}z_{2}z_{3}\]
From this we can generate 7 polynomials with fixed coefficients:
\[\begin{array}{l}p_{1000000}(z_{1},z_{2},z_{3})=1\\ p_{0100000}(z_{1},z_{2},z_{3})=z_{1}\\ p_{0010000}(z_{1},z_{2},z_{3})=z_{2}\\ \vdots\\ p_{0000001}(z_{1},z_{2},z_{3})=z_{2}z_{3}\end{array}\]
Since \(m=3\) we can have 8 different combinations of variables which we can apply to each polynomial, therefore, we produce the basis vectors:
\[\begin{array}{l|cccccccc}&(000)&(001)&(010)&(011)&(100)&(101)&(110)&(111)\\ \hline p_{1000000}&1&1&1&1&1&1&1&1\\ p_{0100000}&0&0&0&0&1&1&1&1\\ p_{0010000}&0&0&1&1&0&0&1&1\\ p_{0001000}&0&1&0&1&0&1&0&1\\ p_{0000100}&0&0&0&0&0&0&1&1\\ p_{0000010}&0&0&0&0&0&1&0&1\\ p_{0000001}&0&0&0&1&0&0&0&1\\ \end{array}\]
Because \(d=2^{3-2}=2\), any two of these basis vectors should have a Hamming distance of at least 2, which is confirmed in the table above.
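This construction mechanises directly. The following is a short NumPy sketch (not the paper's implementation) that builds the basis by evaluating monomials, closes it under XOR, and checks the promised minimum distance for the example above and for the \([16,5,8]_{2}\) code used later:

```python
import itertools
import numpy as np

def reed_muller_codewords(m, r):
    """All 2^k codewords of RM(r, m): evaluate every monomial of degree
    <= r at each point of {0,1}^m to get the basis vectors, then close
    under XOR (all GF(2)-linear combinations of the basis rows)."""
    points = list(itertools.product([0, 1], repeat=m))
    monomials = [s for deg in range(r + 1)
                 for s in itertools.combinations(range(m), deg)]
    basis = np.array([[int(all(p[i] for i in s)) for p in points]
                      for s in monomials], dtype=int)
    coeffs = np.array(list(itertools.product([0, 1], repeat=len(monomials))))
    return coeffs @ basis % 2  # shape (2^k, 2^m)

for m, r in [(3, 2), (4, 1)]:
    codes = reed_muller_codewords(m, r)
    n, k = codes.shape[1], int(np.log2(codes.shape[0]))
    weights = codes.sum(axis=1)
    d = int(weights[weights > 0].min())  # linear code: minimum distance
    assert d == 2 ** (m - r)             #   equals the minimum nonzero weight
    t = (d - 1) // 2
    print(f"[{n}, {k}, {d}]_2: corrects up to t = {t} bit(s)")
```

Running this prints \([8,7,2]_{2}\) with \(t=0\) for the example above, and \([16,5,8]_{2}\) with \(t=3\) for the code used in the MNIST experiments.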
### Reed-Muller Aggregation Networks
We can now define a Reed-Muller Aggregation Network (RMAggNet) which uses multiple networks, trained on separate tasks, to create a binary vector which we can classify, correct, or reject. To create an RMAggNet we start by defining Reed-Muller codes which act as class codewords for the dataset classes we intend to recognise. To define appropriate Reed-Muller codes, we have to consider a number of factors related to the problem we are solving.
The first is the number of classes in the dataset (\(|C|\)). We must make sure that the message length \(k\) is adequate for the number of classes, such that, \(|C|\leq 2^{k}\). From the definition of \(k\) (equation 1) we can see that it depends on both \(m\) and \(r\), therefore these values are influenced by \(|C|\) and must be considered early on in the design process. The number of classes in the dataset is the primary point to consider when deciding on values for \(m\) and \(r\), because if we do not satisfy \(|C|\leq 2^{k}\) then we will not have an effective classifier.
The second factor we must consider is whether we have appropriate error correction for the problem. The maximum number of errors we can correct is represented by the Hamming bound \(t\) (equation 2), which depends on \(d=2^{m-r}\), so we also need to take this into account when deciding on \(m\) and \(r\).
The third factor is that we must have a low probability of assigning a valid codeword to a random noise image. A fundamental flaw with traditional DNNs is that they will assign a class to any input, even if the input is far from the distribution they have been trained on. The probability of assigning a random
noise image a valid class is \(|C|/2^{n}\) when we have no error correction; with error correction, however, the probability increases to \((|C|\cdot\sum_{i=0}^{t}\binom{n}{i})/2^{n}\). This means that it is advantageous to only use the amount of error correction that is necessary for the problem at hand, even if that means we are not correcting the maximum number of bits theoretically possible.
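For concreteness, a short sketch evaluating this probability for the code settings that appear in the experiments, assuming \(|C|=10\) classes:

```python
from math import comb

def p_random_valid(n, t, num_classes):
    """Probability that a uniformly random n-bit string falls inside the
    Hamming sphere of radius t around one of the class codewords."""
    return num_classes * sum(comb(n, i) for i in range(t + 1)) / 2 ** n

print(p_random_valid(16, 0, 10))  # [16,5,8]_2, no correction: ~1.53e-4
print(p_random_valid(16, 3, 10))  # [16,5,8]_2 at t = 3:       ~1.06e-1
print(p_random_valid(32, 0, 10))  # [32,6,16]_2, no correction: ~2.33e-9
```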
Once we have a set of codewords which fit the problem specification, the length of the codewords (\(n\)) determines the number of networks included in the aggregation. We can assign each network a unique index value corresponding to an index within the class codeword binary strings. The values at the index positions define a set partition which determines which classes a network is trained to return a 1 for and which it returns a 0 for (i.e., the network returns a 1 for all classes with a 1 at the index and 0 otherwise). We can also view the class codewords as a matrix, with each network assigned to a column with 1s and 0s indicating the partition between sets. By randomly shuffling the class codewords, each network is trained to recognise a set of approximately half of the classes.
With the classification task for each network defined, we can move on to training. The training process requires us to adjust the true labels of each input: each network receives its own set of binary labels for the training data, corresponding to its side of the set partition. Once the dataset has been adjusted, each network is trained independently.
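A minimal sketch of this relabelling step, where `codeword_matrix` (a hypothetical name) has one row per class and one column per network:

```python
import numpy as np

def set_membership_labels(class_labels, codeword_matrix, net_index):
    """Binary training labels for network `net_index`: 1 iff a sample's
    class has a 1 in that network's column of the codeword matrix."""
    return codeword_matrix[class_labels, net_index]

# Toy 4-class example with 8-bit codewords, i.e. 8 networks:
codeword_matrix = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                            [0, 1, 1, 0, 1, 0, 0, 1],
                            [1, 1, 0, 0, 0, 1, 1, 0],
                            [0, 0, 0, 1, 1, 1, 0, 1]])
y = np.array([0, 2, 3, 1, 0])  # original class labels of five samples
print(set_membership_labels(y, codeword_matrix, net_index=2))  # [1 0 0 1 1]
```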
During inference we pass the same input to the \(n\) networks which produces \(n\) real values \(\mathbf{v}\in\mathbb{R}^{n}\). We select a threshold value \(\tau\) which acts as a bias, with a large \(\tau\) leading to codewords consisting of more 0 bits. We compare each of the \(n\) real values of \(\mathbf{v}\) to \(\tau\) with the following rule:
\[v_{i}=\begin{cases}1&\text{if }v_{i}\geq\tau\\ 0&\text{otherwise}\end{cases}\]
This produces a binary string which we can compare to the class codewords. If any of the class codewords match the predicted binary string exactly, we can return the label associated with it as the result; however, if none match, we calculate the Hamming distance between the prediction and the class codewords. If we find a Hamming distance less than or equal to \(t\), then we can unambiguously correct to, and return, that class codeword due to the properties of Reed-Muller codes. Otherwise, we refuse to classify the input and return a rejection.
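Putting the pieces together, inference reduces to a few lines. A sketch, assuming `codewords` is the class codeword matrix and `outputs` holds the \(n\) real-valued network responses:

```python
import numpy as np

REJECT = -1

def rmaggnet_classify(outputs, codewords, t, tau=0.5):
    """Threshold the n network outputs into a binary string, then return
    an exact match, correct to the unique codeword within Hamming
    distance t, or reject."""
    bits = (np.asarray(outputs) >= tau).astype(int)
    dists = (codewords != bits).sum(axis=1)  # Hamming distance per class
    best = int(dists.argmin())
    return best if dists[best] <= t else REJECT

# Toy 2-class setup with d = 8, so up to t = 3 errors are correctable:
codewords = np.array([[0] * 8, [1] * 8])
print(rmaggnet_classify([.9, .8, .2, .7, .9, .6, .95, .8], codewords, t=3))
# -> 1 (a single disagreeing bit is corrected)
print(rmaggnet_classify([.9, .8, .2, .3, .1, .6, .95, .1], codewords, t=3))
# -> -1 (four disagreements with both codewords: reject)
```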
## 4 Evaluation methodology
### Threat model
We begin by establishing the threat model under which we expect the RMAggNet defence to operate effectively as per the recommendations made by Carlini et al. [21]. Our assumptions consider an adversary who knows the purpose of the model (i.e., the classes the model can output) and is capable of providing the model with inputs. The actions of the adversary are constrained by a limited perturbation cost, where the \(L_{p}\)-norm between the original (\(x\)) and perturbed
(\(\tilde{x}\)) image must be below some threshold \(\epsilon\), i.e., \(\|x-\tilde{x}\|_{p}\leq\epsilon\), where \(p\) is either 2 or \(\infty\) depending on the attack used. The norm used for an attack changes the focus of the adversarial perturbation. The \(L_{2}\) attacks encourage small changes across all input dimensions, which distributes the perturbation across multiple features, whereas the \(L_{\infty}\) attack encourages perturbations which focus on a single feature, maximising this as much as the budget (\(\epsilon\)) allows. The ultimate aim of the adversary is to generate perturbed images which are not noticeable to a time-constrained human.
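The budget constraint itself is simple to state in code; a small sketch, not tied to any attack library, showing how the two norms treat the same single-pixel perturbation:

```python
import numpy as np

def within_budget(x, x_adv, eps, p):
    """Check the threat-model constraint ||x - x_adv||_p <= eps."""
    delta = (x_adv - x).ravel()
    norm = np.abs(delta).max() if p == np.inf else np.linalg.norm(delta, ord=p)
    return norm <= eps

x = np.zeros((28, 28))
x_adv = x.copy()
x_adv[0, 0] = 0.3  # one strongly perturbed pixel
print(within_budget(x, x_adv, eps=0.3, p=np.inf))  # True: within L_inf budget
print(within_budget(x, x_adv, eps=0.1, p=2))       # False: exceeds L2 budget
```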
The level of access to the model granted to the adversary depends on the specific adversarial attack being employed. In the case of open-box attacks, the adversary has full access to the target model and can generate adversarial perturbations tailored to deceive that particular network. This represents the worst-case scenario. On the other hand, closed-box attacks represent a more realistic setting where the adversary does not have access to the model parameters. In this case it is necessary to train a surrogate model which performs the same classification task as the target model. Adversarial examples are then generated for this surrogate model, which leverage transferability to create adversarial examples for the target model. Both of these attacks will be used in the following experiments.
We employ a range of attacks to generate adversarial examples, including gradient-based attacks such as Projected Gradient Descent (PGD) [22], both in the \(L_{2}\)- and \(L_{\infty}\)-norm, and the gradient-free Boundary attack in the \(L_{2}\)-norm [23]. We also use these adversarial generation methods for the transfer attacks to demonstrate the robustness of these approaches in a closed-box setting. The use of both gradient and gradient-free attacks allows us to more thoroughly evaluate the robustness of the models. While gradient-based attacks tend to produce stronger adversarial examples, they can fail to produce effective perturbations if the target model performs any kind of gradient masking. To ensure that we have a fair and reliable evaluation of the robustness, we include gradient-free attacks to eliminate the possibility that any results are solely due to masking the model gradients.
### Comparison methods
To evaluate the classification and rejection ability of RMAggNet we have implemented two other comparison methods.
The first is a traditional ensemble method which consists of \(n\) networks (where \(n\) is the same number of networks used for RMAggNet), each of which is trained to perform the full classification task, as opposed to RMAggNet where each network is trained to perform set membership over two partitions. To aggregate the results from the multiple networks, we have set up a simple voting system with an associated threshold (\(\sigma\)). When an input is passed to the ensemble, each network classifies the data, producing a predicted class. If the percentage of networks that agree on a single class exceeds the threshold, that class is returned as the most likely answer; otherwise, the input is rejected and no class is returned.
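A sketch of this voting rule, where `preds` holds one predicted class per network:

```python
import numpy as np

REJECT = -1

def ensemble_vote(preds, sigma):
    """Return the majority class if the fraction of agreeing networks
    exceeds the threshold sigma, otherwise reject."""
    preds = np.asarray(preds)
    classes, counts = np.unique(preds, return_counts=True)
    top = counts.argmax()
    return int(classes[top]) if counts[top] / len(preds) > sigma else REJECT

print(ensemble_vote([3, 3, 3, 3, 7, 3, 3, 1], sigma=0.7))  # 6/8 agree -> 3
print(ensemble_vote([3, 3, 1, 7, 7, 3, 2, 1], sigma=0.7))  # no majority -> -1
```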
The second is the CCAT method (see Section 2.2) which uses adversarial training as per the original paper [16] using the original code which is slightly modified 1. The adversarial training process allows CCAT to reject adversarial inputs within an \(\epsilon\)-ball by learning to return a uniform distribution over all of the classes. This is then extrapolated beyond the \(\epsilon\)-ball to larger perturbations. A threshold (\(\tau\)) is specified which represents the confidence bound that must be exceeded so that the result is not rejected. Unlike the original paper where an optimal \(\tau\) is calculated based on performance on the clean dataset, we vary \(\tau\) to determine the effect on the rejection ability.
Footnote 1: Available: [https://github.com/davidstutz/confidence-calibrated-adversarial-training](https://github.com/davidstutz/confidence-calibrated-adversarial-training)
### Datasets
We use multiple datasets to evaluate the effectiveness of RMAggNet on a variety of classification tasks. We focus on the MNIST, EMNIST (balanced) [24] and CIFAR-10 datasets. The MNIST dataset provides us with the simplest classification task with grey-scale images of size \(28\times 28\) of 10 possible classes. This is extended by EMNIST which consists of 131,600 grey-scale images of size \(28\times 28\), with 47 balanced classes including handwritten digits, upper- and lower-case letters (with some lower case classes excluded). Since we have 47 classes, the number of networks in RMAggNet is expanded to 32, with the same amount used for the Ensemble method. CIFAR-10 represents a more challenging classification task, increasing the image complexity with full colour images of size \(32\times 32\) over 10 possible classes which uses 16 networks for RMAggNet and Ensemble. We also use two extra datasets for out-of-distribution tests which are Fashion MNIST (FMNIST) [25], which consists of \(28\times 28\) images of articles of clothing from 10 classes, and a dataset of uniform random noise images.
### Generation of adversarial examples
To generate adversarial images from the selected datasets we use the FoolBox library [26, 27]. We generate adversarial images using PGD \(L_{2}\), PGD \(L_{\infty}\) and Boundary \(L_{2}\) attacks. Due to the complex nature of some of the networks, adjustments needed to be made to generate adversarial examples.
**RMAggNet**: Due to the non-differentiable nature of RMAggNet from thresholding, direct attacks are difficult to generate. Following approaches such as BPDA [28] we implement a hybrid RMAggNet which replaces the final mapping from a binary string to class (or reject) with a Neural Network. This allows us to backpropagate through the entire model to produce effective adversarial examples.
**Ensemble**: We implement an ensemble via logits method [29] where the result of each network is weighted. Due to the voting system for the rejection we set equal weights over all networks. This approach allows us to have an ensemble method which mimics the voting output, except it is differentiable, therefore we can generate adversarial examples using the multiple networks directly.
**CCAT**: Since CCAT is a standard Network which has undergone specific adversarial training, the generation of adversarial attacks is simple. Many attacks in the FoolBox library are able to generate adversarial examples without any modification of the network.
## 5 Results
### Reed-Muller hyperparameters
Our first experiment explores the use of Reed-Muller hyperparameters. Reed-Muller codes provides us with fine-grained control when it comes to the codeword space. Increasing the number of variables (\(m\)) gives us access to a larger space at the cost of an increased number of networks (\(2^{m}\)), whereas increasing the degree of the polynomials (\(r\)) allows us to use more of this space at the cost of error correction and an increase in the probability an out-of-distribution input will be randomly assigned a valid class. To explore the effects these parameters have on the classification ability of RMAggNet, both for in- and out-of-distribution data we set up a number of experiments which vary the \(m\) and \(r\), effectively altering the number of networks used for the aggregated classification, and the amount of error correction we allow.
The results of these experiments can be seen in Tables 1a, 1b and 1c. This data is generated by testing three different versions of RMAggNet: (\(m=4\), \(r=1\)) expressed as \([16,5,8]_{2}\), (\(m=4\), \(r=2\)) expressed as \([16,11,4]_{2}\), and (\(m=5\), \(r=1\)) expressed as \([32,6,16]_{2}\), with potential error correction of \(3\), \(1\) and \(7\) bits respectively. The tests are performed over three datasets, a clean MNIST dataset, a dataset consisting of random uniform noise (Noise), and Fashion MNIST (FMNIST) [25]. The last two act as out-of-distribution datasets, where the Noise dataset has no semantic meaning, and FMNIST has semantic meaning but should still be considered far from the distribution the models were trained on (and therefore should be rejected). For all experiments going forward we set the threshold to \(\tau=0.5\) when generating codewords so that there is no bias towards a particular bit.
On the clean dataset \([16,5,8]_{2}\) and \([16,11,4]_{2}\) have very similar performance at \(EC=1\), but \([16,5,8]_{2}\), with the larger amount of error correction, can achieve better accuracy. Increasing accuracy through error correction intuitively makes sense, since we are effectively expanding the Hamming sphere around each class codeword, including more codewords which can be considered valid. However, by expanding the Hamming sphere, we also increase the space of codewords which can be assigned a class incorrectly. From \([32,6,16]_{2}\) we can clearly see that earlier \(EC\) values transform far more rejections into correct classifications; however, this trend diminishes as we allow more correction. For example, if we focus on different \(EC\) values in \([32,6,16]_{2}\), increasing the amount of \(EC\) from \(0\) to \(1\) changes the performance on MNIST (Clean) from \((85.29,14.68,0.03)\) to \((96.58,3.38,0.04)\), which means that by correcting just one bit we see changes of \((+11.29,-11.30,+0.01)\), where a majority of the rejections are transferred to correct classifications. However, as we continue
increasing the \(EC\), the conversion rate from rejections to correct classifications decreases. As we move from \(EC=6\) to \(EC=7\) the difference in correctness, rejections and incorrectness becomes \((+0.21,-0.33,+0.12)\) where approximately one-third of the rejected inputs are transferred to an incorrect classification.
Overall \([32,6,16]_{2}\) achieves better accuracy at its maximum \(EC\) than \([16,5,8]_{2}\) with only a slight increase in incorrect classifications. This is likely due to the aggregation of information from more networks, which is a common occurrence in ensemble based methods. However, on the clean dataset at \(EC=0\), the difference in the number of incorrect classifications is negligible, while the number of correct classifications for \([16,5,8]_{2}\) is much higher. It is worth noting that we may observe the law of diminishing returns here as a doubling in the number of networks used for aggregation (\(16\) for \([16,5,8]_{2}\) and \(32\) for \([32,6,16]_{2}\)) leads to a \(0.21\%\) increase in accuracy with a \(0.1\%\) increase in the number of incorrect classifications. This strongly indicates that the \([16,5,8]_{2}\) model has an ideal balance between classification ability and computational cost.
On the out-of-distribution datasets (Noise and FMNIST), all models reliably reject the noise data better than FMNIST which is to be expected since FMNIST has a semantic structure which could share features with data the model is trained on. \([32,6,16]_{2}\) has excellent performance on both out-of-distribution datasets at \(EC=0\), far above \([16,5,8]_{2}\) and \([16,11,4]_{2}\). This is likely due to the probability of a random input being assigned a class codeword which is much lower for \([32,6,16]_{2}\) at \(2.33\times 10^{-9}\) compared to \(1.53\times 10^{-4}\) for both \([16,5,8]_{2}\) and \([16,11,4]_{2}\).
### MNIST Dataset
Using the hyperparameters found in the previous section (Section 5.1) we construct a RMAggNet model for the MNIST dataset with \(m=4\), \(r=1\), giving us 16 networks with 3 bits of error correction (\([16,5,8]_{2}\)). We define an equal number of ensemble networks for parity. While the \([32,6,16]_{2}\) network is able to convincingly outperform it on out-of-distribution data, it has negligible improvement in correctness on the clean dataset and requires twice the number of networks compared to \([16,5,8]_{2}\); therefore, to limit computational cost, we use 16 networks. The network architecture is shown in Table 2.
Table 3 reports the results on the clean MNIST dataset where we aim to maximise correctness. All three methods report similar correctness with CCAT performing slightly better. If we compare equivalent correctness over all methods (\(EC=3\) for RMAggNet, \(\sigma=0.4\) for Ensemble and \(\tau=0.4\) for CCAT) we can see that RMAggNet is able to report the lowest incorrectness value. This indicates that RMAggNet is able to use the rejection option more effectively to reduce the number of incorrect classifications being made.
Table 1: Results for RMAggNet over multiple values of \(m\), \(r\). Three datasets are used, MNIST, Noise and FMNIST, where Noise and FMNIST show the performance on out-of-distribution data which is semantically random and structured, respectively. EC denotes the amount of error correction. For MNIST (Clean), higher correctness is better; for Noise and FMNIST, higher rejection is better.

Tables 4 and 5 show the results of all architectures applied to out-of-distribution datasets, with the aim of maximising rejection. Both sets of results show that all architectures are able to reject a large majority of the out-of-distribution data at their most conservative settings (Ensemble \(\sigma=1.0\) and RMAggNet \(EC=0\)). CCAT is the highest performing model, rejecting nearly \(100\%\) of the data at
nearly all thresholds, excluding \(\tau=0\) and \(\tau=0.1\), where the CCAT confidence threshold is effectively ignored. RMAggNet shows slightly better performance than Ensemble in Table 4; however, Ensemble outperforms RMAggNet on the more challenging Fashion MNIST dataset (Table 5) which, unlike the uniform noise dataset, contains low-level features which can be shared with the MNIST dataset. This could impact RMAggNet's performance more than Ensemble's because Ensemble performs full classification, therefore later layers in the network will contain high-level features which are able to distinguish between the classes it is trained on at a higher level of abstraction. However, RMAggNet only has to perform set membership checks on the inputs, therefore lower-level features may be learned by the networks, meaning that the lower-level features present in FMNIST may have a greater influence over RMAggNet as it attempts to generalise.
Table 3: Percentages of correct, rejected, and incorrect classifications for the clean MNIST dataset. Both Ensemble and RMAggNet models consist of 16 networks. Higher correctness is better.

**CCAT**

| \(\tau\) | Correct | Rejected | Incorrect |
| --- | --- | --- | --- |
| 0 | 99.49 | 0.00 | 0.51 |
| 0.10 | 99.49 | 0.00 | 0.51 |
| 0.20 | 99.46 | 0.04 | 0.50 |
| 0.30 | 99.43 | 0.10 | 0.47 |
| 0.40 | 99.38 | 0.17 | 0.45 |
| 0.50 | 99.28 | 0.32 | 0.40 |
| 0.60 | 99.16 | 0.50 | 0.34 |
| 0.70 | 99.01 | 0.68 | 0.31 |
| 0.80 | 98.87 | 0.89 | 0.24 |
| 0.90 | 98.41 | 1.41 | 0.18 |
| 1.0 | 47.41 | 52.59 | 0.00 |

**Ensemble**

| \(\sigma\) | Correct | Rejected | Incorrect |
| --- | --- | --- | --- |
| 0 | 99.70 | 3.24 | 0.06 |
| 1 | 98.38 | 1.50 | 0.12 |
| 2 | 99.02 | 0.78 | 0.20 |
| 3 | 99.34 | 0.27 | 0.39 |
Table 4: Percentages of rejected and incorrect classifications on a dataset consisting of uniform random noise images for all models trained on the MNIST dataset. Lower incorrectness is better.

**CCAT**

| \(\tau\) | Rejected | Incorrect |
| --- | --- | --- |
| 0 | 0.00 | 100.00 |
| 0.10 | 0.00 | 100.00 |
| 0.20 | 100.00 | 0.00 |
| 0.30 | 100.00 | 0.00 |
| 0.40 | 100.00 | 0.00 |
| 0.50 | 100.00 | 0.00 |
| 0.60 | 100.00 | 0.00 |
| 0.70 | 100.00 | 0.00 |
| 0.80 | 100.00 | 0.00 |
| 0.90 | 100.00 | 0.00 |
| 1.0 | 100.00 | 0.00 |

**Ensemble**

| \(\sigma\) | Rejected | Incorrect |
| --- | --- | --- |
| 0 | 0.00 | 100.00 |
| 0.10 | 0.00 | 100.00 |
| 0.20 | 0.00 | 100.00 |
| 0.30 | 0.01 | 99.99 |
| 0.40 | 8.25 | 91.75 |
| 0.50 | 41.61 | 58.39 |
| 0.60 | 60.34 | 39.66 |
| 0.70 | 84.32 | 15.68 |
| 0.80 | 90.71 | 9.29 |
| 0.90 | 97.64 | 2.36 |
| 1.0 | 99.78 | 0.22 |

**RMAggNet \([16,5,8]_{2}\)**

| EC | Rejected | Incorrect |
| --- | --- | --- |
| 0 | 99.88 | 0.12 |
| 1 | 98.58 | 1.42 |
| 2 | 91.81 | 8.19 |
| 3 | 67.14 | 32.86 |
\begin{tabular}{|c|c|c|} \hline
**CCAT** & \multicolumn{1}{c|}{**Ensemble**} \\ \hline \(\tau\) & **Rejected** & **Incorrect** \\ \hline
0 & 0.00 & **100.00** \\
0.10 & 0.00 & **100.00** \\
0.20 & 99.02 & **0.98** \\
0.30 & 99.50 & **0.50** \\
0.40 & 99.68 & **0.32** \\
0.50 & 99.77 & **0.23** \\
0.60 & 99.85 & **0.15** \\
0.70 & 99.90 & **0.10** \\
0.80 & 99.91 & **0.09** \\
1.0 & 100.00 & **0.00** \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline
**CCAT** & \multicolumn{1}{c|}{**Ensemble**} \\ \hline \(\tau\) & **Rejected** & **Incorrect** \\ \hline
0 & 0.00 & **100.00** \\
0.10 & 0.00 & **100.00** \\
0.20 & 0.00 & **100.00** \\
0.30 & 0.01 & **99.99** \\
0.40 & 8.25 & **91.75** \\
0.50 & 41.61 & **58.39** \\
0.60 & 60.34 & **39.66** \\
0.70 & 84.32 & **15.68** \\
0.80 & 90.71 & **92.29** \\
0.90 & 97.64 & **2.36** \\
1.0 & 99.78 & **0.22** \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline
**RMAggNet**[16, 5, 8]\({}_{2}\) \\ \hline
**EC** & **Rejected** & **Incorrect** \\ \hline
0 & 99.88 & **0.12** \\
1 & 98.58 & **1.42** \\
2 & 91.81 & **8.19** \\
3 & 67.14 & **32.86** \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline
**RMAggNet**[16, 5, 8]\({}_{2}\) \\ \hline
**EC** & **Rejected** & **Incorrect** \\ \hline
0 & 99.88 & **0.12** \\
1 & 98.58 & **1.42** \\
2 & 91.81 & **8.19** \\
3 & 67.14 & **32.86** \\ \hline \end{tabular}
\end{table}
Table 5: Percentages of rejected and incorrect classifications on the Fashion MNIST dataset for all models trained on the MNIST dataset. Lower incorrectness is better.
Table 6b shows the performance of the PGD \(L_{\infty}\) transfer attack on the CCAT, Ensemble, and RMAggNet methods, generated from a surrogate model which shares the same architecture as a single model from the Ensemble method. We use the performance of the surrogate model (table 6a) as a guide to determine the appropriate perturbation budget. In this setting we aim to minimise incorrectness. The CCAT results show that at \(\tau>0.1\) we achieve \(0\%\) incorrectness with all adversarial inputs being rejected at all \(\epsilon\), even though at lower \(\epsilon\), \(91\%\) of inputs could still be correctly classified. RMAggNet has higher incorrectness than CCAT, but shows strong classification ability for \(\epsilon\in\{0.05,0.10\}\) with a similar correctness to Ensemble at \(\epsilon=0.05\), with more than half the number of incorrect classifications. This becomes even more pronounced at \(\epsilon=0.10\) where RMAggNet performs over \(10\%\) better with approximately one-third the number of incorrect classifications, which can be reduced even further while still achieving a higher correctness. However, at \(\epsilon=0.30\) both Ensemble and RMAggNet have high incorrectness compared to CCAT, increasing to a minimum of \(92.50\%\) and \(43.50\%\) respectively.
The results of the PGD \(L_{2}\) transfer attack are in table 7b. We, again, aim to minimise incorrectness. We see that CCAT rejects nearly all of the inputs over all values of \(\epsilon\), leading to \(0\) incorrect classifications. This is in contrast to both Ensemble and RMAggNet, which are able to classify many of the inputs correctly at low \(\epsilon\) values (\(0.50\) and \(1.00\)), with RMAggNet achieving approximately half the number of incorrect classifications, and even reducing to \(0\) for \(\epsilon=0.50\). At higher \(\epsilon\) we see a decrease in classification ability, with both methods reporting high incorrectness values. However, RMAggNet is still able to approximately halve the number of incorrect classifications compared to Ensemble.
Table 8 shows the results of open-box PGD \(L_{\infty}\) adversarial attacks, where the adversarial perturbations are generated using the models themselves rather than a surrogate. Details on how these adversarial images were generated can be found in Section 4.4. The aim here is to minimise incorrectness. These attacks are slightly more effective than the transfer attacks. They follow a similar pattern to the transfer attacks, with RMAggNet allowing adversarial examples to be corrected, leading to higher correctness at low values of \(\epsilon\), which becomes less effective at higher \(\epsilon\). CCAT is able to achieve \(0\%\) incorrectness by rejecting all adversarial examples at \(\tau>0\). This is an advantage at \(\epsilon=0.30\), since both Ensemble and RMAggNet have a high number of incorrect classifications, with RMAggNet having a minimum incorrectness of \(41\%\) compared to the Ensemble method at \(94\%\). However, at the lower perturbation values RMAggNet is able to correctly classify a majority of the adversarial examples with fairly small amounts of incorrect classifications, whereas CCAT rejects all inputs. RMAggNet also outperforms the Ensemble method at these low perturbation values, with similar correctness at \(\epsilon=0.05\) but much lower incorrectness, and a much higher correctness at \(\epsilon=0.10\) with significantly lower incorrectness.
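Section 4.4 specifies the exact attack configuration, which is not reproduced here. As a rough illustration of the procedure, the following is a minimal PyTorch sketch of a generic PGD \(L_{\infty}\) attack; the step size and iteration count are assumed values rather than the settings used in the experiments.

```python
# Illustrative sketch of a generic PGD L-infinity attack; `alpha` and
# `steps` are assumed values, not the configuration of Section 4.4.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, alpha=0.01, steps=40):
    """Maximise the classification loss within an L-inf ball of radius
    eps around x, keeping pixel values in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to ball
        x_adv = x_adv.clamp(0, 1)                              # valid pixels
    return x_adv.detach()
```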
The PGD \(L_{2}\) open-box attack (table 9) mirrors results from the closed-box transfer attack (table 7b). CCAT is able to achieve the lowest incorrectness results at \(0\%\) for all \(\tau>0\) over all \(\epsilon\). However, these low incorrectness values are due to
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 99.10 \\
0.05 & 93.90 \\
0.10 & 63.90 \\
0.30 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 6: Results for the transfer attacks using PGD \(L_{\infty}\). Table 6a shows the accuracy of the surrogate model on the PGD \(L_{\infty}\) adversarial datasets. Table 6b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 99.10 \\
0.50 & 97.00 \\
1.00 & 87.00 \\
3.00 & 7.40 \\ \hline \end{tabular}
\end{table}
Table 7: Results for the transfer attacks using PGD \(L_{2}\). Table 7a shows the accuracy of the surrogate model on the PGD \(L_{2}\) adversarial datasets. Table 7b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
rejection in situations where some of the inputs could be correctly classified. Comparing Ensemble and RMAggNet, RMAggNet is able to achieve consistently lower incorrectness scores, with equal or higher correctness than Ensemble. At \(\epsilon=3.0\), the incorrectness of both Ensemble and RMAggNet increases considerably, showing that CCAT is the preferable model if we expect strong adversaries to be present.
Our final attack on the MNIST dataset is the Boundary (\(L_{2}\)) attack. The results are in tables 10b and 11, where we look to minimise incorrectness. Looking
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{**PGD(\(L_{\infty}\))**} \\ \hline \multicolumn{10}{|c|}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 44.80 & 0.00 & **55.20** & 0.10 & 16.60 & 0.00 & **83.40** & 0.3 & 0.30 & 0.00 & **99.70** \\
0.30 & & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
0.70 & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
1.00 & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\ \hline \multicolumn{10}{|c|}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 94.80 & 0.00 & **5.20** & 0.10 & 76.90 & 0.00 & **23.10** & 0.3 & 0.00 & 0.00 & **100.00** \\
0.30 & & 94.80 & 0.00 & **5.20** & & 76.90 & 0.00 & **23.10** & & 0.00 & 0.00 & **100.00** \\
0.70 & & 92.60 & 3.10 & **4.30** & & 72.50 & 7.20 & **20.30** & & 0.00 & 0.50 & **99.50** \\
1.00 & & 85.60 & 12.90 & **1.50** & & 50.60 & 39.80 & **9.60** & & 0.00 & 5.50 & **94.50** \\ \hline \multicolumn{10}{|c|}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.05 & 79.00 & 20.40 & **0.60** & 0.10 & 41.40 & 56.40 & **2.20** & 0.3 & 0.00 & 58.90 & **41.10** \\
1 & & 90.40 & 8.30 & **1.30** & & 74.80 & 21.40 & **3.80** & & 0.10 & 35.50 & **64.40** \\
2 & & 94.00 & 4.00 & **2.00** & & 84.50 & 10.50 & **5.00** & & 0.70 & 20.50 & **78.80** \\
3 & & 95.70 & 1.50 & **2.80** & & 89.10 & 3.60 & **7.30** & & 2.10 & 9.00 & **88.90** \\ \hline \end{tabular}
\end{table}
Table 8: Results for open-box attacks using PGD \(L_{\infty}\) on the MNIST dataset at different perturbation budgets (\(\epsilon\)). Lower incorrectness is better.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{**PGD(\(L_{2}\))**} \\ \hline \multicolumn{10}{|c|}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.50 & 75.60 & 0.00 & **24.40** & 1.00 & 42.90 & 0.00 & **57.10** & 3.00 & 3.70 & 0.00 & **96.30** \\
0.30 & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
0.70 & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
1.00 & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\ \hline \multicolumn{10}{|c|}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 94.80 & 0.00 & **5.20** & 0.10 & 76.90 & 0.00 & **23.10** & 0.3 & 0.00 & 0.00 & **100.00** \\
0.30 & & 94.80 & 0.00 & **5.20** & & 76.90 & 0.00 & **23.10** & & 0.00 & 0.00 & **100.00** \\
0.70 & & 92.60 & 3.10 & **4.30** & & 72.50 & 7.20 & **20.30** & & 0.00 & 0.50 & **99.50** \\
1.00 & & 85.60 & 12.90 & **1.50** & & 50.60 & 39.80 & **9.60** & & 0.00 & 5.50 & **94.50** \\ \hline \multicolumn{10}{|c|}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.05 & 79.00 & 20.40 & **0.60** & 0.10 & 41.40 & 56.40 & **2.20** & 0.3 & 0.00 & 58.90 & **41.10** \\
1 & & 90.40 & 8.30 & **1.30** & & 74.80 & 21.40 & **3.80** & & 0.10 & 35.50 & **64.40** \\
2 & & 94.00 & 4.00 & **2.00** & & 84.50 & 10.50 & **5.00** & & 0.70 & 20.50 & **78.80** \\
3 & & 95.70 & 1.50 & **2.80** & & 89.10 & 3.60 & **7.30** & & 2.10 & 9.00 & **88.90** \\ \hline \end{tabular}
\end{table}
Table 9: Results for open-box attacks using PGD \(L_{2}\) on the MNIST dataset at different perturbation budgets (\(\epsilon\)). Lower incorrectness is better.
at the transfer attack in table 10b, we see that CCAT performs worse than in the previous attacks, with a rejection rate between 98% and 100%, only achieving the 100% mark at the strictest confidence threshold. The results on the Ensemble and RMAggNet methods are similar to the previous adversarial attacks, where the percentage of correctly classified inputs is similar for low \(\epsilon\), with RMAggNet able to achieve approximately half the number of incorrect classifications (dropping to 0 at \(\epsilon=1.50\) for \(EC\leq 2\)). However, at higher \(\epsilon\) we see both Ensemble and RMAggNet retain their high correctness and a low number of incorrect classifications, with RMAggNet reporting 0% incorrectness in many cases. This can also be seen in CCAT, as the low thresholds (\(\tau=0\)) show over 85% correctness. The effect of the Boundary attack on the surrogate model (table 10a) indicates that the adversarial examples produced for it focus on making subtle changes to features which are unique to the surrogate model and do not generalise to the CWR methods.
Table 11 shows the power of the Boundary (\(L_{2}\)) attack in an open-box setting. Both Ensemble and RMAggNet show reduced correctness over all \(\epsilon\), which becomes more pronounced as \(\epsilon\) increases. These adversarial examples appear to affect RMAggNet to a greater degree, leading to a 10-30% reduction in correctness compared to Ensemble. However, RMAggNet is able to achieve a significantly lower number of incorrect classifications than Ensemble, leveraging the reject option effectively to reduce the incorrectness to 0% at \(EC\leq 1\) for all \(\epsilon\). It is worth noting that no open-box attacks could be carried out on CCAT, since no initial adversarial examples could be found by the Boundary method operating under the same conditions as Ensemble and RMAggNet.
Images corresponding to the PGD \(L_{\infty}\) attacks can be found in figure 1, with PGD \(L_{2}\) in figure 2 and Boundary in figure 3.
### EMNIST Dataset
Results for the EMNIST dataset use RMAggNet with \(m=5\), \(r=1\) which gives us 32 networks with 7 bits of error correction (EC). We also use 32 networks for the Ensemble method for parity. All methods use ResNet-18 models [30]. Table 12 shows the results on the clean EMNIST dataset where we expect to maximise correctness. All three models perform similarly, with Ensemble achieving the highest correctness, closely followed by RMAggNet and CCAT. However, all models come close to state-of-the-art performance (91.06%) [31], with minimal negative impacts from the adversarial defence.
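The mapping from \((m,r)\) to the number of member networks and the error correction budget follows the standard Reed-Muller code parameters. A small sketch, assuming the usual \(RM(r,m)\) formulas, reproduces the configurations quoted here and in Section 5.4:

```python
from math import comb

def rm_params(r, m):
    """Standard RM(r, m) code parameters: codeword length n (one member
    network per bit), message bits k (so up to 2**k distinct classes),
    minimum distance d, and correctable bit errors EC = (d - 1) // 2."""
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))
    d = 2 ** (m - r)
    return n, k, d, (d - 1) // 2

print(rm_params(1, 5))  # (32, 6, 16, 7): 32 networks, EC = 7 (EMNIST)
print(rm_params(1, 4))  # (16, 5, 8, 3): 16 networks, EC = 3 (MNIST, CIFAR-10)
```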
The results for the PGD \(L_{\infty}\) transfer attacks are in table 13b, where we aim to minimise incorrectness, either through rejection or correct classifications. The CCAT model is able to reject all adversarial inputs for \(\tau>0\); however, the \(\tau=0\) results provide us with some interesting insights. Setting \(\tau=0\) effectively removes the confidence threshold used to reject inputs, and the resulting behaviour indicates that the CCAT model is strongly affected by the adversarial examples generated by the surrogate model. If we turn our attention to the Ensemble and RMAggNet methods, we see that RMAggNet is able to achieve a higher correctness than Ensemble, with the difference becoming larger as \(\epsilon\) increases. In addition, RMAggNet is able to
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 99.10 \\
1.50 & 62.00 \\
2.00 & 28.30 \\
3.00 & 0.90 \\ \hline \end{tabular}
\end{table}
Table 10: Results for the transfer attacks using Boundary (\(L_{2}\)). Table 10a shows the accuracy of the surrogate model on the Boundary (\(L_{2}\)) adversarial datasets. Table 10b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
Figure 3: Adversarial images generated using the Boundary \(L_{2}\) attack on the MNIST dataset across all models. Each architecture shows four rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{1.5,2.0,3.0\}\). Note that CCAT does not have any adversarial images due to the Boundary method not being able to find any initial successful perturbations close enough to the classification boundary.
Figure 2: Adversarial images generated using the PGD \(L_{2}\) attack on the MNIST dataset across all models. Each architecture shows three rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{0.5,1.0,3.0\}\).
achieve a similar minimum incorrectness with slightly higher correctness over all values of \(\epsilon\).
The results of the PGD \(L_{2}\) transfer attacks are in table 15b. This shows that CCAT is, once again, able to reject all adversarial data for \(\tau>0\), resulting in 0 incorrect classifications across all \(\epsilon\) values. We can also see that RMAggNet is able to achieve similar correctness to Ensemble for \(\epsilon\in\{0.30,1.00\}\), with a lower number of incorrect classifications at these high correctness values. For \(\epsilon=3.00\), RMAggNet shows notably higher correctness with significantly lower incorrectness, as it is able to leverage the reject option at high \(EC\) values. If we focus on minimising incorrectness instead, we can see that Ensemble is able to perform slightly better across all \(\epsilon\); however, RMAggNet is able to achieve higher correctness, which increases with larger \(\epsilon\).
The performance on adversarial datasets generated with open-box attacks is shown in tables 14 and 16. For both of these experiments we aim to minimise incorrectness, either by correctly classifying or rejecting the data. Correctly classifying the data is preferred since it reduces the reliance on downstream rejection handling.
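For clarity, the three percentages reported throughout the tables can be computed as in the following sketch, where a rejected input is encoded as `None` (an assumed convention for illustration, not the paper's implementation):

```python
def cwr_metrics(preds, labels):
    """Correct / rejected / incorrect percentages for a classification-
    with-rejection model; preds holds class indices or None (reject)."""
    n = len(labels)
    rejected = sum(p is None for p in preds)
    correct = sum(p == y for p, y in zip(preds, labels) if p is not None)
    incorrect = n - correct - rejected
    return 100 * correct / n, 100 * rejected / n, 100 * incorrect / n
```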
Table 14 shows the results of the PGD \(L_{\infty}\) attack at varying perturbation budgets (\(\epsilon\)). The CCAT results demonstrate strong performance with 0% incorrectness for \(\tau>0\) for all \(\epsilon\). At \(\tau=0\) we effectively disable the confidence threshold of CCAT and see 100% incorrectness as it is trained to return a uniform distribution for adversarial inputs. It is worth noting that the 0% incorrectness of CCAT is achieved through the rejection of all of the inputs, even at lower \(\epsilon\) where both Ensemble and RMAggNet show that correct classifications can be recovered. This points towards a disadvantage of the conservative nature of CCAT. In situations where we want the option to reject, but can tolerate some incorrectness, CCAT often becomes ineffective for classification. Comparing Ensemble and RMAggNet, RMAggNet can achieve significantly lower incorrectness over all \(\epsilon\), translating the incorrectness into correct or rejected classifications depending on the amount of EC. This leads to RMAggNet being able to achieve higher correctness than both Ensemble and CCAT.
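The role of \(\tau\) can be made explicit with a minimal confidence-threshold sketch, a generic rule consistent with the behaviour described above rather than CCAT's actual training procedure: an input is rejected whenever the top softmax probability falls below \(\tau\), so \(\tau=0\) never rejects while \(\tau\to 1\) rejects almost everything.

```python
import torch

def confidence_reject(logits, tau):
    """Classify a single input, or return None (reject) when the top
    softmax probability falls below the confidence threshold tau."""
    probs = torch.softmax(logits, dim=-1)
    conf, cls = probs.max(dim=-1)
    return int(cls) if float(conf) >= tau else None
```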
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{**CCAT**} & \multicolumn{3}{|c|}{**Ensemble**} \\ \hline \(\tau\) & **Correct** & **Rejected** & **Incorrect** & \(\sigma\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & **88.68** & 0.00 & 11.32 & 0 & **89.78** & 0.00 & 10.22 \\
0.10 & **88.60** & 0.15 & 11.24 & 0.10 & **89.78** & 0.00 & 10.22 \\
0.20 & **87.54** & 2.41 & 10.05 & 0.20 & **89.78** & 0.01 & 10.21 \\
0.30 & **85.46** & 6.45 & 8.09 & 0.30 & **89.76** & 0.05 & 10.19 \\
0.40 & **83.16** & 10.29 & 6.55 & 0.40 & **89.70** & 0.28 & 10.03 \\
0.50 & **80.70** & 14.22 & 5.08 & 0.50 & **89.20** & 1.41 & 9.39 \\
0.60 & **77.95** & 18.03 & 4.02 & 0.60 & **87.76** & 4.45 & 7.79 \\
0.70 & **74.23** & 22.71 & 3.06 & 0.70 & **85.94** & 7.68 & 6.38 \\
0.80 & **69.48** & 28.46 & 2.06 & 0.80 & **83.89** & 11.05 & 5.06 \\
0.90 & **60.74** & 38.05 & 1.20 & 0.90 & **80.60** & 15.60 & 3.80 \\
1.0 & **0.00** & 100.00 & 0.00 & 1.0 & **68.34** & 30.15 & 1.51 \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{**RMAggNet** \([32,6,16]_{2}\)} \\ \hline
**EC** & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & **70.02** & 27.72 & 2.26 \\
1 & **77.19** & 19.59 & 3.23 \\
2 & **80.85** & 15.15 & 4.00 \\
3 & **83.29** & 12.03 & 4.68 \\
4 & **84.99** & 9.49 & 5.52 \\
5 & **86.54** & 6.98 & 6.48 \\
6 & **87.93** & 4.59 & 7.48 \\
7 & **89.16** & 2.14 & 8.70 \\ \hline \end{tabular}
\end{table}
Table 12: Results for the clean EMNIST dataset showing the percentage of classifications that are correct, rejected and incorrect. Bold text indicates the metric of interest. Higher correctness is better.
The results of the PGD \(L_{2}\) attacks are in table 16. The results are similar to those in table 14, with CCAT reducing incorrectness to 0% through rejection alone. RMAggNet outperforms Ensemble in both maximum correctness and minimum incorrectness for \(\epsilon=0.30\) and \(\epsilon=1.0\). However, Ensemble can achieve higher correctness for \(\epsilon=3.0\) at the cost of incorrectness, which remains significantly higher than RMAggNet's.
From these results, we can conclude that RMAggNet provides more flexibility where small amounts of incorrectness are tolerable. The error correction process allows us to make trade-offs between maximising correctness and minimising incorrectness, with RMAggNet outperforming Ensemble in both of these metrics. RMAggNet comes close to CCAT in minimising incorrectness, with the added advantage that, for small \(\epsilon\), we can recover and correctly classify many of the inputs, reducing pressure on downstream rejection handling.
A selection of adversarial images generated using PGD \(L_{\infty}\) and \(L_{2}\) on the EMNIST dataset can be found in figures 4 and 5.
### CIFAR-10 Dataset
Results for the CIFAR-10 dataset use RMAggNet with \(m=4\), \(r=1\) which gives us 16 networks with 3 bits of error correction. We use 16 networks in the Ensemble method for parity. All networks use an architecture outlined in table 17.
Table 18 shows the results on the clean CIFAR-10 dataset where we aim to maximise correctness. Ensemble reports the highest correctness, followed by
Figure 4: Adversarial images generated using the PGD \(L_{\infty}\) attack on the EMNIST dataset across all models. Each architecture shows three rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{0.05,0.1,0.3\}\).
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 87.93 \\
0.05 & 2.20 \\
0.10 & 0.00 \\
0.30 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 13: Results for the transfer attacks using PGD \(L_{\infty}\). Table 13a shows the accuracy of the surrogate model on the PGD \(L_{\infty}\) adversarial datasets. Table 13b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{11}{|c|}{**PGD(\(L_{\infty}\))**} \\ \hline \multicolumn{11}{|c|}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 0.00 & 0.00 & **100.00** & 0.10 & 0.00 & 0.00 & **100.00** & 0.30 & 0.00 & 0.00 & **100.00** \\
0.30 & & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
0.70 & 0.00 & 100.00 & **0.00** & 0.00 & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\
0.90 & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** & & 0.00 & 100.00 & **0.00** \\ \hline \multicolumn{11}{|c|}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 71.70 & 0.00 & **28.30** & 0.10 & 29.40 & 0.00 & **70.60** & 0.30 & 0.00 & 0.00 & **100.00** \\
0.30 & & 71.70 & 0.00 & **28.30** & & 29.40 & 0.00 & **70.60** & & 0.00 & 0.00 & **100.00** \\
0.70 & & 64.10 & 15.20 & **20.70** & & 20.80 & 22.60 & **56.60** & & 0.00 & 0.00 & **100.00** \\
1.00 & & 31.40 & 60.10 & **8.50** & & 3.20 & 73.70 & **23.10** & & 0.00 & 4.60 & **95.40** \\ \hline \multicolumn{11}{|c|}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.05 & 28.50 & 67.10 & **4.40** & 0.10 & 2.30 & 92.00 & **5.70** & 0.30 & 0.00 & 100.00 & **0.00** \\
1 & & 54.30 & 39.60 & **6.10** & & 16.80 & 72.40 & **10.80** & & 0.00 & 98.50 & **1.50** \\
2 & & 63.20 & 29.10 & **7.70** & & 29.30 & 56.70 & **14.00** & & 0.00 & 94.40 & **5.60** \\
3 & & 68.90 & 22.20 & **8.90** & & 41.40 & 41.50 & **17.10** & & 0.10 & 87.90 & **12.00** \\
4 & & 71.90 & 17.90 & **10.20** & & 48.20 & 31.50 & **20.30** & & 0.10 & 77.80 & **22.10** \\
5 & & 75.00 & 13.70 & **11.30** & & 54.10 & 23.40 & **22.50** & & 0.30 & 68.30 & **31.40** \\
6 & & 78.00 & 8.50 & **13.50** & & 59.30 & 15.60 & **25.10** & & 0.40 & 54.00 & **45.60** \\
7 & & 80.70 & 3.60 & **15.70** & & 62.90 & 8.70 & **28.40** & & 0.50 & 39.50 & **60.00** \\ \hline \end{tabular}
\end{table}
Table 14: Results of PGD \(L_{\infty}\) adversaries generated using open-box attacks on EMNIST images with percentages of correct, rejected and incorrect classifications. Lower incorrectness is better.
Figure 5: Adversarial images generated using the PGD \(L_{2}\) attack on the EMNIST dataset across all models. Each architecture shows three rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{0.30,1.0,3.0\}\).
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 87.93 \\
0.30 & 55.10 \\
1.00 & 1.20 \\
3.00 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 15: Results for the transfer attacks using PGD \(L_{2}\). Table 15a shows the accuracy of the surrogate model on the PGD \(L_{2}\) adversarial datasets. Table 15b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
RMAggNet, then CCAT. All models report correctness within 3% of one another; we can therefore conclude that all are equally capable in terms of classification ability.
The \(L_{\infty}\) transfer attacks on the CIFAR-10 models can be seen in table 19, where we aim to reduce incorrectness. The results show a strong adversarial attack, with low correctness from Ensemble and RMAggNet even at low \(\epsilon\). The CCAT model is able to reliably reject all adversarial inputs, leading to 0% incorrectness for all \(\epsilon\) at all non-zero confidence thresholds. The Ensemble method classifies significantly more inputs correctly than RMAggNet for all \(\epsilon\) we tested; however, even these represent a small number of correct classifications. This result shows that CCAT is an effective method for avoiding adversarial results when strong attacks are used.
The results of the transfer attacks using the PGD \(L_{2}\) adversarial method are in table 21. The CCAT method shows some attempt to classify the adversarial images at low \(\epsilon\), and it is able to achieve low incorrectness at high \(\tau\). However, the lowest incorrectness CCAT reaches at \(\epsilon=0.30\) is 0.6% (apart from \(\tau=1\), which rejects all inputs). Comparing this to the closest equivalent from RMAggNet (\(EC=0\)), we see that RMAggNet achieves a correctness score approximately 20% higher with similar incorrectness, indicating that CCAT relies on rejection to reduce the incorrectness score whereas RMAggNet attempts to correct the input to the true class. At the lower \(\epsilon\) values, Ensemble and RMAggNet achieve similar correctness and incorrectness values; however, Ensemble outperforms RMAggNet as \(\epsilon\) increases.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{10}{c}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.30 & 0.20 & 0.00 & **99.80** & 1.0 & 0.00 & 0.00 & **100.00** & 3.0 & 0.00 & 0.00 & **100.00** \\
0.30 & & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** \\
0.70 & & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** \\
0.90 & & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** & 0.00 & 100.00 & **0.00** \\ \hline \multicolumn{10}{c}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.30 & 84.80 & 0.00 & **15.20** & 1.0 & 62.40 & 0.00 & **37.60** & 3.0 & 19.60 & 0.00 & **80.40** \\
0.30 & & 84.80 & 0.00 & **15.20** & 62.40 & 0.00 & **37.60** & 19.60 & 0.00 & **80.40** \\
0.70 & & 79.50 & 9.10 & **11.40** & 57.10 & 13.00 & **29.90** & 19.50 & 0.20 & **80.30** \\
1.00 & & 60.60 & 35.90 & **3.50** & 47.40 & 39.80 & **12.80** & 18.50 & 9.80 & **71.70** \\ \hline \multicolumn{10}{c}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.30 & 56.00 & 41.30 & **2.70** & 1.0 & 15.10 & 78.60 & **6.30** & 3.0 & 0.00 & 94.60 & **5.40** \\
1 & & 68.00 & 28.20 & **3.80** & 39.10 & 51.50 & **9.40** & 0.70 & 86.10 & **13.20** \\
2 & & 74.10 & 20.90 & **5.00** & 50.30 & 38.10 & **11.60** & 1.00 & 75.60 & **23.40** \\
3 & & 77.70 & 16.40 & **5.90** & 57.40 & 29.30 & **13.30** & 2.00 & 65.00 & **33.00** \\
4 & & 80.70 & 12.40 & **6.90** & 61.70 & 23.40 & **14.90** & 3.10 & 55.60 & **41.30** \\
5 & & 82.70 & 9.10 & **8.20** & 65.20 & 17.40 & **17.40** & 5.00 & 45.60 & **49.40** \\
6 & & 84.40 & 6.20 & **9.40** & 67.60 & 12.20 & **20.20** & 6.40 & 37.60 & **56.00** \\
7 & & 86.00 & 3.10 & **10.90** & 70.40 & 6.60 & **23.00** & 9.20 & 25.70 & **65.10** \\ \hline \end{tabular}
\end{table}
Table 16: Results of PGD \(L_{2}\) adversaries generated using open-box attacks on EMNIST images with percentages of correct, rejected and incorrect classifications. Lower incorrectness is better.
\begin{table}
\begin{tabular}{r|l}
**Layer** & **Parameters** \\ \hline Conv2D & channels = 32, kernel size = \(5\times 5\), padding = 2 \\ BatchNorm2D & \\ ReLU & \\ MaxPool2D & pool size = \(2\times 2\) \\ Dropout & \(p=0.4\) \\ Conv2D & channels = 64, kernel size = \(5\times 5\), padding = 2 \\ BatchNorm2D & \\ ReLU & \\ Conv2D & channels = 64, kernel size = \(5\times 5\), padding = 2 \\ BatchNorm2D & \\ ReLU & \\ MaxPool2D & pool size = \(2\times 2\) \\ Dropout & \(p=0.5\) \\ Conv2D & channels = 128, kernel size = \(5\times 5\), padding = 2 \\ BatchNorm2D & \\ ReLU & \\ MaxPool2D & pool size = \(2\times 2\) \\ Dropout & \(p=0.6\) \\ Flatten & \\ Linear & in = 2048, out = 512 \\ ReLU & \\ Dropout & \(p=0.7\) \\ Linear & in = 512, out = 256 \\ ReLU & \\ Dropout & \(p=0.8\) \\ Linear & in = 256, out = \(c\) \\ \end{tabular}
\end{table}
Table 17: Model architecture for classifying CIFAR-10 data. The parameter \(c\) sets the output size for each system, i.e., \(c=1\) for RMAggNet and \(c=10\) for Ensemble and CCAT.
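For reference, the architecture of Table 17 can be sketched in PyTorch as follows, assuming 3-channel \(32\times 32\) CIFAR-10 inputs; three \(2\times 2\) max-pools then leave a \(128\times 4\times 4\) feature map, matching the 2048 input features of the first linear layer.

```python
import torch.nn as nn

def make_cifar10_net(c):
    """Sketch of the Table 17 model; c = 1 for an RMAggNet member
    network, c = 10 for Ensemble and CCAT."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 5, padding=2), nn.BatchNorm2d(32), nn.ReLU(),
        nn.MaxPool2d(2), nn.Dropout(0.4),
        nn.Conv2d(32, 64, 5, padding=2), nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, 64, 5, padding=2), nn.BatchNorm2d(64), nn.ReLU(),
        nn.MaxPool2d(2), nn.Dropout(0.5),
        nn.Conv2d(64, 128, 5, padding=2), nn.BatchNorm2d(128), nn.ReLU(),
        nn.MaxPool2d(2), nn.Dropout(0.6),
        nn.Flatten(),
        nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.7),
        nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.8),
        nn.Linear(256, c),
    )
```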
The performance on the CIFAR-10 dataset under open-box attacks is shown in tables 20 and 22. We, again, aim to minimise incorrectness.
The PGD \(L_{\infty}\) results in table 20 show low correctness for both Ensemble and RMAggNet for all \(\epsilon\), indicating that this is a strong adversarial attack, one that reduces the inputs to unrecognisable images at \(\epsilon=0.3\). With this in mind, it is better to compare the methods focusing on incorrectness and rejection performance. Ensemble struggles to reject inputs, leading to nearly 100% incorrectness across all \(\epsilon\), which indicates that the adversarial examples are able to fool the multiple networks that form this method. RMAggNet shows slightly better performance, with much higher rejection for \(EC=0\). CCAT achieves the lowest incorrectness scores, significantly lower than both other methods across all \(\epsilon\). This indicates that when we expect strong adversaries with little chance of recovery, CCAT is the best-performing model.
Table 22 shows the results for PGD \(L_{2}\) on CIFAR-10. Interestingly, RMAggNet can outperform both Ensemble and CCAT at \(\epsilon=0.30\) with a lower incorrectness and higher correctness than both methods. For \(\epsilon=0.75\) and \(\epsilon=2.5\) CCAT can report the lowest incorrectness by a significant margin. Over all \(\epsilon\) values RMAggNet reports lower incorrectness than Ensemble, achieving higher correctness at \(\epsilon=\{0.30,0.75\}\).
These results show the effect that strong adversaries have on the classification ability of these models. In this circumstance, CCAT is the better model, rejecting most adversaries, while Ensemble struggles to reject the inputs, and RMAggNet has varying performance when attempting to correct the images. However, this is a worst-case scenario.
A sample of the adversarial images produced from these attacks can be seen in figures 6 and 7.
Figure 6: Adversarial images generated using the PGD \(L_{\infty}\) attack on the CIFAR-10 dataset across all models. Each architecture shows three rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{0.05,0.1,0.3\}\).
Figure 7: Adversarial images generated using the PGD \(L_{2}\) attack on the CIFAR-10 dataset across all models. Each architecture shows three rows of images, where each row shows the adversarial images generated at a different \(\epsilon\in\{0.30,0.75,2.5\}\).
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 81.21 \\
0.05 & 0.10 \\
0.10 & 0.10 \\
0.30 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 19: Results for the transfer attacks using PGD \(L_{\infty}\). Table 19a shows the accuracy of the surrogate model on the PGD \(L_{\infty}\) adversarial datasets. Table 19b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
## 6 Discussion

We start by discussing the hyperparameters of RMAggNet from Section 5.1. The selection of hyperparameters depends on the problem, the most important aspect being the number of classes in the dataset. If a dataset has \(|C|\) classes, then we require at least \(\lceil\log_{2}|C|\rceil\) networks to generate a unique encoding for each class. This works well for datasets such as MNIST or CIFAR-10, with ten classes each, which require at least four networks; however, this approach is less suitable for datasets with a small number of classes. With few classes we can only produce \(\binom{|C|}{s}\) unique networks (where \(s\) is the number of classes we allow in the partitioned set), which limits the number of networks we can use and increases the probability that a random noise input will be assigned a valid class, weakening the adversarial defence.
Due to the nature of Reed-Muller codes, the number of networks must be a power of 2; the optimal number is therefore often the smallest power of 2 that exceeds the number of classes. Adding networks beyond this power of 2 for error correction is a costly task with quickly diminishing benefit (see table 1c), so it should be done with caution.
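Both counting arguments are straightforward to check; a short sketch under the stated definitions:

```python
from math import ceil, comb, log2

def min_networks(num_classes):
    """Fewest networks giving every class a unique binary encoding."""
    return ceil(log2(num_classes))

def unique_networks(num_classes, s):
    """Distinct set-membership networks when each network is trained
    to accept a subset of s classes."""
    return comb(num_classes, s)

print(min_networks(10))       # 4: ten-class datasets need at least 4 networks
print(unique_networks(4, 2))  # 6: a 4-class dataset allows only 6 such networks
```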
The comparisons between RMAggNet, Ensemble and CCAT over the MNIST, EMNIST and CIFAR-10 datasets on clean and adversarial inputs allow us to place RMAggNet in context with the other methods. The results for these tests are in sections 5.2, 5.3, and 5.4.
Results on the clean testing data (tables 3, 12 and 18) show that RMAggNet is able to train models which are competitive with the other architectures, equalling Ensemble for some datasets. This result shows that the RMAggNet method has minimal impact on clean dataset performance.
Tables 4 and 5 show that RMAggNet is able to reject out-of-distribution data well, and is able to refuse to classify data which traditional neural networks
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{12}{|c|}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 9.00 & 0.00 & **91.00** & 0.10 & 7.10 & 0.00 & **92.90** & 0.30 & 5.40 & 0.00 & **94.60** \\
0.30 & & 0.00 & 99.00 & **1.00** & & 0.00 & 97.50 & **2.50** & & 0.20 & 83.00 & **16.80** \\
0.70 & & 0.00 & 99.50 & **0.50** & & 0.00 & 98.50 & **1.50** & & 0.00 & 86.40 & **13.60** \\
0.90 & & 0.00 & 99.70 & **0.30** & & 0.00 & 98.80 & **1.20** & & 0.00 & 89.50 & **10.50** \\ \hline \multicolumn{12}{|c|}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.05 & 1.10 & 0.00 & **98.90** & 0.10 & 0.00 & 0.00 & **100.00** & 0.30 & 0.00 & 0.00 & **100.00** \\
0.30 & & 0.90 & 0.00 & **99.10** & & 0.00 & 0.00 & **100.00** & & 0.00 & 0.00 & **100.00** \\
0.70 & & 0.50 & 2.70 & **96.80** & & 0.00 & 0.10 & **99.90** & & 0.00 & 0.00 & **100.00** \\
1.00 & & 0.20 & 14.80 & **85.00** & & 0.00 & 1.60 & **98.40** & & 0.00 & 0.10 & **99.90** \\ \hline \multicolumn{12}{|c|}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.05 & 9.50 & 65.20 & **25.30** & 0.10 & 3.10 & 67.60 & **29.30** & 0.30 & 0.00 & 82.30 & **17.70** \\
1 & & 12.80 & 43.70 & **43.50** & & 4.00 & 44.90 & **51.10** & & 0.00 & 50.80 & **49.20** \\
2 & & 15.60 & 27.20 & **57.20** & & 5.00 & 24.10 & **70.90** & & 0.20 & 26.20 & **73.60** \\
3 & & 18.50 & 12.90 & **68.60** & & 6.40 & 9.50 & **84.10** & & 0.20 & 11.10 & **88.70** \\ \hline \end{tabular}
\end{table}
Table 20: Results of PGD \(L_{\infty}\) adversaries generated using open-box attacks on CIFAR-10 images with percentages of correct, rejected and incorrect classifications. Lower incorrectness is better.
\begin{table}
\begin{tabular}{|c|c|} \hline \(\epsilon\) & **Accuracy (\%)** \\ \hline
0.00 & 81.21 \\
0.30 & 40.30 \\
0.75 & 4.30 \\
2.50 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 21: Results for the transfer attacks using PGD \(L_{2}\). Table 21a shows the accuracy of the surrogate model on the PGD \(L_{2}\) adversarial datasets. Table 21b shows the results of the adversarial datasets on the CCAT, Ensemble and RMAggNet models.
would be forced to classify. We also see that it performs better on the uniform noise dataset than on FMNIST. This is an expected result, since FMNIST represents a semantically coherent dataset which is likely to share low-level features with the MNIST dataset the model was trained on. The error correction ability of RMAggNet does impact the percentage of correctly rejected inputs, with more error correction leading to a higher number of incorrect classifications. This presents the challenge of rejection with error correction. If we consider RMAggNet, the result on the clean dataset (table 3) shows that the best version uses \(EC=3\), whereas in an out-of-distribution setting (e.g. table 4) a high \(EC\) value leads to high incorrectness. A balance must be struck between the desired correctness, incorrectness, and rejection, which is often dependent on the situation a model will be deployed in.
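The correctness/rejection trade-off controlled by \(EC\) stems from the aggregation step, which can be viewed as nearest-codeword decoding with a reject option. The following is a simplified sketch of that view, assuming each class owns a row of a \(|C|\times n\) codeword matrix and `bits` holds the \(n\) thresholded network outputs; it is not the paper's exact implementation.

```python
import numpy as np

def rm_decode(bits, codewords, ec):
    """Return the class whose codeword is nearest in Hamming distance,
    or None (reject) when even the nearest codeword differs in more
    than ec bits; larger ec trades rejections for classifications."""
    dists = np.sum(codewords != bits, axis=1)  # Hamming distance per class
    best = int(np.argmin(dists))
    return best if dists[best] <= ec else None
```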
Across the adversarial tests CCAT is able to reject the most adversarial inputs leading to nearly 0 incorrect classifications being made. However, CCAT does not attempt correction of any inputs which leads to nearly 0 correct classifications at most confidence thresholds. This even occurs when both Ensemble and RMAggNet recover over 90% of the labels from an adversarial attack (see table 8). This approach to rejection means that CCAT is ideal for situations where any uncertainty in the correctness of a result cannot be tolerated. However, if we can allow some incorrectness, and want the option to reject, then Ensemble and RMAggNet allow us to classify many inputs correctly, which greatly reduces the reliance on downstream rejection handling, at the risk of a small amount of incorrectness. If we compare the Ensemble method with RMAggNet, over many of the datasets RMAggNet is able to outperform Ensemble with a slightly higher (or equal) number of correct classifications, and lower incorrectness for comparable correctness as it uses the reject option more effectively. This becomes
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{**CCAT**} \\ \hline \(\tau\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.30 & 29.20 & 0.00 & **70.80** & 0.75 & 12.50 & 0.00 & **87.50** & 2.5 & 10.20 & 0.00 & **89.80** \\
0.30 & & 0.50 & 92.10 & **7.40** & & 0.00 & 97.50 & **2.50** & & 0.00 & 99.50 & **0.50** \\
0.70 & & 0.00 & 96.00 & **4.00** & & 0.00 & 99.20 & **0.80** & & 0.00 & 99.80 & **0.20** \\
0.90 & & 0.00 & 97.40 & **2.60** & & 0.00 & 99.70 & **0.30** & & 0.00 & 99.80 & **0.20** \\ \hline \multicolumn{10}{|c|}{**Ensemble**} \\ \hline \(\sigma\) & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0.00 & 0.30 & 54.30 & 0.00 & **45.70** & 0.75 & 26.70 & 0.00 & **73.30** & 2.5 & 13.40 & 0.00 & **86.60** \\
0.30 & & 53.30 & 0.00 & **46.70** & & 26.50 & 0.00 & **73.50** & & 13.30 & 0.00 & **86.70** \\
0.70 & & 42.00 & 28.60 & **29.40** & & 23.30 & 12.90 & **63.80** & & 13.00 & 0.90 & **86.10** \\
1.00 & & 23.10 & 70.20 & **6.70** & & 16.20 & 54.70 & **29.10** & & 12.20 & 6.30 & **81.50** \\ \hline \multicolumn{10}{|c|}{**RMAggNet**} \\ \hline EC & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** & \(\epsilon\) & **Correct** & **Rejected** & **Incorrect** \\ \hline
0 & 0.30 & 28.70 & 69.10 & **2.20** & 0.75 & 19.40 & 71.70 & **8.90** & 2.5 & 5.60 & 63.00 & **31.40** \\
1 & & 41.20 & 52.10 & **6.70** & & 25.60 & 54.90 & **19.50** & & 8.20 & 41.80 & **50.00** \\
2 & & 52.40 & 36.30 & **11.30** & & 32.90 & 37.90 & **29.20** & & 9.60 & 23.20 & **67.20** \\
3 & & 64.00 & 15.30 & **20.70** & & 39.80 & 17.00 & **43.20** & & 11.10 & 10.80 & **78.10** \\ \hline \end{tabular}
\end{table}
Table 22: Results of PGD \(L_{2}\) adversaries generated using open-box attacks on CIFAR-10 images with percentages of correct, rejected and incorrect classifications. Lower incorrectness is better.
more pronounced at higher \(\epsilon\). The application of RMAggNet to datasets with many more classes, such as ImageNet, would be interesting future work. Since we require \(|C|\leq 2^{k}\) (section 3.2), the additional classes can be accommodated either by adding more networks (increasing \(m\)) or by increasing the polynomial degree \(r\), which decreases the error correction ability and increases the probability of assigning random noise a class. Striking this balance would lead to interesting results regarding the applicability of RMAggNet to larger datasets.
## 7 Conclusion
In this paper we have seen how an architecture inspired by Reed-Muller codes can be used to create an effective CWR method which (to our knowledge) is the first approach combining rejection and correction in this way. The experimental results show the advantages of RMAggNet and allow us to determine situations where it can be beneficial to a system. Comparing the results of RMAggNet to CCAT shows that CCAT is able to reject nearly 100% of the adversarial images over all attacks and datasets we tested. However, this comes at the cost of rejecting inputs that could otherwise be classified correctly. The sensitive approach of CCAT could be detrimental to a system where the rejected inputs still need to be processed, either by more computationally expensive processes or by human review. RMAggNet is able to achieve low incorrectness, often with higher correctness, meaning that, provided the system can accept small amounts of incorrectness, we can reduce the reliance on any downstream rejection handling. From the results, we can see that this can be a significant improvement for small perturbations on the MNIST/EMNIST datasets. Comparing RMAggNet to Ensemble, the results show that RMAggNet appears to be more resilient to adversarial attacks, particularly open-box attacks. This means that we can expect RMAggNet to be a better choice in situations where we expect adversaries to be present.
|
2309.17254 | Line-of-sight gas radiation effects on near-infrared two-color ratio
pyrometry measurements during plasma wind tunnel experiments | Two-color ratio pyrometry is commonly used to measure the surface temperature
of aerospace materials during plasma wind tunnel experiments. However, the
effect of the plasma radiation on the measurement accuracy is often neglected.
In this paper we formulate a model of the instrument response to analyze the
systematic error induced by the gas radiation along the optical path. CFD
simulations of the plasma flow field, together with a radiation code, allow to
compute the gas spectral radiance within the instrument wavelength range. The
measurement error is numerically assessed as a function of the true object
temperature and emittance value. Our simulations explain the typical behavior
observed in experiments, showing that a significant bias can affect the
measured temperature during the material heating phase. For an actual
experiment on a ceramic-matrix composite, a correction to the measured data is
proposed, while comparative measurements with a spectrometer corroborate the
results. | Andrea Fagnani, Bernd Helber, Annick Hubin, Olivier Chazot | 2023-09-29T14:03:24Z | http://arxiv.org/abs/2309.17254v1 | Line-of-sight gas radiation effects on near-infrared two-color ratio pyrometry measurements during plasma wind tunnel experiments
###### Abstract
Two-color ratio pyrometry is commonly used to measure the surface temperature of aerospace materials during plasma wind tunnel experiments. However, the effect of the plasma radiation on the measurement accuracy is often neglected. In this paper we formulate a model of the instrument response to analyze the systematic error induced by the gas radiation along the optical path. CFD simulations of the plasma flow field, together with a radiation code, allow to compute the gas spectral radiance within the instrument wavelength range. The measurement error is numerically assessed as a function of the true object temperature and emittance value. Our simulations explain the typical behavior observed in experiments, showing that a significant bias can affect the measured temperature during the material heating phase. For an actual experiment on a ceramic-matrix composite, a correction to the measured data is proposed, while comparative measurements with a spectrometer corroborate the results.
keywords: Two-color pyrometry, Error analysis, Gas emission, Plasma wind tunnels, Thermal Protection Materials +
Footnote †: journal: Measurement
## 1 Introduction
The hypersonic flight of a spacecraft through a planetary atmosphere is a fascinating engineering endeavor, often representing the most critical part of a
space mission. During atmospheric entry, the vehicle's kinetic energy is converted into thermal energy of the flow across a strong shock wave, creating an extreme aero-thermal environment [1]. Gas temperatures can rise above 10000 K, thus producing a chemically-reacting plasma flow. Thermal Protection Systems (TPSs) are designed to shield the spacecraft from the severe heat loads [2], ensuring safety of the crew and payload onboard. In this context, Plasma Wind Tunnels (PWTs) offer a laboratory testing environment that allows to study complex gas-surface interaction phenomena, resulting from the TPS material response to high-temperature chemically-reacting flows [3]. These facilities typically use arc heaters or Inductively Coupled Plasma (ICP) torches to produce a high-temperature plasma jet, which is impinged onto a test material sample to duplicate a real flight scenario [4].
In the context of PWT experiments, InfraRed (IR) radiometry is commonly used to characterize the material response during exposure to the plasma jet [5], as it allows non-intrusive probing of the surface temperature up to extreme values (\(\sim 3000\,\mathrm{K}\)). In particular, multi-spectral radiometry is a class of IR techniques that offers some key advantages. Single band IR measurements, in fact, require prior knowledge of the material's emittance to correct the measured signal in order to retrieve the target temperature [6]. Multi-spectral systems, instead, provide simultaneous measurements of the object irradiance in \(N\) wavelength bands. By making suitable assumptions on the behavior of the spectral emittance, \(\varepsilon_{\lambda}\), the system of measurement equations can be solved for the object temperature and the \(N-1\) parameters describing \(\varepsilon_{\lambda}\)[7; 8; 9]. As a result, these techniques allow measuring the material's surface temperature even when its emittance is unknown or changing in an unpredictable fashion, which represents a key factor for PWT metrology [10].
Two-Color Pyrometry (TCP) is the simplest case of multi-spectral radiometry, employing only two measurements bands, and it is therefore widely employed to monitor the evolution of the material surface temperature during PWT experiments [5; 10; 11; 12; 13; 14; 15; 16]. Recently, some authors have proposed advanced applications, including two-color imaging pyrometry using CCD cameras [17; 18], two-color thermographic imaging using IR cameras [19; 20; 21] and high-speed three-dimensional tomographic two-color pyrometry of flames [22]. Although one parameter can be used to model \(\varepsilon_{\lambda}\) in TCP, it is common practice to assume a uniform behavior (gray-body) in the wavelength range of interest. As a result, the ratio of the measured signals becomes independent of the emittance, allowing to solve directly for the object temperature (two-color ratio pyrometry) [7]. In this case, the sensitive wavelength bands are typically selected very close to each other to reduce the approximation error (local gray-body assumption). This, however, increases the sensitivity of the signal ratio to noise and effects of participating media along the optical path, as well as to any variation of \(\varepsilon_{\lambda}\), should it occur [23]. Accurate
selection of the wavelength bands, in terms of range and spacing, determines the sensitivity of the instrument to a change in temperature of the emitting source [24]. For values between 500 K and 2500 K, such as those typically encountered on materials during PWT experiments, the highest sensitivity is obtained around 1 \(\mathrm{\SIUnitSymbolMicro m}\). This range is also weakly affected by optical path absorption from CO\({}_{2}\) and H\({}_{2}\)O [25], making it ideal for calibration purposes.
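This band-selection argument can be made quantitative with Wien's approximation, valid when \(\mathrm{hc}/(k_{\mathrm{B}}\lambda T)\gg 1\): for band centers \(\lambda_{1}<\lambda_{2}\), the relative sensitivity of the two-color ratio is \(d\ln\rho/dT=\mathrm{hc}/(k_{\mathrm{B}}T^{2})\,(1/\lambda_{1}-1/\lambda_{2})\). A minimal sketch, with illustrative band centers rather than those of any particular instrument:

```python
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def ratio_sensitivity(lam1, lam2, T):
    """Relative sensitivity d(ln rho)/dT [1/K] of the two-color ratio
    under Wien's approximation, for band centers lam1 < lam2 in metres."""
    return H * C / (KB * T**2) * (1.0 / lam1 - 1.0 / lam2)

# e.g. bands around 1 micron viewing a 1500 K target:
print(ratio_sensitivity(0.95e-6, 1.05e-6, 1500.0))  # ~6.4e-4 per kelvin
```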
The effects on TCP related to the gray-body assumption, in case of a real surface with variable spectral emittance [19, 26], as well as to environmental factors, in terms of ambient reflections [27, 28], window transmission [29] or absorption along the optical path [25, 30], as well as background radiation [31], have received considerable attention in the literature. However, the influence of gas radiation in PWT metrology is often neglected. In this case, the plasma jet, reaching temperatures within 5000-10 000 K, produces an intense emission along the optical path between the instrument and the material's surface. This primarily originates from the spontaneous emission of excited atoms and molecules in the ultraviolet (UV) to near-infrared (NIR) wavelength region of the electromagnetic spectrum [32, 33]. As this overlaps with the wavelength range of many commercial two-color pyrometers, we can expect the plasma emission to induce systematic errors on the measured temperature.
Notably, Loesener and co-workers [34, 35] already considered this source of interference; the problem, however, was chiefly avoided either by selecting a suitable spectral region where the gas radiation was weak enough, or by measuring the temperature on the back surface of the material, so that the pyrometer optical path would not cross the plasma jet. MacDonald et al. [36] also conducted a preliminary analysis of the error induced by the plasma radiation from an ICP torch on a single-band IR pyrometer. The problem was treated more extensively in the context of plasma-sprayed particle diagnostics, where some researchers analyzed the influence of both direct plasma emission and scattered light on the measurement of the particle temperature by means of TCP. For instance, Sakuta and Boulos [37] defined a thermal visibility factor, based on the ratio of the particle emission to the sum of particle and gas emission along the line of sight. A critical plasma temperature could be identified, depending on the particle surface temperature and emittance value, defining an acceptable measurement range. Gougeon and Moreau [38] investigated the influence of the scattered light by combining spectroscopic measurements and the Mie scattering theory. They showed that large positive errors could affect the measured temperature, as later confirmed by Salhi et al. [39]. Correspondingly, a minimum measurable temperature could be defined as a function of the plasma emission intensity. More recently, Aziz et al. [40] measured the spectral signature of an Ar/He plasma loaded with zirconia particles, also concluding that TCP measurements could be significantly affected by the plasma emission along the line of sight.
Starting from these considerations, we propose to study the effect of the plasma emission along the optical path on TCP measurements during PWT experiments of TPS materials. Our approach is based on a model for the response of a two-color ratio pyrometer (sec. 2), which, after calibration (sec. 3), allows to compute the effect of the gas radiation on the measured temperature. Starting from CFD simulations of the flow field, the plasma emission along the line of sight is numerically simulated with a radiation code. The effect on the measured temperature is studied as a function of the ICP torch electric power, as well as in terms of the material surface temperature and emittance value (sec. 4), showing that a large positive bias can be expected during the transient heating phase of the material sample. Experimental results on a ceramic matrix composite (sec. 5), tested in the Plasmatron facility at the von Karman Institute for Fluid Dynamics (VKI), confirm the predicted trends. A correction to the measured signals allows to compensate for the biasing effect, while a comparative analysis with a spectrometer, simultaneously probing the line-of-sight emission, corroborates the results.
## 2 TCP Instrument Response Model
Let us consider an IR instrument with absolute spectral radiance responsivity \(R_{\lambda}=k\tilde{R}_{\lambda}\;[\mathrm{A\,W^{-1}}]\), which includes detector sensitivity and internal optics transmission, where \(\tilde{R}_{\lambda}\) is the normalized spectral response and \(k\) is the absolute sensitivity coefficient. Then, the radiometric measurement equation relates the instrument output signal, \(S\), to the spectral radiance incident on the instrument collector optics, \(L_{\lambda}\), as [41; 42]

\[S=\Theta k\int_{\Delta\lambda}\tilde{R}_{\lambda}L_{\lambda}\,d\lambda\;\;[\mathrm{A}], \tag{1}\]

where \(\Theta\;[\mathrm{m^{2}\,sr}]\) is the instrument optical throughput, \(\lambda\) is the wavelength and \(\Delta\lambda\) is the range defined by \(\tilde{R}_{\lambda}\). Then, the signal \(S\) is digitized by the instrument electronics and recorded through a software interface.
Neglecting ambient reflections and optics emission for the high object temperatures of our application (\(T_{\mathrm{obj}}>500\,\mathrm{K}\)), considering the schematic in Fig. 1, \(L_{\lambda}\) can be written as
\[\begin{split} L_{\lambda}&\approxeq\tau_{\lambda}^{\mathrm{atm}}\tau_{\lambda}^{\mathrm{win}}\varepsilon_{\lambda}L_{\lambda}^{\mathrm{bb}}(T_{\mathrm{obj}})+\tau_{\lambda}^{\mathrm{win}}L_{\lambda}^{\mathrm{g,e}}\,+\\ &+\tau_{\lambda}^{\mathrm{atm}}\tau_{\lambda}^{\mathrm{win}}(1-\varepsilon_{\lambda})L_{\lambda}^{\mathrm{g,s}}\;\;[\mathrm{W\,m^{-2}\,sr^{-1}\,\mu m^{-1}}].\end{split} \tag{2}\]
Here, \(\tau_{\lambda}^{\mathrm{atm}}\) and \(\tau_{\lambda}^{\mathrm{win}}\) represent the spectral transmittance of the atmosphere and external optics along the optical path, respectively, while \(\varepsilon_{\lambda}\) is the spectral directional emittance of the material's surface. \(L_{\lambda}^{\mathrm{bb}}(T_{\mathrm{obj}})\) is the spectral radiance of an ideal blackbody at temperature \(T_{\mathrm{obj}}\), described by Planck's law as [43]
\[L_{\lambda}^{\mathrm{bb}}(\lambda,T)=\frac{2\mathrm{h}\mathrm{c}^{2}}{\lambda^{5}}\frac{1}{\exp[\mathrm{hc}/(k_{\mathrm{B}}\lambda T)]-1}, \tag{3}\]
where h is Planck's constant, c is the speed of light and \(k_{\rm B}\) is Boltzmann's constant. Lastly, \(L_{\lambda}^{\rm g,e}\) is the spectral radiance emitted by the gas along the optical path, while \((1-\varepsilon_{\lambda})L_{\lambda}^{\rm g,s}\) is the spectral radiance emitted by the surrounding gas and reflected by the material's surface onto the instrument collector optics.
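For later numerical use, eq. 3 translates directly into a one-line function (CODATA constants; wavelength in metres):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(lam, T):
    """Blackbody spectral radiance of eq. 3 in W m^-2 sr^-1 m^-1;
    divide by 1e6 for the per-micrometre units used in eq. 2."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (KB * lam * T))
```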
In this work, we focus on the influence of \(L_{\lambda}^{\rm g,e}\) on the measured temperature; hence, the following assumptions are introduced. Considering highly emitting materials, i.e., \(\varepsilon_{\lambda}\geq 0.8\), the contribution of \(L_{\lambda}^{\rm g,s}\) in the previous equation is considered negligible. However, we remark that this could become important in case of highly reflective surfaces. Moreover, for an instrument with operating range around \(1\,\mathrm{\SIUnitSymbolMicro m}\) and a stand-off distance of \(1\,\mathrm{m}\), we consider a thin optical path (\(\tau_{\lambda}^{\rm atm}\approxeq 1\)). Finally, we also assume that the window material is carefully chosen to provide a uniform spectral transmittance in the instrument wavelength range, such that \(\tau_{\lambda}^{\rm win}=\tau_{\rm win}\).
Inserting eq. 2 in eq. 1, and considering the aforementioned assumptions, the ratio, \(\rho_{\rm m}\), of the signals, \(S_{1}\) and \(S_{2}\), output from each wavelength band of a two-color pyrometer writes
\[\rho_{\rm m}=\frac{S_{1}}{S_{2}}\approxeq\frac{k_{1}}{k_{2}}\times\frac{\int_ {\Delta\lambda_{1}}\tilde{R}_{\lambda,1}[\varepsilon_{\lambda}L_{\lambda}^{ \rm bb}(T_{\rm obj})+L_{\lambda}^{\rm g,e}]d\lambda}{\int_{\Delta\lambda_{2}} \tilde{R}_{\lambda,2}[\varepsilon_{\lambda}L_{\lambda}^{\rm bb}(T_{\rm obj})+ L_{\lambda}^{\rm g,e}]d\lambda}, \tag{4}\]
where the factor \(\varphi=k_{1}/k_{2}\) accounts for the absolute sensitivity ratio between \(R_{\lambda,1}\) and \(R_{\lambda,2}\). Since only the normalized responses \(\tilde{R}_{\lambda,i}\) are generally provided by the manufacturer, this coefficient can be determined during calibration, as later discussed in section 3.2. The previous equation also considers the same optical throughput for the two measuring bands, i.e., \(\Theta_{1}=\Theta_{2}\), which is the case for dual sandwich detectors considered in this work.
Figure 1: Schematic of the contributions to the radiance detected by the instrument. The collection volume is considered to be a cylinder of size \(A\times d\), incident on the material surface with an angle \(\alpha\) with respect to the surface normal.

During calibration, performed in a controlled laboratory environment at room temperature, optical path emission is negligible, i.e., \(L_{\lambda}^{\rm g,e}\approx 0\), and a blackbody calibration source provides \(\varepsilon_{\lambda}\approxeq 1\). Hence, eq. 4 reduces to
\[\rho_{\rm c}=\varphi\frac{\int_{\Delta\lambda_{1}}\tilde{R}_{\lambda,1}L_{\lambda}^{\rm bb}(T_{\rm obj})d\lambda}{\int_{\Delta\lambda_{2}}\tilde{R}_{\lambda,2}L_{\lambda}^{\rm bb}(T_{\rm obj})d\lambda}=\mathcal{H}_{\rm c}(T_{\rm obj}), \tag{5}\]
where \(\rho_{\rm c}\) indicates the signal ratio obtained during calibration and \(\mathcal{H}_{\rm c}\) represents the calibration curve. During measurements in a PWT, instead, emission from the high-temperature plasma along the optical path will affect the measured ratio whenever \(L_{\lambda}^{\rm g,e}\) is comparable to \(\varepsilon_{\lambda}L_{\lambda}^{\rm bb}(T_{\rm obj})\). In this case, as \(\rho_{\rm m}\) will differ from \(\rho_{\rm c}\) for the same value of \(T_{\rm obj}\), one only measures the apparent temperature
\[T_{\rm app}=\mathcal{H}_{\rm c}^{-1}(\rho_{\rm m}), \tag{6}\]
and the quantity
\[e=\frac{T_{\rm app}-T_{\rm obj}}{T_{\rm obj}}\times 100 \tag{7}\]
will describe the relative measurement error (in percent) in the following analysis. Eq. 4, eq. 5 and eq. 6 represent the Instrument Response Model (IRM) that will be used to study the effect of the gas emission along the line of sight on the temperature measured by means of TCP. For this, we consider a target material with uniform spectral emittance in the instrument range, in order to satisfy the gray-body assumption of ratio pyrometry.
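Numerically, the inversion in eq. 6 only requires tabulating the monotonic calibration curve and interpolating; a sketch continuing the previous snippets:

```python
# Tabulate the calibration curve H_c of eq. 5 (no gas emission, eps ~ 1)
T_grid = np.linspace(973.0, 2073.0, 1000)
rho_c = np.array([band_ratio(T, eps=1.0) for T in T_grid])

def apparent_temperature(rho_m):
    """T_app = H_c^{-1}(rho_m), eq. 6 (rho_c increases monotonically with T
    because band 1 sits at shorter wavelengths than band 2)."""
    return np.interp(rho_m, rho_c, T_grid)

def rel_error(T_app, T_obj):
    """Relative measurement error in percent, eq. 7."""
    return (T_app - T_obj) / T_obj * 100.0
```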
## 3 Experimental set-up and calibration
### The VKI Plasmatron facility
The VKI Plasmatron facility, rendered in Figure 2(a), is equipped with a 160 mm diameter ICP torch, powered by a 400 kHz, 1.2 MW, 2 kV electric generator and connected to a 1.4 m diameter, 2.4 m long test chamber. An extensive description of the facility and its performance was given by Bottin and co-workers [44; 45; 46]. Fig. 2(b) shows a schematic section of the test chamber and ICP torch. The latter is made up of a quartz tube, surrounded by a six-turn flat coil inductor and supplied by a gas injection system. The electric power to the coil, \(P_{\rm el}\), is monitored by a voltage-current probe, while a calibrated flow meter (F-203AV, Bronkhorst High-Tech B.V, NL) controls the mass flow rate, \(\dot{m}_{\rm gas}\), of the test gas supplied to the torch. The gas, compressed atmospheric air in this case, is heated by electromagnetic induction, thus providing a chemically pure plasma flow. Pressure in the test chamber, \(p_{\rm c}\), is measured by an absolute pressure transducer (Membranovac DM 12, Leybold GmbH, DE). The material sample is mounted onto a movable probe holder, which can be injected into the plasma flow at a distance of 385 mm from the torch exit.
### Two-color pyrometer and calibration
We used the Marathon Series MR1SB (Raytek Corporation, USA) two-color pyrometer, featuring a sandwich silicon detector with spectral responsivities between 0.75-1.15 \(\upmu\)m and 0.95-1.15 \(\upmu\)m (Fig. 3(a)) and a temperature range between 973 K and 2073 K. Signals were recorded with an integration time of 10 ms and at a frequency of 10 Hz. Optical access to the Plasmatron test chamber was provided through a 1.5 cm thick quartz window (label D in Fig. 2(a)), whose spectral transmission in the instrument range can be considered uniform with a value \(\tau_{\mathrm{win}}\approxeq 0.87\). The instrument is placed at 1 m distance from the sample and the optical access provides an inclination of about 35\({}^{\circ}\) with respect to the surface normal. The probing area over the sample surface has a size of approximately 14 mm in diameter.
The radiometric calibration of the instrument was performed with a variable temperature blackbody source (R1500T, Ametek Landcal, UK) in the range 973-1773 K. The latter features a 120\({}^{\circ}\) conical-ended silicon-carbide cavity with a 40 mm clear aperture and a PID controller to hold the set temperature. The uncertainty on the source temperature is \(\pm 3\) K, while its emittance is considered unitary and spectrally uniform in the instrument wavelength range (0.75-1.15 \(\upmu\)m).
During calibration, the two-color pyrometer is placed in front of the calibration source, replicating the operating distance and position of the window along the optical path. Then, the signal output from each band is recorded for a set of source temperatures to provide the calibration points. Figure 3(b) shows the
Figure 2: (a) Rendering of the VKI Plasmatron facility, showing the test-chamber and the location of the view ports. (b) Schematic of the experimental set-up, highlighting the main components of the ICP torch and the geometry of the optical paths of the two-color pyrometer and spectrometer. Dimensions in millimeters.
signal ratio, \(\rho_{\mathrm{c}}\), as a function of \(T_{\mathrm{obj}}\). A value of \(\varphi=0.315\) provided the best fit of the simulated response (eq. 5) to the calibration points, with a residual within \(\pm 1.5\%\). This deviation, although very limited, can be related to an imperfect knowledge of the sensor responsivities. While the simulated response will be used in sec. 4 to study the effect of \(L_{\lambda}^{\mathrm{g,e}}\) on the measured temperature, a quadratic function fits the calibration points with negligible residuals and will be used in sec. 5 to process the actual experimental data. The quadratic fit is extrapolated to \(2073\,\mathrm{K}\) to cover the whole instrument range. Uncertainties are propagated through the processing steps with a Monte Carlo sampling approach. We consider \(\pm 3\,\mathrm{K}\) on the calibration source temperature and \(\pm 1\%\) on both \(S_{1}\) and \(S_{2}\), including resolution and repeatability. This leads to about \(\pm 1.5\%\) uncertainty on the measured temperature, excluding possible biasing effects.
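A sketch of the Monte Carlo propagation described above, continuing the previous snippets; drawing the \(\pm 1\%\) signal perturbations as uniform is our assumption, and the signal values are arbitrary examples:

```python
rng = np.random.default_rng(0)
N = 20000
S1_meas, S2_meas = 0.40, 1.00   # example recorded signals (arbitrary units)

s1 = S1_meas * (1.0 + rng.uniform(-0.01, 0.01, N))   # +/-1% on S1
s2 = S2_meas * (1.0 + rng.uniform(-0.01, 0.01, N))   # +/-1% on S2
T_samples = apparent_temperature(s1 / s2)            # propagate through H_c^-1
print(T_samples.mean(), T_samples.std())             # spread ~ uncertainty
```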
### Spectrometer and calibration
We used a compact spectrometer (MAYA 2000 PRO series, Ocean Insight, USA), coupled to a custom collection optics, as shown in Fig. 4(a), to probe the spectral radiance emitted by the surface of the material sample within the \(750-1030\,\mathrm{nm}\) range. The spectrometer features a \(25\,\mathrm{\SIUnitSymbolMicro m}\) wide entrance slit, a \(300\,\mathrm{grooves/mm}\) grating and a back-thinned 2D CCD detector. The integration
Figure 3: (a) Normalized spectral responses, \(\tilde{R}_{\lambda,1}\) and \(\tilde{R}_{\lambda,2}\), of the Raytek Marathon Series MR1SB two-color pyrometer. (b) Simulated signal ratio from eq. 5 closely matches the calibration points to a maximum residual of \(1.5\%\), after fitting the modulation factor (\(\varphi~{}=~{}0.315\)). A quadratic function fits the calibration points with negligible residuals and will be used to process the actual experimental data in sec. 5.
time was set to 1 s and the instrument was triggered at a frequency of 0.5 Hz by a pulse generator (DG535, Stanford Research Systems, USA), allowing precise synchronization with the two-color pyrometer.
The collection optics consists of a 550 μm core diameter multi-mode optical fiber, a 75 mm focal length achromatic lens, a 455 nm high-pass filter and a 5 mm diameter iris. The optics are mounted on a 1 inch diameter tube, held by a kinematic mount for precise alignment to the material surface. A stack of Neutral Density (ND) filters is additionally mounted to decrease the detected irradiance and avoid saturation of the detector. Optical access to the Plasmatron chamber was provided through a 5 mm thick CaF\({}_{2}\) window (label E in Fig. 2(a)), offering \(\sim 95\%\) transmission in the wavelength range. In a similar configuration to the two-color pyrometer, the collection optics are placed at \(\sim 100\) cm from the sample surface with a \(\sim 35^{\circ}\) inclination with respect to its normal. The probing area has a size of about 10 mm in diameter on the sample surface. The spectral resolution was characterized with a low-pressure Ar lamp, resulting in about 0.75 nm full-width at half maximum.
The system was calibrated using the same reference blackbody source described previously, for a source temperature of 1773 K and reproducing the optical path encountered in the measurement. Indicating with \(\hat{U}_{\lambda}=U_{\lambda}-U_{\lambda,\mathrm{bg}}\) the raw signal detected by the spectrometer, \(U_{\lambda}\), minus the background signal, \(U_{\lambda,\mathrm{bg}}\), (in digital intensity counts) and with \(\Delta t\) the integration time, the measured spectral radiance \(L_{\lambda}^{\mathrm{m}}\) is obtained as [47]
\[L_{\lambda}^{\mathrm{m}}=\frac{\hat{U}_{\lambda}^{\mathrm{m}}}{\Delta t_{ \mathrm{m}}}\cdot f_{\lambda}^{\mathrm{c}}, \tag{8}\]
where
\[f_{\lambda}^{\mathrm{c}}=\frac{L_{\lambda}^{\mathrm{bb}}}{\hat{U}_{\lambda}^{ \mathrm{c}}/\Delta t_{\mathrm{c}}}\ \left[\mathrm{mW\ ms/(count\ cm^{2}\ \mathrm{\SIUnitSymbolMicro m \ sr})}\right] \tag{9}\]
is the calibration factor and the letters "m" and "c" indicate the quantities recorded during measurement and calibration, respectively. Figure 4(b) shows the calibration factor as a function of wavelength. The calibration law assumes linearity of the signal \(\hat{U}_{\lambda}\) with respect to both the incident radiance and the integration time, which was checked during calibration.
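Eqs. 8-9 map directly onto code; a small sketch with our own variable names, where all arrays share the spectrometer's wavelength axis:

```python
def calibration_factor(L_bb, U_cal, U_bg_cal, dt_c):
    """f_lambda^c of eq. 9: known blackbody radiance divided by the
    background-subtracted count rate recorded during calibration."""
    return L_bb / ((U_cal - U_bg_cal) / dt_c)

def measured_radiance(U_m, U_bg_m, dt_m, f_c):
    """L_lambda^m of eq. 8: background-subtracted count rate times f_lambda^c."""
    return (U_m - U_bg_m) / dt_m * f_c
```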
## 4 Numerical assessment of the measurement error due to plasma emission along the line of sight
### Plasma flow field
The subsonic plasma flow in the Plasmatron chamber was numerically simulated using a two-dimensional magnetohydrodynamic solver, referred to as CF-ICP in the following, which couples the Maxwell induction equations with the Navier-Stokes equations under the assumptions of Local Thermodynamic Equilibrium (LTE) and axisymmetric steady flow [48]. The code is integrated into the Computational Object-Oriented Library for Fluid Dynamics (COOLFluiD) [49] and relies on the Mutation++ library [50] to compute the thermodynamic and transport properties of an eleven-species air mixture (O\({}_{2}\), N\({}_{2}\), O\({}_{2}\)\({}^{+}\), N\({}_{2}\)\({}^{+}\), NO, NO\({}^{+}\), O, O\({}^{+}\), N, N\({}^{+}\), e\({}^{-}\)). Under well-established assumptions, the flow in the ICP torch can be considered continuum, partially ionized, and collision-dominated [51]. Then, the Navier-Stokes equations are used to express mass, momentum and energy conservation. The electromagnetic field is modeled with a simplified form of Maxwell's induction equation, coupled with the momentum and energy equations through Lorentz force and Joule heating effects. As the Reynolds number is typically low (\(Re\sim 100\)), the flow is assumed to be laminar and transition is neglected. The LTE model is adopted, where energy modes are assumed to follow a Maxwell-Boltzmann distribution and equilibrium chemistry occurs.
Figure 5(a) shows an example of the simulated temperature field. The domain includes the torch and a portion of the test chamber up to \(500\,\mathrm{mm}\) from the torch exit and \(160\,\mathrm{mm}\) from the jet axis, as well as a representative \(50\,\mathrm{mm}\) diameter hemispherical probe positioned at \(385\,\mathrm{mm}\) downstream of the torch. A
Figure 4: (a) Rendering of the compact spectrometer, together with the optical set-up designed for this work. (b) Calibration factor of the spectrometer as a function of wavelength.
quadrilateral mesh with \(16\,000\,\mathrm{cells}\) was used for the computation and a convergence study confirmed the grid independence. Boundary conditions are specified in terms of the torch and probe wall temperature (\(T_{\mathrm{w}}=350\,\mathrm{K}\)), chamber pressure (\(p_{\mathrm{c}}=50\,\mathrm{mbar}\)), inlet gas mass flow rate (\(\dot{m}_{\mathrm{gas}}=16\,\mathrm{g/s}\)) and input numerical electric power (\(P_{\mathrm{el}}^{\mathrm{sim}}\)). Nine simulations were carried out for \(50\,\mathrm{kW}<P_{\mathrm{el}}^{\mathrm{sim}}<130\,\mathrm{kW}\), with increasing steps of \(10\,\mathrm{kW}\). In this regard, it is important to notice that \(P_{\mathrm{el}}^{\mathrm{sim}}\) differs from the value measured experimentally, \(P_{\mathrm{el}}\), due to the energy efficiency of the ICP torch. Recent comparison with spectroscopic temperature measurements of the gas at different axial locations suggested that \(P_{\mathrm{el}}^{\mathrm{sim}}/P_{\mathrm{el}}\approxeq 35\)-\(40\%\)[33].
The figure also represents a \(1\,\mathrm{m}\) long optical slab (\(\zeta\)-axis), originating from the surface of the probe with a \(35^{\circ}\) inclination with respect to the jet axis and reaching the two-color pyrometer. Figure 5(b) shows the gas temperature and pressure along the optical slab for the different values of \(P_{\mathrm{el}}^{\mathrm{sim}}\). Temperature increases rapidly through the boundary layer around the probe, up to several thousands of Kelvin, and drops to \(350\,\mathrm{K}\) outside the jet core, i.e., for \(\zeta>150\,\mathrm{mm}\). Pressure, instead, is fairly uniform in the test chamber.
Figure 5: (a) Temperature field of the VKI Plasmatron ICP torch, simulated with CF-ICP code for an eleven-species air mixture with \(p_{\mathrm{c}}=50\,\mathrm{mbar}\), \(\dot{m}_{\mathrm{gas}}=16\,\mathrm{g}\,\mathrm{s}^{-1}\) and \(P_{\mathrm{el}}^{\mathrm{sim}}=90\,\mathrm{kW}\). Also represented is the \(1\,\mathrm{m}\) optical slab from the material sample up to the instrument collection optics. (b) Temperature and pressure distributions along the optical slab, extracted from the CFD simulations as a function of \(P_{\mathrm{el}}^{\mathrm{sim}}\). Since the domain is limited to \(160\,\mathrm{mm}\) from the jet axis, values are extrapolated up to the position of the window (\(\zeta=800\,\mathrm{mm}\)), while laboratory conditions (\(T=300\,\mathrm{K}\) and \(p=1\,\mathrm{atm}\)) are considered for \(800\,\mathrm{mm}<\zeta<1000\,\mathrm{mm}\).
Since the simulation domain terminates at \(160\,\mathrm{mm}\) from the jet axis, corresponding to \(278\,\mathrm{mm}\) along \(\zeta\), quantities were uniformly extrapolated up to the position of the window, i.e., \(\zeta=800\,\mathrm{mm}\), while outside the test chamber, i.e., for \(\zeta>800\,\mathrm{mm}\), we assume laboratory conditions with \(T=300\,\mathrm{K}\) and \(p=1\,\mathrm{atm}\).
In order to reduce the computational effort of the CFD simulations, the temperature on the surface of the probe was fixed at \(350\,\mathrm{K}\). During a PWT experiment, instead, this will rise to an equilibrium value as a result of the heat transfer between the gas and the material. As the thermal boundary layer around the probe is very small (\(\sim 5\,\mathrm{mm}\)) with respect to the jet core (\(\sim 150\,\mathrm{mm}\)), where most of the emission originates, we expect a limited effect on \(L_{\lambda}^{\mathrm{g,e}}\).
While a detailed validation of the CFD simulations of the plasma flow field is currently missing, the following analysis aims to show the principal effects of plasma emission on the measured surface temperature, providing better understanding and interpretation of the experimental results.
### Line-of-sight and surface radiance
We used the Non-EQUilibrium Air Radiation (NEQAIR v.15.0) code [52; 53] to simulate the spectral radiance emitted by the plasma along the line of sight. The code computes line-by-line radiation spectra, including spontaneous emission, absorption and stimulated emission, due to transitions between different energy states of atomic and molecular species through a non-uniform gas mixture. Inputs to the code are the gas temperature and number density of constituent species along the optical path, which were provided by the aforementioned CFD simulations. Considering LTE, the population of the excited states is assumed to follow a Boltzmann distribution.
For different values of \(P_{\mathrm{el}}^{\mathrm{sim}}\), Fig. 6 shows the simulated \(L_{\lambda}^{\mathrm{g,e}}\) spectra in the \(0.7\)-\(1.2\,\mathrm{\SIUnitSymbolMicro m}\) range. For visualization purposes, the spectral resolution was downgraded through a convolution with a Gaussian lineshape of \(0.75\,\mathrm{nm}\) at full-width at half maximum. The lines originating from excited states of O and N atoms are clearly evident, together with the background radiation, mainly due to the first-positive rovibronic transition of N\({}_{2}\). In the same plot, the spectral radiance \(L_{\lambda}^{\mathrm{obj}}=\varepsilon_{\lambda}L_{\lambda}^{\mathrm{bb}}(T _{\mathrm{obj}})\), from a gray-body with \(\varepsilon_{\lambda}=0.85\), is also shown for different values of the object temperature between \(750\,\mathrm{K}\) and \(2000\,\mathrm{K}\).
Lastly, the shaded gray areas represent the sensitive wavelength bands of the MR1SB two-color pyrometer. We can observe how emission from the plasma easily overcomes the object's radiance up to \(T_{\mathrm{obj}}=1000\,\mathrm{K}\), while the peaks corresponding to the atomic lines achieve comparable values even above \(T_{\mathrm{obj}}=1500\,\mathrm{K}\). Additionally, since \(L_{\lambda}^{\mathrm{g,e}}\) is highly non-uniform within the sensitive bands of the two-color pyrometer, the ratio of the detected signals can be strongly affected.
### Sample visibility factors
For a range of temperatures \(500\,\mathrm{K}<T_{\mathrm{obj}}<2000\,\mathrm{K}\) and \(\varepsilon_{\lambda}=0.85\), we compute the sample band visibility factors, \(\nu_{\Delta\lambda_{i}}\), adapting the definition of Sakuta and Boulos [37] to consider also the instrument spectral response in each band
\[\nu_{\Delta\lambda_{i}}=\frac{\int_{\Delta\lambda_{i}}\tilde{R}_{\lambda,i}\varepsilon_{\lambda}L_{\lambda}^{\mathrm{bb}}(T_{\mathrm{obj}})\,d\lambda}{\int_{\Delta\lambda_{i}}\tilde{R}_{\lambda,i}\left[\varepsilon_{\lambda}L_{\lambda}^{\mathrm{bb}}(T_{\mathrm{obj}})+L_{\lambda}^{\mathrm{g,e}}\right]d\lambda}. \tag{10}\]
These coefficients relate the detected object emission to the sum of the object and plasma emission along the line of sight, clearly showing the plasma interference effect on radiation thermometry.
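On the discrete grid used in the earlier snippets, eq. 10 becomes a one-line computation; here `R` is one of the band responsivities and `L_gas` the simulated line-of-sight emission spectrum:

```python
def visibility(T_obj, R, L_gas, eps=0.85):
    """Band visibility factor nu of eq. 10 (uniform grid, d-lambda cancels)."""
    L_obj = eps * planck_radiance(lam, T_obj)
    return np.sum(R * L_obj) / np.sum(R * (L_obj + L_gas))
```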
Fig. 7(a) shows the trend of \(\nu_{\Delta\lambda_{1}}\) and \(\nu_{\Delta\lambda_{2}}\) with \(T_{\mathrm{obj}}\) for different values of \(P_{\mathrm{el}}^{\mathrm{sim}}\). Both visibility factors are close to zero below \(750\,\mathrm{K}\), where the plasma emission significantly overcomes the object emission and prevents its detection. A transition region is observed for \(750\,\mathrm{K}<T_{\mathrm{obj}}<1500\,\mathrm{K}\), where object and plasma emission have similar intensities. Only above \(1500\,\mathrm{K}\) does the plasma emission become negligible and the visibility factors approach one. Additionally, Fig. 7(b) shows the value of \(T_{\mathrm{obj}}\) as a function of \(P_{\mathrm{el}}^{\mathrm{sim}}\) for which \(\nu_{\Delta\lambda_{i}}=0.99\) and \(\nu_{\Delta\lambda_{i}}=0.90\), respectively. This highlights that, for a certain value of the object temperature, the visibility factors have different values in each band, which can further increase the biasing effect on the measured signal ratio.
### Simulated apparent temperature and measurement error
Using the instrument response model formulated in sec. 2, we simulate the effect of the plasma emission on the measured apparent temperature. For \(\varepsilon_{\lambda}\;=\;0.85\) and \(500\,\mathrm{K}\;<\;T_{\mathrm{obj}}\;<\;2500\,\mathrm{K}\), \(L_{\lambda}^{\mathrm{g,e}}\) is inserted in eq. 4 to simulate the measured signal ratio. Then, the apparent temperature is computed from eq. 6. Depending on the value of \(P_{\mathrm{el}}^{\mathrm{sim}}\), Fig. 8(a) clearly demonstrates that \(T_{\mathrm{app}}\) deviates considerably from \(T_{\mathrm{obj}}\) when the latter is below 1500 K. Correspondingly, the relative error is computed according to eq. 7 and shown in Fig. 8(b). A large systematic error is found for low values of \(T_{\mathrm{obj}}\), while the steepness of the curves demonstrates the high sensitivity to \(L_{\lambda}^{\mathrm{g,e}}\).
Figure 8(c) synthetically depicts a map of the error induced by \(L_{\lambda}^{\mathrm{g,e}}\), where each curve represents \(e=1\%\) as a function of \(T_{\mathrm{obj}}\) and \(\varepsilon_{\lambda}\). A large portion of the instrument range can be affected by errors larger than \(1\%\), this being more pronounced for high values of \(P_{\mathrm{el}}^{\mathrm{sim}}\) and low values of \(\varepsilon_{\lambda}\). However, we should notice that an instrument with a higher temperature range is typically selected for high values of \(P_{\mathrm{el}}^{\mathrm{sim}}\) to accommodate the higher values also expected for \(T_{\mathrm{obj}}\). Moreover, a horizontal asymptote is reached for \(\varepsilon_{\lambda}\to 0\), since the instrument would only detect the plasma radiation in this limit condition. This graph is useful to provide a first estimation of the error induced by \(L_{\lambda}^{\mathrm{g,e}}\) on the measured temperature, knowing approximately the expected values of \(T_{\mathrm{obj}}\) and \(\varepsilon_{\lambda}\) for a selected power condition during the experiment.
### Simulated effect during transient heating
Finally, we analyze the effect occurring during the transient heating phase of a material sample for a PWT experiment. As the latter is injected into the plasma flow, we consider the time evolution of the surface temperature to follow the analytical function
\[T_{\mathrm{obj}}(t)=\left\{\begin{aligned} & T_{0}\;\;\mathrm{for}\;\; \mathrm{t}<0\,\mathrm{s}\\ & T_{0}+g\left[1-(1+\omega_{0}t)\exp(-\omega_{0}t)\right]\;\; \mathrm{for}\;\;\mathrm{t}>0\,\mathrm{s},\end{aligned}\right. \tag{11}\]
which represents the step response of a critically damped second-order system, with gain \(g\) and natural frequency \(\omega_{0}\). We assume an initial temperature of \(T_{0}=350\,\mathrm{K}\), while \(g=1250\,\mathrm{K}\) and \(\omega_{0}=0.1\,\mathrm{s}^{-1}\) provide a steady-state temperature of \(1600\,\mathrm{K}\) after \(80\,\mathrm{s}\). These values are selected to represent qualitatively the experimental results, discussed later in sec. 5. Figure 8(d) shows \(T_{\mathrm{obj}}(t)\), along with the simulated behavior of the apparent temperature, \(T_{\mathrm{app}}(t)\), as it would be measured by the two-color pyrometer including the effect of \(L_{\lambda}^{\mathrm{g,e}}\) for two representative conditions with \(P_{\mathrm{el}}^{\mathrm{sim}}=60\,\mathrm{kW}\) and \(P_{\mathrm{el}}^{\mathrm{sim}}=80\,\mathrm{kW}\), considering \(\varepsilon_{\lambda}=0.85\). The bias induced by the gas radiation along the instrument line of sight causes \(T_{\mathrm{app}}\) to drop from large values, rather than to follow the rising trend of \(T_{\mathrm{obj}}\), thus resulting in a significant error during the transient heating. As time evolves, the apparent temperature approaches the object temperature when this becomes high enough for \(L_{\lambda}^{\mathrm{g,e}}\) to become negligible.
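A sketch of eq. 11 with the values quoted above:

```python
def T_transient(t, T0=350.0, g=1250.0, w0=0.1):
    """Step response of a critically damped second-order system, eq. 11."""
    t = np.asarray(t, dtype=float)
    return np.where(t < 0.0, T0,
                    T0 + g * (1.0 - (1.0 + w0 * t) * np.exp(-w0 * t)))

t = np.linspace(-10.0, 120.0, 600)
T_obj_t = T_transient(t)   # approaches T0 + g = 1600 K after ~80 s
```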
Figure 8: (a) Simulated effect of \(L_{\lambda}^{\text{g,e}}\) on \(T_{\text{app}}\) for \(\varepsilon_{\lambda}=0.85\), as a function of \(P_{\text{el}}^{\text{sim}}\). (b) Relative error percent of \(T_{\text{app}}\) with respect to \(T_{\text{obj}}\) for \(\varepsilon_{\lambda}=0.85\), as a function of \(P_{\text{el}}^{\text{sim}}\). (c) Map of the simulated measurement error induced by \(L_{\lambda}^{\text{g,e}}\) as a function of \(T_{\text{obj}}\) and \(\varepsilon_{\lambda}\). Each line represents \(e=1\%\) and separates the regions for which \(e>1\%\) and \(e<1\%\), respectively. (d) Expected material temperature during a PWT experiment, \(T_{\text{obj}}\), and correspondent simulated apparent temperature, \(T_{\text{app}}\), as it would be measured by the two-color pyrometer considering the influence of \(L_{\lambda}^{\text{g,e}}\) for \(P_{\text{el}}^{\text{sim}}=60\,\text{kW}\) and \(P_{\text{el}}^{\text{sim}}=80\,\text{kW}\). A positive bias induces \(T_{\text{app}}\) to decrease during the transient heating and to approach \(T_{\text{obj}}\) only when its value is high enough.
## 5 Experimental results and discussion
### Material sample and test conditions
Figure 9(a) shows the ceramic matrix composite sample (Keraman C/SiC, MT-Aerospace) that was selected to study the effect of \(L_{\lambda}^{\mathrm{g,e}}\) on TCP during a PWT experiment. The material is a 26.5 mm diameter, 3 mm thick disc, made up of a carbon fiber matrix, with 7 μm diameter filaments, and treated with a polymer vapor infiltration before a final coating (60-80 μm thick) with SiC by chemical vapor deposition. Room temperature spectral reflectance, \(r_{\lambda}\), was characterized before the experiment using a Cary-500 spectrophotometer (Agilent Technologies Inc., USA). The normal spectral emittance in Fig. 9(b), obtained as \(\varepsilon_{\lambda}=1-r_{\lambda}\), shows a uniform value within 0.75-1.15 μm, close to 0.85. Hence, this allowed us to rely on the gray-body assumption for TCP within the instrument range, as well as to consider a negligible influence of \(L_{\lambda}^{\mathrm{g,s}}\) due to the large value of \(\varepsilon_{\lambda}\). Figure 9(c) shows a picture during the plasma exposure, where the material sample was mounted on the 50 mm diameter ESA standard probe and positioned at 385 mm from the ICP torch exit. The experimental values, defining the test conditions achieved in the VKI Plasmatron facility, are reported in table 1.
### Two-color pyrometry measurement and correction
The time evolution of the measured signals from the two wavelength bands, \(S_{1}\) and \(S_{2}\), is shown in Fig. 10(a). Here, \(t=0\) s represents the instant when the sample is injected into the plasma flow. After a transient heating phase of approximately 80 s, the signals settle to a steady value; then, the plasma torch is switched off at \(t=300\) s, allowing to record the sample cool down until \(t=400\) s.
We can notice that both \(S_{1}\) and \(S_{2}\) drop at \(t=0\) s, as the sample obstructs the lower half of the plasma jet after injection. Moreover, shortly after \(t=0\) s we can consider \(T_{\mathrm{obj}}\) to be low enough such that \(L_{\lambda}^{\mathrm{g,e}}\gg L_{\lambda}^{\mathrm{obj}}\) and \(L_{\lambda}\approxeq L_{\lambda}^{\mathrm{g,e}}\). We indicate with \(S_{1}^{\mathrm{ref}}\) and \(S_{2}^{\mathrm{ref}}\) the time-averaged signals within a reference interval \(\Delta t_{\mathrm{ref}}\) shortly after the injection time, in this case between 0 s and 6 s. This range should be chosen according to the particular measurement condition and must end before the signals start to rise due to the emission from the object's surface. Since \(L_{\lambda}\approxeq L_{\lambda}^{\mathrm{g,e}}\) within \(\Delta t_{\mathrm{ref}}\), then eq. 1 yields
\[S_{i}^{\mathrm{ref}}\approxeq k_{i}\Theta\int_{\Delta\lambda_{i}}\tilde{R}_{ \lambda,i}L_{\lambda}^{\mathrm{g,e}}d\lambda. \tag{12}\]
| distance [mm] | duration [s] | gas | \(P_{\mathrm{el}}\) [kW] | \(\dot{m}_{\mathrm{gas}}\) [g/s] | \(p_{\mathrm{c}}\) [mbar] |
| --- | --- | --- | --- | --- | --- |
| \(385\pm 1\) | 300 | air | \(160\pm 10\) | \(16\pm 0.15\) | \(50\pm 1\) |

Table 1: Plasmatron experimental conditions, listing the distance to the torch exit, duration of the plasma exposure, electric power \(P_{\mathrm{el}}\), test gas mass flow rate \(\dot{m}_{\mathrm{gas}}\) and chamber pressure \(p_{\mathrm{c}}\).
Assuming that \(L_{\lambda}^{\rm g,e}\) does not vary significantly during the sample heating and steady-state phases, the corrected signals
\[S_{i}^{\rm corr}=\left\{\begin{array}{ll}S_{i}-S_{i}^{\rm ref}\approxeq k_{i} \Theta\int_{\Delta\lambda_{i}}\tilde{R}_{\lambda,i}\varepsilon_{\lambda}L_{ \lambda}^{\rm bb}(T_{\rm obj})d\lambda\ \ \mbox{for}\ 0\leq t\leq 300\,{\rm s}\\ S_{i}\ \ \mbox{for}\ t>300\,{\rm s}\end{array}\right. \tag{13}\]
approximate the contributions originating from the object's surface only. Notice that a correction is not necessary after 300 s, as \(L_{\lambda}^{\rm g,e}=0\) after the plasma is switched off.
The measured apparent temperature, \(T_{\rm app}^{\rm meas}\), is obtained considering the ratio \(\rho_{\rm m}=S_{1}/S_{2}\) and applying the calibration curve obtained in sec. 3.2 as \(T_{\rm app}=\mathcal{H}_{\rm c}^{-1}(\rho_{\rm m})\). The corrected ratio \(\rho_{\rm m}^{\rm corr}=S_{1}^{\rm corr}/S_{2}^{\rm corr}\), instead, can be used to determine a corrected value of the apparent temperature \(T_{\rm app}^{\rm corr}=\mathcal{H}_{\rm c}^{-1}(\rho_{\rm m}^{\rm corr})\). The latter represents the best estimate of the object temperature, removing the biasing effect of the plasma emission along the line of sight. Figure 10(b) shows the measured and corrected temperature values. Notice how, before correction, \(T_{\rm app}^{\rm meas}\) follows a trend similar to that predicted by our simulations in Fig. 8(d). Namely, the value drops from the top of the instrument range, to rise again only after \(t\approxeq 20\,{\rm s}\). Hence, this peculiar trend can be associated with the biasing effect of \(L_{\lambda}^{\rm g,e}\) on the measured signal. The corrected temperature, instead, recovers a rising trend, as expected during the transient heating of the material. At steady state, the surface temperature is high enough for \(L_{\lambda}^{\rm g,e}\) to have a small effect and the correction has a negligible impact.
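Putting eqs. 12-13 and eq. 6 together, the correction chain can be sketched as follows, continuing the earlier snippets (array inputs over time; `t_ref` must end before the surface emission becomes visible):

```python
def corrected_temperature(t, S1, S2, t_ref=(0.0, 6.0), t_off=300.0):
    """Average the signals within t_ref (eq. 12), subtract them while the
    plasma is on (eq. 13), and invert the calibration on the corrected ratio."""
    ref = (t >= t_ref[0]) & (t <= t_ref[1])
    S1_ref, S2_ref = S1[ref].mean(), S2[ref].mean()
    on = t <= t_off                 # no correction needed after plasma off
    S1c = np.where(on, S1 - S1_ref, S1)
    S2c = np.where(on, S2 - S2_ref, S2)
    return apparent_temperature(S1c / S2c)   # T_app^corr
```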
Considering \(T_{\rm app}^{\rm corr}\) to be best representative of the material surface temperature, then \(e=(T_{\rm app}^{\rm meas}-T_{\rm app}^{\rm corr})/T_{\rm app}^{\rm corr}\) represents the relative error of the measured value with respect to the corrected one. Figure 10(c) shows that \(T_{\rm app}^{\rm meas}\) falls within 1% of \(T_{\rm app}^{\rm corr}\) only above 1350 K, while the error is larger than 10% below 1120 K. Their difference becomes negligible (0.2%) at steady state and, since no correction is applied after the plasma is switched off, their values coincide during the cooling phase.

Figure 9: (a) Picture of the C/SiC sample before the experiment. (b) Normal spectral emittance obtained from room temperature reflectometry of the virgin sample shows \(\varepsilon_{\lambda}\approxeq 0.85\), with a uniform behavior within 0.75-1.15 μm. (c) Picture taken during the plasma exposure, showing the material sample inserted into the probe cover.

Figure 10: (a) Measured signals \(S_{1}\) and \(S_{2}\), showing the heating, steady-state and cooling phases. The reference signals for correction, \(S_{1}^{\rm ref}\) and \(S_{2}^{\rm ref}\), are obtained by averaging the values within \(0\,\mathrm{s}<t<6\,\mathrm{s}\). (b) Comparison between the measured apparent temperature and its corrected value. Notice how \(T_{\rm app}^{\rm meas}\) decreases during the heating phase, similarly to what has been shown in our simulations (Fig. 8(d)). \(T_{\rm app}^{\rm corr}\), instead, recovers an increasing trend, as expected. (c) Relative error between the measured and corrected apparent temperature. \(T_{\rm app}^{\rm meas}\) falls within 1% of \(T_{\rm app}^{\rm corr}\) only above 1350 K, while their difference is negligible at steady state. Since no correction is applied during the cooling phase, their values coincide.
### Comparison to spectrometry
The spectrometer, described in sec. 3.3, allowed us to probe the spectral radiance emitted by the object's surface and gas along the line of sight. Following the previous discussion, since we expect \(L_{\lambda}^{\mathrm{obj}}\ll L_{\lambda}^{\mathrm{g,e}}\) for \(0\,\mathrm{s}<t<6\,\mathrm{s}\), a reference spectrum \(L_{\lambda}^{\mathrm{ref}}\) within this time interval is such that \(L_{\lambda}^{\mathrm{ref}}\approxeq L_{\lambda}^{\mathrm{g,e}}\). Figure 11(a) compares \(L_{\lambda}^{\mathrm{ref}}\) at \(t=2\,\mathrm{s}\) to the values of \(L_{\lambda}^{\mathrm{g,e}}\), computed in sec. 4.2 for \(P_{\mathrm{el}}^{\mathrm{sim}}=80\,\mathrm{kW}\) and \(P_{\mathrm{el}}^{\mathrm{sim}}=90\,\mathrm{kW}\), demonstrating that our radiative transfer simulations provide intensities comparable to those observed during actual experiments.
To measure the surface temperature from the observed spectra, these are first corrected for the reference spectrum as
\[\left\{\begin{aligned} L_{\lambda}^{\mathrm{corr}}& =L_{\lambda}^{\mathrm{m}}-L_{\lambda}^{\mathrm{ref}}\approxeq\varepsilon_{ \lambda}L_{\lambda}^{\mathrm{bb}}(T_{\mathrm{obj}})\;\;\mathrm{for}\;0\leq t \leq 300\,\mathrm{s}\\ L_{\lambda}^{\mathrm{corr}}&=L_{\lambda}^{\mathrm{m}} \;\;\mathrm{for}\;t>300\,\mathrm{s}.\end{aligned}\right. \tag{14}\]
Then, employing a method similar to the one shown by Savino et al. [20], the material's surface temperature can be obtained by a least-square fitting of the normalized spectrum \(\tilde{L}_{\lambda}^{\mathrm{corr}}=L_{\lambda}^{\mathrm{corr}}/L_{\lambda_{\mathrm{norm}}}^{\mathrm{corr}}\) with the normalized Planck distribution \(\tilde{L}_{\lambda}^{\mathrm{bb}}(T)=L_{\lambda}^{\mathrm{bb}}(T)/L_{\lambda_{\mathrm{norm}}}^{\mathrm{bb}}(T)\). For this procedure we consider \(\lambda_{\mathrm{norm}}=1.03\,\mu\mathrm{m}\), while the fitting provides the temperature \(T_{\mathrm{sp}}\). Figure 11(b) shows some representative \(\tilde{L}_{\lambda}^{\mathrm{corr}}\) spectra at different times, together with the fitted \(\tilde{L}_{\lambda}^{\mathrm{bb}}(T_{\mathrm{sp}})\) spectra, and the corresponding temperatures. Before \(t=16\,\mathrm{s}\), however, the signal is too low to provide reliable measurements. Figure 11(c) compares the surface temperature measured by means of the spectroscopic method, \(T_{\mathrm{sp}}\), with the corrected value from the two-color ratio pyrometry, \(T_{\mathrm{app}}^{\mathrm{corr}}\), obtained in the previous section. Their values show close agreement, confirming that the subtraction of the reference signals corresponding to the line-of-sight plasma radiation allows a reliable measurement of the material surface temperature to be recovered during the transient heating phase.
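The spectral fit for \(T_{\rm sp}\) reduces to a one-dimensional least-squares problem; a sketch using SciPy and the earlier `planck_radiance` helper (the search bounds are our assumption):

```python
from scipy.optimize import minimize_scalar

def fit_T_sp(lam_m, L_corr, lam_norm=1.03e-6):
    """Fit the normalized corrected spectrum with a normalized Planck
    distribution (gray-body assumption) and return T_sp."""
    i = int(np.argmin(np.abs(lam_m - lam_norm)))
    L_tilde = L_corr / L_corr[i]

    def cost(T):
        B = planck_radiance(lam_m, T)
        return float(np.sum((L_tilde - B / B[i]) ** 2))

    return minimize_scalar(cost, bounds=(500.0, 2500.0), method="bounded").x
```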
## 6 Conclusions
Two-color ratio pyrometry provides powerful in situ diagnostics for plasma wind tunnel experiments of aerospace materials. However, the intense plasma radiation along the optical path poses a major concern for measurement accuracy.
In this work, we formulated a model of the instrument response that allowed a numerical assessment of the error. The gas spectral radiance was computed with a radiation code, using temperature, pressure, and gas composition provided by CFD simulations of the plasma flow field. Our results show that gas radiation reaches a comparable intensity to the one emitted by the material for typical surface temperatures expected during PWT experiments. As a result, the measured signal ratio is shifted with respect to the value obtained during calibration, causing large positive errors in the measured temperature. The effect is particularly relevant during the transient heating phase of the material sample exposed to the plasma jet, while steady-state values appear negligibly affected.

Figure 11: (a) Reference spectrum, \(L_{\lambda}^{\mathrm{ref}}\), measured at \(t=2\,\mathrm{s}\), representing the spectral radiance of the gas along the optical path. Simulated \(L_{\lambda}^{\mathrm{g,e}}\) spectra for \(P_{\mathrm{el}}^{\mathrm{sim}}=80\) and \(90\,\mathrm{kW}\), obtained in sec. 4.2, provide comparable intensities. (b) Normalized corrected spectral radiance of the object surface, \(\tilde{L}_{\lambda}^{\mathrm{corr}}\), for different time instants after injection. These are fitted with the normalized Planck distribution, \(\tilde{L}_{\lambda}^{\mathrm{bb}}(T)\), to provide the temperature \(T_{\mathrm{sp}}\). (c) Comparison between \(T_{\mathrm{sp}}\) and the corrected apparent temperature from the two-color pyrometer, \(T_{\mathrm{app}}^{\mathrm{corr}}\), shows close agreement.
A plasma wind tunnel test on a ceramic matrix composite material provided experimental data that confirmed the predicted trends, highlighting that a correction is necessary to improve the accuracy. Subtraction of reference signals, obtained right after the injection time, proved to be a valuable procedure for a high-emittance material, as those signals can be interpreted as the contribution originating from the gas radiation only. A spectrometer, simultaneously pointed at the material, provided the spectral signature of both the gas and the surface, confirming the correction.
With the presented work, we provide a methodology to predict and understand an important biasing effect on ratio pyrometry, extending previous analyses concerning the influence of different environmental factors, and allowing experimentalists to achieve improved measurements in applications involving high-temperature gases. While a measurement correction is possible, future work should focus on identifying different wavelength ranges to minimize the interference of the plasma radiation. In this case, a compromise with the instrument's sensitivity needs to be accounted for. Additionally, while high-emittance materials allowed reflections from the surrounding gas to be neglected, these could become important and would require further consideration in the case, for instance, of low-emittance metallic samples.
## Acknowledgments
The research of A. Fagnani was funded by the Research Foundation - Flanders (dossier n. 1SB3121N). The experimental activities of this work were supported by the Air Force Office of Scientific Research (Grant N. FA9550-18-1-0209). V. Romano is acknowledged for the mesh convergence study on the CF-ICP code simulations. The authors would like to thank J. Freitas Monteiro for the precious help as Plasmatron test engineer and P. Collin as Plasmatron technical operator. S. Smolka, B. Bras and G. van Papendrecht are acknowledged for the support with the reflectometry measurements at the ESA ESTEC laboratories.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2309.08374 | Understanding the limitations of self-supervised learning for tabular
anomaly detection | While self-supervised learning has improved anomaly detection in computer
vision and natural language processing, it is unclear whether tabular data can
benefit from it. This paper explores the limitations of self-supervision for
tabular anomaly detection. We conduct several experiments spanning various
pretext tasks on 26 benchmark datasets to understand why this is the case. Our
results confirm representations derived from self-supervision do not improve
tabular anomaly detection performance compared to using the raw representations
of the data. We show this is due to neural networks introducing irrelevant
features, which reduces the effectiveness of anomaly detectors. However, we
demonstrate that using a subspace of the neural network's representation can
recover performance. | Kimberly T. Mai, Toby Davies, Lewis D. Griffin | 2023-09-15T13:04:11Z | http://arxiv.org/abs/2309.08374v3 | # Understanding the limitations of self-supervised learning for tabular anomaly detection
###### Abstract
While self-supervised learning has improved anomaly detection in computer vision and natural language processing, it is unclear whether tabular data can benefit from it. This paper explores the limitations of self-supervision for tabular anomaly detection. We conduct several experiments spanning various pretext tasks on 26 benchmark datasets to understand why this is the case. Our results confirm representations derived from self-supervision do not improve tabular anomaly detection performance compared to using the raw representations of the data. We show this is due to neural networks introducing irrelevant features, which reduces the effectiveness of anomaly detectors. However, we demonstrate that using a subspace of the neural network's representation can recover performance.
**Keywords:** anomaly detection, deep learning, self-supervised learning, tabular data
**Statements and Declarations**: This work was supported by funding from EPSRC under grant EP/R513143/1.
## 1 Introduction
Anomaly detection is the task of identifying unusual instances. Two issues hinder performance: how to obtain a "good" representation of the normal data and a lack of knowledge about the nature of anomalies. The emergence of self-supervised learning techniques has primarily addressed these issues in complex domains such as computer vision and natural language processing [1, 2]. However, these techniques have not yielded the same benefits for tabular data [3].
Self-supervised learning typically uses a pretext task to learn the intrinsic structure of the training data [4]. Examples of pretext tasks include colourising greyscale images [5] or predicting the next word in a sentence [6, 7]. Understanding the typical characteristics of a domain allows one to choose an effective pretext task. For instance, colourisation requires knowledge of object boundaries and semantics. These aspects are useful for image classification [8, 9]. However, unlike images or text where spatial or sequential biases are natural starting points for self-supervision, the starting points for tabular data are unclear.
A recent study indicated that self-supervised learning does not help tabular anomaly detection [3]. Reiss et al. compared two self-supervised methods with \(k\)-nearest neighbours (\(k\)-NN) on the original features. Even though the methods were designed for tabular data, they found that \(k\)-NN on the original features worked the best.
We seek to understand _why_ this is the case. We extend the experiments to include a more comprehensive suite of pretext tasks. We also incorporate synthetic test cases and analyse the underlying learnt representations. Our results reinforce that self-supervision does not improve tabular anomaly detection performance and indicate deep neural networks introduce redundant features, which reduces the effectiveness of anomaly detectors. Conversely, we can recover performance using a subspace of the neural network's representation. We also show that self-supervised learning can outperform the original representation in the case of purely localised anomalies and those with different dependency structures.
In addition to the above investigations, we ran a series of experiments to benchmark anomaly detection performance in a setting where we do not have access to anomalies during training. We include our findings as a complement to the self-supervision results and to provide practical insight into scenarios where specific detectors work better than others.
Our contributions are as follows:
1. We reconfirm the ineffectiveness of self-supervision for tabular anomaly detection.
2. We empirically investigate why self-supervision does not benefit tabular anomaly detection.
3. We introduce a comprehensive one-class anomaly detection benchmark using several self-supervised methods.
4. We provide practical insights and identify instances where particular anomaly detectors and pretext tasks may be beneficial.
In Section 2, we cover the anomaly detection setup. We proceed to outline our experimental approach in Section 3. We evaluate our findings in Section 4. Finally, we summarise our work and conclude in Section 5.
## 2 Background
### Anomaly detection
Anomaly detection can be characterised as follows:
Let \(\mathcal{X}\in\mathbb{R}^{d}\) represent the data space. We assume the normal data is drawn from a distribution \(\mathcal{P}\) on \(\mathcal{X}\). Anomalies are data points \(\mathbf{x}\in\mathcal{X}\) that lie in a low probability region in \(\mathcal{P}\). Therefore, the set of anomalies can be defined as follows [10]:
\[\mathcal{A}=\{\mathbf{x}\in\mathcal{X}|p(\mathbf{x})\leq\tau\},\quad\tau\geq 0 \tag{1}\]
Where \(\tau\) is a threshold. Often, the original input space is not used, as anomaly detection performance can be improved by using a different representation space. In the context of deep learning, a neural network parameterised by \(\theta:\mathcal{X}\mapsto\mathcal{Y}\) (where \(\mathcal{Y}\in\mathbb{R}^{m}\)) is used to transform the input data. The anomalies are assumed to lie in a low-probability region in the new space. Namely, for invertible \(\theta\), the density \(p\) on \(\mathcal{X}\) transforms to \(p^{\prime}\) on \(\mathcal{Y}\) according to the change-of-variables formula \(p^{\prime}(\theta(\mathbf{x}))=p(\mathbf{x})\,|\det\mathbf{J}_{\theta}(\mathbf{x})|^{-1}\), where \(\mathbf{J}_{\theta}\) is the Jacobian of \(\theta\). If \(\theta\) is an effective mapping, then \(\theta(\mathcal{A})\) will still lie in a low-probability region of \(p^{\prime}\) and will have a simpler boundary in \(\mathcal{Y}\) than \(\mathcal{A}\) has in \(\mathcal{X}\).
There are deep anomaly detectors (which aim to simultaneously transform the data to a new subspace and classify it) and shallow anomaly detectors (which do not transform the data but solely rely on an existing representation). This paper focuses on shallow anomaly detectors to isolate the differences in representations derived from different self-supervision tasks. Evaluating the transformative properties of deep anomaly detectors is out of scope. In this setup, we also assume only normal samples are present in the training set. This is commonly referred to as a "one-class" setting in anomaly detection literature. We describe the anomaly detectors used in our analyses below. For a more detailed overview of anomaly detection techniques, we refer the reader to Ruff et al. [10].
\(k\)**-NN** assumes normal data closely surround other similar samples in the feature space, while anomalies have relatively fewer nearby neighbours. Despite being a simple approach, \(k\)-NN remains competitive in big data instances [11, 12, 13, 3, 14]. \(k\)-NN typically uses features extracted from pre-trained classification neural networks [12, 13, 14] for image-based anomaly detection. However, equivalent neural networks for tabular data do not exist.
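As a concrete reference, a minimal sketch of \(k\)-NN anomaly scoring; our experiments use the faiss library, but scikit-learn is used here for brevity:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_scores(X_train, X_test, k=5):
    """Score each test point by its distance to the k-th nearest neighbour
    among the normal training data (larger distance = more anomalous)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, _ = nn.kneighbors(X_test)
    return dist[:, -1]
```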
**Local outlier factor** (LOF) is a density-based outlier detection method [15]. It compares the local density of a data point against its \(k\)-nearest neighbours. If the point's density is significantly lower, it is deemed anomalous.
**Isolation forest** (iForest) is an ensemble-based algorithm [16]. It uses a set of isolation trees. Each tree aims to isolate the training data into leaves. The tree construction algorithm randomly selects an attribute and a random split inside the attribute's range
until each data point lies in a leaf. Each observation is assigned a score by calculating the length of the root node to the leaf and averaging across the trees. Points with shorter path lengths are considered more unusual, as the algorithm assumes anomalies are easier to isolate.
**One-class support vector machine** (OCSVM) assumes normal data lies in a high-density region [17]. Taking the origin as an anchor in the absence of anomalous data during training, it learns a maximum margin hyperplane that separates most training data from the origin. The algorithm considers a test datum's distance to the learnt hyperplane to classify anomalies. The method classifies a point as an anomaly if it lies on the side of the hyperplane closer to the origin.
**Residual norms** belong to the category of dictionary-based approaches. Dictionary-based approaches assume the building blocks of a feature space can reconstruct normal data but cannot construct anomalies. Methods using dictionaries use either linear or non-linear manifold learning techniques (e.g., principal components analysis or autoencoders) to determine the building blocks [18, 19, 20]. We use the linear principal space approach from Wang et al. [20] for our experiments. This technique achieves state-of-the-art results for out-of-distribution detection on images.
Given \(\mathbf{X}\) as the in-distribution data matrix of training samples, we find the principal subspace \(\mathbf{W}\) from the matrix \(\mathbf{X}^{T}\mathbf{X}\). This subspace is spanned by the eigenvectors associated with the \(D\) largest eigenvalues of \(\mathbf{X}^{T}\mathbf{X}\). We assume anomalies have more variance on the components with smaller explained variance [21]. Therefore, we project \(\mathbf{X}\) onto the subspace spanned by the eigenvectors associated with the _smallest_ eigenvalues (represented by \(\mathbf{W}^{\perp}\)) to encapsulate the residual space and take its norm as the anomaly score:
\[||\mathbf{x}^{W^{\perp}}|| \tag{2}\]
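A minimal sketch of this scoring rule; the right singular vectors of \(\mathbf{X}\) are the eigenvectors of \(\mathbf{X}^{T}\mathbf{X}\), ordered by decreasing eigenvalue:

```python
import numpy as np

def residual_norm_scores(X_train, X_test, D):
    """Norm of the projection onto the residual subspace W_perp (eq. 2),
    i.e., the complement of the top-D principal directions."""
    _, _, Vt = np.linalg.svd(X_train, full_matrices=False)
    W_perp = Vt[D:]                              # smallest-variance directions
    return np.linalg.norm(X_test @ W_perp.T, axis=1)
```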
### An overview of self-supervised learning
Self-supervision approaches devise tasks based on the intrinsic properties of the training data. By exploiting these properties, neural networks hopefully learn about the regularities of the data. Examples include:
**Classifying perturbations**: Each training datum is subject to a perturbation randomly selected from a fixed set, such as rotating the input data [22] or reordering patches in an image [23]. A classification model then learns to predict which perturbation was applied.
**Conditional prediction**. A neural network sees pieces of the input data and learns to complete the remaining parts. Examples include predicting the next word given a portion of a sentence [6] or filling in masked areas of an image [24, 25].
**Clustering**. Under this category, models learn to group semantically similar instances and place them far away from observations representing other semantic categories. \(k\)-means clustering is a classic example that measures similarity in Euclidean space.
More modern techniques learn a similarity metric using neural mappings. One popular loss function that enables this is InfoNCE [26, 27]. InfoNCE takes augmented views of the same data point as positives and learns to group them while pushing away other data points. Augmentations are usually in the form of transformations.
In the case of images, these can involve adding noise, colour jittering, or horizontal flips. However, InfoNCE relies on large batch sizes to enable sufficiently challenging comparisons. Augmentation choices are also vital, as aggressive transformations could remove relevant semantic features.
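A minimal PyTorch sketch of an InfoNCE-style objective; the one-directional form shown here keeps the idea compact (symmetric variants are common), and the temperature value is an assumption:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1[i] and z2[i] are embeddings of two augmented views of sample i;
    the remaining rows of the batch act as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                             # [B, B] similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```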
VICReg [28] attempts to overcome some of the issues of InfoNCE by enforcing specific statistical properties. It encourages augmented views to have a high variance to ensure the neural mapping learns diverse aspects of the data. It also regularises the covariance matrix of the representations. This regularisation ensures the neural mapping covers complementary information across the representation space.
Additional pretext tasks are covered in more detail in Balestriero et al. [4].
### Self-supervised learning and anomaly detection for non-tabular data
Anomaly detection for non-tabular data has benefited from self-supervision. Golan and El-Yaniv [29] show that compared to OCSVMs trained on pixel space, outputs from a convolutional neural network trained to predict image rotations were more reliable for anomaly detection. Mai et al. [2] demonstrate similar findings on text. They show that good anomaly detection performance is achievable by fine-tuning a transformer with a self-supervised objective and using the loss as an anomaly score.
Other successful approaches do not use a self-supervised model in an end-to-end manner for anomaly detection. The works of Sehwag et al. [30] and Tack et al. [31] both extract features from neural networks trained with an InfoNCE objective to perform anomaly detection on images. Sehwag et al. classify anomalies using the Mahalanobis distance on the extracted space, while Tack et al. use a product of cosine similarities and norms.
### Self-supervised learning and anomaly detection for tabular data
Literature covering self-supervision for anomaly detection in tabular data is more limited. GOAD [32] extends the work of Golan and El-Yaniv [29] to a more generalised setting. They apply random affine transformations to the data and train a neural network to predict these transformations. At inference, they apply all possible transformations to the test data, obtain the prediction of each transformation from the network and aggregate the predictions to produce the anomaly score. The network should be able to predict the correct modification with higher confidence for the normal data versus the anomalies.
ICL [33] adapts the InfoNCE objective. It considers one sample at a time. Taking a sample \(\mathbf{x}_{i}\) of dimensionality \(d\), ICL splits \(\mathbf{x}_{i}\) into two parts. The dimensionality of the two parts depends on a given window size, \(k\) (\(k<d\)). The first part \(\mathbf{a}_{i}\) is a continuous section of size \(k\), while the second \(\mathbf{b}_{i}\) is its complement of size \(d-k\). A Siamese neural network containing two heads with dimensionalities \(k\) and \(d-k\) aims to push the representations together. The negatives are other contiguous segments of \(\mathbf{x}_{i}\) of size \(k\). As the neural network should be capable of aligning the normal data and not anomalies, the loss is the anomaly score.
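To make the splitting concrete, a sketch (our own helper, not the authors' code) enumerating the contiguous window/complement pairs that ICL contrasts:

```python
import numpy as np

def window_splits(x, k):
    """All contiguous (window, complement) splits of a d-dimensional sample
    for a window size k < d."""
    d = len(x)
    idx = np.arange(d)
    return [(x[idx[s:s + k]], x[np.setdiff1d(idx, idx[s:s + k])])
            for s in range(d - k + 1)]
```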
Although both methods claim to be state-of-the-art for tabular anomaly detection, Reiss et al. [3] did not find this to be the case. They replicated the pipelines of GOAD and ICL. In addition, they used the trained neural networks of GOAD and ICL as feature extractors. After extracting the features, they ran \(k\)-NN on the new representations. They compared both setups to \(k\)-NN on the original data. Although GOAD and ICL are specifically designed to process tabular data, Reiss et al. found that \(k\)-NN on the original data was the best-performing approach. However, they did not run a hyperparameter search to optimise the choice of \(k\) (leaving it as \(k=5\)). They also used the original architectures designed for GOAD and ICL, which differ from each other. This choice could be another confounding factor affecting results.
## 3 Method
### Datasets
We use 26 multi-dimensional point datasets from Outlier Detection Datasets (ODDS) [34]. Each datum comprises one record, which contains multiple attributes. Table 1 summarises the properties of the datasets.
We follow the data split protocols described in previous tabular anomaly detection literature [32, 33]. We randomly select 50% of the normal data for training, with the remainder used for testing. The test split includes all anomalies. The training split did not use any anomalies as we adopt a one-class setup. We partition the training set further by leaving 20% for validation.
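A sketch of this split protocol, assuming labels with 0 for normal and 1 for anomalous samples:

```python
import numpy as np

def one_class_split(X, y, seed=0):
    """50% of the normals for training (20% of which is held out for
    validation); the test set holds the remaining normals plus all anomalies."""
    rng = np.random.default_rng(seed)
    normal = rng.permutation(np.flatnonzero(y == 0))
    half = len(normal) // 2
    train, test_norm = normal[:half], normal[half:]
    n_val = int(0.2 * len(train))
    val, train = train[:n_val], train[n_val:]
    test = np.concatenate([test_norm, np.flatnonzero(y == 1)])
    return X[train], X[val], X[test], y[test]
```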
### Baseline approach
We run \(k\)-NN, iForest, LOF, OCSVM, and residual norms on the original training data. Even though Reiss et al. [3] only use \(k\)-NN in their experiments, we use multiple detectors to establish whether \(k\)-NN is the best detector or if there are other more appropriate detectors depending on the type of anomalies present. We analyse our findings in Section 4.7. Another anomaly detection study, ADBench [35], follows a similar protocol. However, their setup assumes anomalies are present in the training data. Through our experiments, we establish whether a purely one-class setup affects overall detector ranking. We use scikit-learn [36] to implement all detectors except for \(k\)-NN, which uses the faiss library [37].
We also investigate the detectors' sensitivity to different configurations by varying the hyperparameters. For \(k\)-NN and LOF, we report results for \(k=\{1,2,5,10,20,50\}\). For the residual norms, we look at how results change with a proportion of features, with percentages ranging from 10% to 90% in 10% increments \([10\%,20\%,...,90\%]\). We record our findings in Section 4.7. For the self-supervised tasks, we report the results based on the best hyperparameter configuration derived from these ablations. We retain the default scikit-learn parameters for iForest and OCSVM, which uses a radial basis function kernel.
The detectors run directly on the data and on a standardised version. We standardise each dimension independently by removing the mean and scaling to unit
variance. We also experimented with fully whitening the data but found attribute-wise standardisation rendered similar results.
### Self-supervision
#### 3.3.1 Pretext tasks
Although tabular data lacks overt intrinsic properties like those in images or text, we choose self-supervised tasks that we hypothesise can take advantage of its structure.
Firstly, we adapt ICL [33] and GOAD [32] to use them as pretext tasks. We do not directly implement ICL and GOAD as they score anomalies in an end-to-end manner. In contrast, our experiments focus on how representations from different pretext tasks affect shallow detection performance. Therefore, we refer to the ICL-inspired task as "**EICL**" (embedding-ICL) for the remainder of the paper. As GOAD uses random affine transformations, we can consider this a combination of predicting rotation and stretches. This configuration conflates two different tasks and could be trivial to solve. Therefore, we attempt to align it closer to the RotNet [1, 22] experiments for image-based anomaly detection by training a model to classify orthonormal rotations. This pretext task should profit from the rotationally invariant property of tabular data [38]. Hence we refer to the GOAD-inspired task as "**Rotation**".
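One standard way to build a fixed set of orthonormal rotations for this task is a QR decomposition of Gaussian matrices; a sketch (not necessarily GOAD's construction):

```python
import numpy as np

def random_rotations(d, n, seed=0):
    """n random d x d orthonormal matrices via QR of Gaussian matrices; the
    sign fix makes the factorization unique (and the set reproducible)."""
    rng = np.random.default_rng(seed)
    mats = []
    for _ in range(n):
        Q, R = np.linalg.qr(rng.standard_normal((d, d)))
        mats.append(Q * np.sign(np.diag(R)))
    return mats
```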
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Dataset** & **Total size** & **Number of anomalies (\%)** & **Dimensionality** \\ \hline
Annthyroid & 7,200 & 534 (7.4\%) & 6 \\
Arrhythmia & 452 & 66 (14.6\%) & 274 \\
BreastW & 683 & 239 (35.0\%) & 9 \\
Cardio & 1,831 & 176 (9.6\%) & 21 \\
Glass & 214 & 9 (4.2\%) & 9 \\
Heart & 224 & 10 (4.4\%) & 44 \\
HTTP & 567,469 & 2,211 (0.4\%) & 3 \\
Ionosphere & 351 & 126 (35.8\%) & 33 \\
Letter & 1,600 & 100 (6.3\%) & 32 \\
Lympho & 148 & 6 (4.1\%) & 18 \\
Mammography & 11,183 & 260 (2.3\%) & 6 \\
MNIST & 7,603 & 700 (9.2\%) & 100 \\
Musk & 3,062 & 97 (3.2\%) & 166 \\
Optdigits & 5,216 & 150 (2.9\%) & 64 \\
Pendigits & 6,870 & 156 (2.3\%) & 16 \\
Pima & 768 & 268 (34.9\%) & 8 \\
Satellite & 6,435 & 2,036 (31.6\%) & 36 \\
Satimage-2 & 5,803 & 71 (1.2\%) & 36 \\
Seismic & 2,584 & 170 (6.5\%) & 11 \\
Shuttle & 49,097 & 3,511 (6.6\%) & 9 \\
SMTP & 95,156 & 30 (0.03\%) & 3 \\
Speech & 3,686 & 61 (1.7\%) & 400 \\
Thyroid & 3,772 & 93 (2.4\%) & 6 \\
Vertebral & 240 & 30 (12.5\%) & 6 \\
Vowels & 1,456 & 50 (3.4\%) & 12 \\
WBC & 378 & 21 (5.6\%) & 30 \\
Wine & 129 & 10 (7.7\%) & 13 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Summary of ODDS datasets.
The additional objectives used in the experiments are as follows:
**Predefined shuffling prediction (Shuffle)**: We pick a permutation of the dimensions of the data from a fixed set of permutations and shuffle the order of the attributes based on the selection. The model learns to predict that permutation.
**Predefined mask prediction (Mask classification)**: Given a mask rate \(r\quad(r<d)\), we initialise predefined classes that indicate which attributes to mask. We perform masking by randomly selecting another sample \(\mathbf{x_{j}}\) from the training set and replacing the chosen attributes in \(\mathbf{x_{i}}\) with those from \(\mathbf{x_{j}}\). We follow the protocol outlined in other tabular data literature [39], where they found this approach generated better representations compared to alternative masking strategies like imputation. The model learns to classify which predefined class was applied.
**Masked columns prediction (Mask columns)**: The model picks which attributes were masked given a mask rate \(r\). For example, if only the first attribute was masked, a correct classification should identify the first attribute and should not pick the other attributes. This is different from the mask classification task, where the predefined mask class is given a label from a fixed set of combinations rather than from the particular attribute that has been masked (for example, if there are only two classes, the labels for mask classification are 0 or 1).
**Denoising autoencoding (Autoencoder)**: Given a mask rate \(r\), we perturb \(\mathbf{x_{i}}\) by randomly selecting another sample \(\mathbf{x_{j}}\) and replacing a subset of \(\mathbf{x_{i}}\)'s attributes with those of \(\mathbf{x_{j}}\). The perturbed \(\mathbf{x_{i}}\) is the input. Given this input, the model learns to reconstruct the unperturbed \(\mathbf{x_{i}}\).
**Contrastive learning**: We create positive views of \(\mathbf{x_{i}}\) by rotating the data using an orthonormal matrix **(Contrastive rotation)**, permuting the attributes per the shuffle task **(Contrastive shuffle)**, or masking the attributes per the mask classification task **(Contrastive mask)**. We treat other data points in a minibatch as negatives. We only apply one augmentation at a time to isolate their effects.
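To make the masking-based tasks concrete, the following sketch shows the shared augmentation (attribute replacement from another sample) together with the two kinds of targets; the predefined class patterns and the 25% mask rate are illustrative assumptions:

```python
# Sketch of the masking augmentation: chosen attributes of x_i are replaced
# with the corresponding attributes of a randomly drawn training row x_j.
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(100, d))
# Predefined mask patterns (assumed fixed set, one label per pattern).
mask_classes = [rng.random(d) < 0.25 for _ in range(4)]

def apply_mask(X, i, mask):
    j = rng.integers(X.shape[0])  # random donor sample
    x_aug = X[i].copy()
    x_aug[mask] = X[j, mask]
    return x_aug

label = rng.integers(len(mask_classes))              # target: mask classification
x_aug = apply_mask(X, i=0, mask=mask_classes[label])
column_targets = mask_classes[label].astype(float)   # multi-label target: mask columns
```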
#### 3.3.2 Network architectures and loss functions
We use the same neural network architectures to control for any potential effects on performance. Per the findings of Gorishniy et al. [40], we use ResNets [41] and FT-Transformers which performed well on their tabular data experiments (classification and regression). FT-Transformer is a transformer specially adapted for tabular inputs where each transformer layer operates on the feature level of one datum.
We train both architectures on all objectives except for EICL, where we only use ResNets. As EICL requires specific partitioning of the features, the FT-Transformer architecture would need to be modified. This modification is out of the scope of our experiments. We retain the same architecture (e.g., the number of blocks) for each pretext task and only vary the dimensionality of the output layer. The dimensionality corresponds to the number of preset classes for the rotation, shuffle, and mask classification tasks. The output dimensionality of the autoencoder task mirrors the input dimensionality. For the contrastive objectives (including EICL), we set the output as one of \(\{128,256,512\}\) depending on validation performance.
As previous literature has claimed specialised loss functions can improve out-of-distribution detection on other modalities [42, 43], we examine these to confirm whether they also improve tabular anomaly detection.
For the rotation, shuffle, and mask classification tasks, we use cross-entropy, adversarial reciprocal points learning (ARPL) [42], and additive angular margin (AAM) [44]. ARPL is a specialised loss function for out-of-distribution detection. The probability of a datum belonging to a class is proportional to its distance to a reciprocal point. The point represents "otherness" in the learnt feature space. AAM is a loss function typically used for facial recognition. AAM specifically enforces interclass similarity and ensures interclass separation using a specified margin. This results in more spherical features for each class. We include AAM as some literature claims spherical per-class features make out-of-distribution detection easier [45]. Finally, we incorporate the cross-entropy loss as studies have shown models trained with this loss function can meet or outperform specialised losses like ARPL with careful hyperparameter selection [43]. We experiment with mean squared error and mean absolute error for the autoencoders. We use the binary cross-entropy loss for masked column prediction, as multiple masked columns correspond to more than one label for each datum. For the contrastive objectives, we experiment with both InfoNCE and VICReg.
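As an illustration of the contrastive side, a minimal InfoNCE implementation could look as follows (PyTorch assumed; the temperature value is an arbitrary choice, not the one used in our sweeps):

```python
# Minimal InfoNCE sketch: two augmented views per datum; every other item in
# the minibatch serves as a negative. Positives lie on the diagonal.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```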
We summarise all the possible model configurations in Table 2.
#### 3.3.3 Model selection
Due to the number of potential hyperparameter combinations, we perform random searches to determine the most appropriate models for anomaly detection. We pick hyperparameters randomly and train on the training split for each self-supervised task and dataset. As we cannot evaluate using anomalies, we select models that achieve the lowest loss on the normal validation data.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Anomaly detectors** & **Architectures** & **Self-supervised tasks** & **Loss functions** \\ \hline
\(k\)-nearest neighbours & ResNet & Rotation, Shuffle, Mask classification & Cross-entropy, ARPL, AAM \\
Isolation forest & FT-Transformer & Mask columns & Binary cross-entropy \\
Local outlier factor & & Autoencoder & MSE, MAE \\
One-class support vector machine & & EICL, Contrastive (rotation, shuffle, mask) & InfoNCE, VICReg \\
Residual norms & & & \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Summary of the model configurations.
As we want to analyse the effect of different loss functions and architectures, the hyperparameter sweep stage results in a maximum of twelve configurations for each dataset and task. For example, the models trained on the rotation task include ResNets and FT-Transformers, each trained with the cross-entropy, ARPL, and AAM losses. There are also different configurations for standardised and non-standardised input data.
#### 3.3.4 Feature extraction
After training, we obtain the learnt features by passing input data through the self-supervised models. We extract the features from the penultimate layer. As we fix the architecture for the different tasks, we obtain 128-dimensional embeddings for ResNets and 192-dimensional embeddings for FT-Transformer. We train the anomaly detectors using the new training features and test them using the transformed test features. We do not apply any augmentations during inference to ensure a fair comparison between the self-supervised tasks. Figure 1 shows the workflow.
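A sketch of this extraction step is given below, assuming a trained PyTorch model whose penultimate block is exposed as `model.penultimate` (a placeholder name) and a data loader over the split of interest:

```python
# Sketch of penultimate-layer feature extraction via a forward hook.
import torch

def extract_features(model, loader):
    feats = []
    hook = model.penultimate.register_forward_hook(  # placeholder submodule name
        lambda module, inputs, output: feats.append(output.detach().cpu())
    )
    model.eval()
    with torch.no_grad():
        for xb in loader:  # no augmentations at inference time
            model(xb)
    hook.remove()
    # e.g. 128-dimensional for the ResNets, 192-dimensional for FT-Transformer
    return torch.cat(feats).numpy()
```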
### Evaluation
We evaluate all anomaly detectors using the area under the receiver operating characteristic curve (AUROC) score. We can interpret AUROC as the probability that a randomly selected anomaly is ranked as more abnormal than a randomly selected normal sample. Scores fall between 0% and 100%. A score of 50% indicates the detector cannot distinguish between anomalies and normal data points, while a score of 100% signals perfect anomaly discrimination. We choose AUROC because it does not require setting a detection threshold (for example, to control false positives).
Figure 1: Self-supervised anomaly detection workflow. The data are only augmented and fed through the projector during training.
### Additional ablations
In addition to evaluations with the ODDS dataset, we run more experiments to understand detector performance and scenarios where specific self-supervised objectives may perform better than others.
#### Synthesised anomalies
Although ODDS contains several datasets, the datasets may mix different types of anomalies. These mixes can make it difficult to diagnose why one representation performs better than another. Therefore, we evaluate how the pretext tasks and their learnt representations fare with synthesised anomalies. We keep the normal data in the train and test splits and only generate anomalies by perturbing the properties of the normal training data. We use the four synthetic anomaly categories as defined in ADBench [35, 46]. We use ADBench's code to create all types.
* **Local** anomalies deviate from their local cluster. We use Gaussian mixture models (GMM) to learn the underlying normal distribution. The covariance matrix undergoes scaling by a factor \(\alpha\) to generate the anomalies. We use \(\alpha=2\) in our experiments (see the sketch after this list).
* **Cluster** anomalies use GMMs to learn the normal distribution. A factor \(\beta\) scales the mean feature vector to create the cluster anomalies. We use \(\beta=2\) in our experiments.
* **Global** anomalies originate from a uniform distribution \(U[\delta\cdot\min(\mathbf{X}_{i}^{k}),\delta\cdot\max(\mathbf{X}_{i}^{k})]\). \(\delta\) is a scaling factor, and the minimum and maximum values of an attribute \(\mathbf{X}_{i}^{k}\) define the boundaries. We use \(\delta=0.01\).
* **Dependency** anomalies do not follow the regular dependency structure seen in normal data. We use vine copulas to learn the normal distribution and Gaussian kernel density estimators to generate anomalies.
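As a concrete illustration of the first two categories, a GMM-based generator along the lines described above might look as follows; the component count and seed are arbitrary choices, and this is a simplified sketch of the ADBench procedure rather than its exact code:

```python
# Sketch of the GMM-based local and cluster anomaly generators (after ADBench):
# local anomalies inflate the component covariance by alpha; cluster anomalies
# shift the component mean by beta.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_anomalies(X_normal, n_anom, kind="local", alpha=2.0, beta=2.0,
                  n_components=3, seed=0):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(X_normal)
    rng = np.random.default_rng(seed)
    comps = rng.choice(n_components, size=n_anom, p=gmm.weights_)
    out = np.empty((n_anom, X_normal.shape[1]))
    for i, c in enumerate(comps):
        mean, cov = gmm.means_[c], gmm.covariances_[c]
        if kind == "local":
            out[i] = rng.multivariate_normal(mean, alpha * cov)
        else:  # "cluster"
            out[i] = rng.multivariate_normal(beta * mean, cov)
    return out
```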
#### Corrupted input data
Previous work hypothesises neural networks underperform on tabular classification and regression because of their rotational invariance and lack of robustness to uninformative features [38]. We investigate if this occurs for anomaly detection. Simultaneously, we explore the shallow anomaly detectors' sensitivity to corrupted attributes. Understanding these results can give a practical insight into what self-supervision objectives and anomaly detectors work best when the data is noisy or incomplete. For our ablations, we follow Grinsztajn et al. [38] and apply the following corruptions to the raw data:
1. **Adding uninformative features**: We add extra attributes to \(\mathbf{X}\). We select a subset of attributes to imitate. We then generate features by sampling from a multivariate Gaussian based on the mean and interquartile range of the subset's values. We experiment with different proportions of additional features and limit the maximum number of extra attributes to be no greater than the existing number of features in the dataset (see the sketch after this list).
2. **Missing values**: We randomly remove a proportion of the entries and replace the missing values using the mean of the attribute the value belongs to. We apply this transformation to both the train and test sets.
3. **Removing important features**: We train a random forest classifier to classify between normal samples and anomalies. We then drop a proportion of attributes based on the feature importance values output by the random forest, starting from the least important. This corruption violates the one-class assumption within our anomaly detection setup. However, we use this to analyse the robustness of the detectors and self-supervised models.
4. **Selecting a subset of features**: Similar to (3), we train a random forest classifier. We choose a proportion of attributes based on the feature importance values output from the random forest, starting from the most important.
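Sketches of the first two corruptions are given below; the proportions and the per-attribute Gaussian approximation are simplifying assumptions:

```python
# Corruption sketches: (1) append noise features imitating the mean and
# interquartile range of randomly chosen attributes; (2) mean-impute a random
# proportion of missing entries.
import numpy as np

rng = np.random.default_rng(0)

def add_uninformative(X, prop=0.5):
    n_extra = int(prop * X.shape[1])
    cols = rng.choice(X.shape[1], size=n_extra)
    mu = X[:, cols].mean(axis=0)
    p75, p25 = np.percentile(X[:, cols], [75, 25], axis=0)
    noise = rng.normal(mu, np.maximum(p75 - p25, 1e-6),
                       size=(X.shape[0], n_extra))
    return np.hstack([X, noise])

def impute_missing(X, prop=0.2):
    X = X.copy()
    mask = rng.random(X.shape) < prop
    col_means = X.mean(axis=0)
    X[mask] = np.take(col_means, np.nonzero(mask)[1])
    return X
```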
After corrupting the data, we follow the same process of training the self-supervised models and feature extraction for the neural network experiments.
## 4 Results
### Self-supervision results
**No self-supervision task outperforms the baseline**. Figure 2 summarises the nearest neighbour performance derived from each embedding approach. No self-supervision task exceeds \(k\)-NN on the raw tabular data. When comparing results at a pairwise level, Figure 3 shows that the baseline scores greatly outrank the self-supervised objectives. Similarly, performance using the self-supervised embeddings drops in the presence of corrupted data (Appendix, Figure A1). These results extend the findings in [38] that neural networks are also more sensitive to corrupted attributes in the anomaly detection task. When excluding the baseline, the classification-based tasks (shuffle, mask classification, and rotation) outperform their contrastive and reconstructive counterparts.

Figure 2: Box plot comparing nearest neighbour AUROCs for each of the embeddings, ordered by median performance. For each self-supervised task, we filter the results by architecture and loss function to include the embedding with the best-performing results.
We observe similar results when we use different shallow detectors to perform anomaly detection (Figure 4), with one exception. Using residual norms on the embedding space is a better choice than \(k\)-NN. However, they still lag behind \(k\)-NN scores on the original embeddings. We also observe that OCSVM performs consistently worse across all tasks.

Figure 3: Critical difference diagram comparing the embeddings in a pairwise manner. The horizontal scale denotes the average rank of each embedding. The dark lines between different detectors indicate a statistical difference (\(p<0.05\)) in results when running pairwise comparison tests. The baseline scores greatly outrank the pretext tasks. In contrast, the scores among the pretext tasks are more closely aligned.

Figure 4: Box plot comparing detector performance on the self-supervised embeddings.
### A case study on HTTP
To understand why self-supervision does not help, we will explore one ODDS dataset in detail. We proceed to test our reasoning on toy datasets and then analyse the remaining ODDS datasets.
We use _HTTP_ for our analyses. _HTTP_ is a modified subset of the KDD Cup 1999 competition data [47]. The competition task involved building a detector to distinguish between intrusive (attack) and typical network connections. The dataset initially contained 41 attributes from different sources, including HTTP, SMTP, and FTP. The ODDS version only uses the "service" attribute from the _HTTP_ information as it is considered one of the most basic features. The resulting subset is three-dimensional and comprises over 500,000 observations. Out of these samples, 2,211 (0.4%) are attacks.
It is easy to find attacks when running detectors directly on the raw ODDS variant of _HTTP_. In our experiments, all shallow methods achieve AUROCs between 87.9% and 100% on non-standardised data, with the median score being 99.7%. Further investigations show the attacks are separate from typical connections. A supervised logistic regression model trained to classify the two classes achieves 99.6% AUROC, even with only 200 sample anomalies for training.
However, we observe peculiar results when using representations devised from the pretext tasks for _HTTP_. \(k\)-NN performance drops drastically across the majority of tasks (Figure 5), sometimes yielding scores worse than random. Conversely, the other detectors maintain their performance. For example, when extracting features from the rotation task1, \(k\)-NN obtains 71.8% AUROC, while iForest, OCSVM, and residual norms preserve AUROCs around 99%. In addition, logistic regression continues to classify anomalies with 99% AUROC in the supervised setting using the rotation task representations. As \(k\)-NN is susceptible to the curse of dimensionality, these initial results suggest the neural network representation introduces directions that obscure informative distances between the typical and intrusive samples. Moreover, as iForest uses a splitting strategy for detection, its consistent results indicate _some_ direction signalling anomalousness exists.

Figure 5: Bar chart comparing baseline and self-supervised embedding results on HTTP.
Footnote 1: Using the best-performing rotation model, which is an FT-Transformer trained with ARPL loss.
### Toy data analysis
It can be challenging to draw conclusions based on existing datasets, as they are large and often contain uninterpretable features. Therefore, we pivot to toy examples to understand these behaviours. We devise nine two-dimensional toy datasets of varying difficulty (Appendix, Figure A2). Like the experiments on the ODDS, we first evaluate performance directly on the two-dimensional representations. We then train ResNets on a two-class rotation prediction task, extract features from the penultimate embedding and re-run the detectors on the new space. We use this setting as rotations can be performed on two-dimensional data, and ResNets require less compute than the FT-Transformers. We apply the same architecture as the ODDS experiments, making the extracted features 128-dimensional.
Regardless of whether the network can or cannot identify the rotation applied to the data, we observe behaviours consistent with ODDS in most toy instances. Compared to the original two-dimensional results, detection performance drops for almost all detectors after extracting representations from the ResNets. As two dimensions are sufficient to capture the characteristics of the datasets, projecting the data to a 128-dimensional space only results in a stretched and narrow representation without extra information. The t-SNE plots highlight this activity. We show an example of the multiple Gaussian dataset in Figure 6.

Figure 6: Visualisations of the multiple Gaussian toy dataset. Light blue are the normal data and orange are the anomalies. The features extracted from the neural network appear to be more narrow (b) and stretched compared to their original 2D representation (a).
We project the embeddings extracted from the ResNets to a lower dimensional space using the residual eigenvectors from the training data to verify whether the curse of dimensionality affects performance. We conduct this projection because the residual norm method outperforms \(k\)-NN in the self-supervised experiments. Therefore, we hypothesise that projecting to a smaller space should reduce the distracting influence of the primary principal components. Consequently, running shallow detectors in this new space should garner improvements. We discard half of the directions for the toy experiments to form 64-dimensional embeddings. The anomaly detectors perform better in this new space (Figure 7), corroborating the view that the neural network embeddings introduce irrelevant directions.

Figure 7: Nearest neighbour performance on the toy datasets. The raw embedding (blue) is the best in almost all instances. However, the self-supervision embeddings (orange) improve when projecting to a lower dimensional space (green).
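The projection itself is simple; a minimal sketch is given below, where `E_train` and `E_test` stand for the embedding matrices and the kept fraction is the experiment's choice:

```python
# Sketch of the residual projection: fit principal directions on the normal
# training embeddings, drop the leading ones, and keep the residual subspace.
# The norm in the residual subspace doubles as the residual-norm anomaly score.
import numpy as np

def residual_projection(E_train, E_test, keep_frac=0.5):
    mu = E_train.mean(axis=0)
    # Rows of Vt are principal directions, ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(E_train - mu, full_matrices=False)
    n_drop = Vt.shape[0] - int(keep_frac * Vt.shape[0])
    V_res = Vt[n_drop:].T                    # smallest principal directions
    Z_train = (E_train - mu) @ V_res
    Z_test = (E_test - mu) @ V_res
    residual_norm_score = np.linalg.norm(Z_test, axis=1)
    return Z_train, Z_test, residual_norm_score
```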
We can also use the toy scenarios to attempt to understand the behaviour of the detectors such as OCSVM. Our experiments suggest OCSVM fails when anomalies lie in the centre of the normal data. For example, the AUROC for OCSVM trained on the raw ring data signalled random performance at 50%, whereas \(k\)-NN could detect the anomalies perfectly.
### Analysing ODDS embeddings
We now proceed to run ablations on ODDS. Previous studies have shown that supervised classification performance correlates highly with out-of-distribution detection performance [43]. Therefore, we train linear classifiers on the self-supervised and original representations and compare classification performance. If there is a drop in performance on the self-supervised embeddings, the results would suggest the neural networks transform the data in a way that mixes anomalies with the normal samples. We could consequently attribute the poor self-supervised performance to this mixing rather than the presence of irrelevant directions. Figure 8a illustrates classification scores on the raw data while Figure 8b depicts the mean difference between the raw and self-supervised classification performances. Except for EICL, the differences are negligible. We can rule out the mixing effect and conclude that self-supervision generally does not affect the separability between the two classes.
Figure 8: Supervised linear classification results (normal versus anomaly) on raw data (a) and supervised classification comparisons against the self-supervised embeddings (b).
We now investigate the residual space of the embeddings by extending the toy dataset analyses to ODDS. We take the smallest eigenvalues (from 1% to 90% in 10% increments) to project the neural network embeddings to their residual representations. We proceed to re-run the shallow anomaly detectors in the new space. Figure 9 shows the results, aggregating both ResNet and FT-Transformer scores. We observed similar behaviour across the two architectures. Reducing the dimensionality indeed boosts performance. Throwing away the top 10% of principal components helps, although throwing away the top 90% also performs similarly. This observation aligns with previous findings that show residual directions capture information important for out-of-distribution detection [21]. The magnitude of normal data is minute in this space which is not necessarily the case for anomalies. Based on these results, we do not need complete neural network representations to perform anomaly detection. A subset suffices.
### Synthetic anomalies
Anomaly detection depends on two factors: the nature of the normal data and the nature of anomalies. Both classes can originate from complex, irregular distributions. These aspects make it difficult to pinpoint the causes of results on ODDS and other curated datasets. We attempt to disentangle these factors by analysing performance on synthetic anomalies. The anomalies curated in ODDS are a composite of these types. We calculated the correlation between the ODDS and the synthetic anomaly scores and found that the datasets exhibited correlations between multiple synthetic categories, highlighting the complex qualities of the anomalies. For example, when analysing the raw data representations, \(k\)-NN on the curated _Letter_ anomalies correlates strongly with local (\(\rho=0.84\)), global (\(\rho=0.49\)), and dependency (\(\rho=0.94\)) anomaly scores.
Figures 10a to 10d show the results across the four synthetic types. We show comparisons using \(k\)-NN as we found similar behaviours across the detectors. The contrastive objectives outperform the baseline in the local (Figure 10a) and cluster anomaly (Figure 10b) scenarios. This result suggests contrastive tasks are better at discerning differences at a local neighbourhood level.

Figure 9: Ablation study showing how shallow detector results vary with subspace dimensionality.
No self-supervised approach beats the baseline when faced with global anomalies (Figure 10c). This result contributes to the idea that self-supervised representations introduce irrelevant directions. Since the global anomalies scatter across the representation space, these additional directions mask the meaningful distances between the anomalies and normal points. As a result, methods like \(k\)-NN become less effective. In addition, the ranking of the self-supervised tasks aligns most closely with their rankings on ODDS (Figure 13), which potentially highlights the overall properties of the ODDS datasets.
For the dependency anomalies, rotation and mask classification surpass the baseline (Figure 10d). Conversely, contrastive tasks perform the worst. Using a rotation or mask classification pretext task could help promote the intrinsic property that tabular data are not invariant to such transformations, which may help identify this type of anomaly.
### Architectural choices for self-supervision
We analyse the effects of architectures and loss functions on performance to provide starting points for improving deep learning methods for tabular anomaly detection. We illustrate the results using \(k\)-NN as we observe similar behaviours across detectors.
Figure 10: Bar plots comparing synthetic anomaly results across the representations.
**ResNets outperform transformers**. Our experiments indicate ResNets are a better choice than FT-Transformer (Figure 11a). This result may be due to transformers needing more training data during the learning phase [48] - the ODDS datasets are relatively small.

**Standardisation is not necessary**. Standardising data before training neural networks does not offer much benefit (Figure 11b).

**ARPL is a better choice for classification-type losses**. ARPL significantly outperforms cross-entropy and AAM when training classification-type tasks (Figure 11c). Specialised losses like ARPL might represent "other" spaces better in the context of smaller datasets.

**InfoNCE is better than VICReg for contrastive-type losses**. This result (Figure 11d) may be due to the intricacies of VICReg, which requires balancing three components (pair similarity, variance and covariance).
Figure 11: Comparisons of how architecture and losses affect performance on the self-supervised embeddings.
### Benchmarking unsupervised anomaly detection
Finally, we compare the performance of each of the detectors overall to see how well they perform in one-class settings. We aggregate results across the baseline and self-supervised embeddings to provide a more generalised understanding of detector behaviour.
Figures 12 and 13 summarise the overall performances of each anomaly detector on ODDS. Even with the inclusion of self-supervised representations, \(k\)-NN performs best.
Figure 12: Box plot comparing detector performance on the raw and standardised data. The results include all hyperparameter variations where available.
Figure 13: Critical difference diagram ranking the different detectors.
#### 4.7.1 Hyperparameter ablations
We now examine the sensitivity of the detectors to changes in hyperparameters. We conducted these experiments directly on the raw ODDS data only, to understand detector performance in an optimal representation space. These results enable a better understanding of the detectors' inductive biases and of why performance may deteriorate in suboptimal self-supervised representations.
\(k\)**-NN**: Figure 14 shows performance remains relatively stable to changes in \(k\), suggesting the choice of this hyperparameter is trivial. As \(k\)-NN considers global relationships, this result indicates that anomalies already lie in distinct regions separate from the normal raw data.
**LOF**: Figure 15 illustrates how LOF performance changes with \(k\). Although LOF and \(k\)-NN consider points in a neighbourhood, LOF is more sensitive to the number of neighbours (as evidenced by the increase in performance when \(k=1\) and \(k=5\) for LOF). However, it is unclear how to choose a value of \(k\) so that LOF is competitive with the other detectors in the one-class setting.
Figure 14: Line plot showing how \(k\)-NN varies with the change in the number of nearest neighbours, aggregated across the ODDS datasets.
Figure 15: Line plot showing how LOF varies with the change in the number of nearest neighbours, aggregated across the ODDS dataset.
**Residual norms**: Figure 16 shows how performance varies with the percentage of attributes used. There are no notable trends, although performance remains better than random, even with a small subset (10%) of features. The number of relevant attributes in the original representation space is dataset-dependent as ODDS contains datasets from differing tasks. It is unclear how to choose the number of features to maximise the performance of residual norms in the original dataset space.
#### 4.7.2 Corrupted input data
**Adding uninformative features**: All detectors are sensitive to irrelevant features (Figure 17a). Although residual norms do not achieve the highest performance, they are more stable under increasing noise levels. This result may be due to the residuals capturing the most meaningful directions of the data. In contrast, \(k\)-NN performance declines the most.
**Removing and selecting important features**: Overall, performance plateaus at around 50% of attributes, suggesting half of the raw features are irrelevant for anomaly detection. iForest and OCSVM are the most stable under varying subsets of features (Figures 17b and 17c).
**Missing values**: Most detectors exhibit a slight decline in AUROC with increasing proportions of missing values (Figure 17d). LOF is the exception, as performance drops significantly.
Overall, the results indicate \(k\)-NN is the best-performing detector when faced with clean and relevant features. However, the relative ranking of detectors changes in the presence of corrupted input data. As observed in our self-supervised results (Section 4.4), residual norms might be better at filtering out noisy directions. Furthermore, when there are fewer relevant features, iForest may be a better choice.
Figure 16: Line plot showing how residual norm varies with the change in residual dimensionality, aggregated across the ODDS dataset.
## 5 Conclusion
### Limitations and future work
We limited our experiments to the ODDS, which is not necessarily representative of all tabular anomaly datasets. Several datasets underwent preprocessing during the curation of ODDS, which could affect results. For example, the values in _HTTP_ were log-transformed. In addition, the datasets are relatively small. As neural networks (particularly transformers) benefit from large amounts of data [48], it is unclear if self-supervision would be more advantageous in the big data case.
We also isolated our analyses by extracting embeddings at the penultimate layer and running shallow anomaly detection algorithms. Although feature extraction at this stage combined with simple detectors is a popular strategy [11, 13, 14, 30], different parts of the neural network could provide more informative features [19]. We chose shallow detectors to prioritise studying the effect of representations rather than the detection approach. In addition, the original implementations of ICL and GOAD evaluate anomalies using an entire neural network pipeline and use specific architectures for the tasks; adapting these implementations as pretext tasks with different architectures deviates from the original setup and could affect performance. Future work could extend the experiments to examine how varying pretext tasks with deep anomaly detection can yield better results [49]. Studies focusing on improving deep tabular anomaly detectors could also examine regularisation strategies: our experiments suggest neural networks add irrelevant features, so regularisation during training could help control this behaviour.
### Summary
We trained multiple neural networks on various self-supervised pretext tasks to learn new representations for ODDS, a series of tabular anomaly detection datasets. We ran a suite of shallow anomaly detectors on the new embeddings and compared the results to the performance of the original data. None of the self-supervised representations outperformed the raw baseline.
We conducted ablations to try to understand this behaviour. Our empirical findings suggested that neural networks introduce irrelevant features, which degrade detector capability. As normal and anomalous data were easily distinguishable in the original tabular representations, neural networks merely stretched the data without introducing any additional information. However, we demonstrated performance was recoverable by projecting the embeddings to a residual subspace.
As the anomalies from ODDS derive from complex distributions, we repeated the experiments on synthetic data to understand the pretext tasks' influence on detecting particular anomaly types. We showed in specific scenarios that self-supervision can be beneficial. Contrastive tasks were better at picking up localised anomalies, while classification tasks were better at identifying differences in dependency structures.
Finally, we studied different shallow detectors by aggregating performances across the baseline and self-supervised representations. We showed that localised methods like \(k\)-NN and LOF worked best on ODDS but were susceptible to performance degradation with corrupted data. In contrast, iForest was more robust. Our findings provided practical insights into when one detector might be preferable to another.
Overall, our findings suggest current deep learning approaches do not add much benefit when the original feature space succinctly represents the normal data. This situation is often the case for tabular data, and we demonstrated this by showing performance degrades when removing features in the original space. If the feature space did not succinctly represent the normal data, we would not observe such large degradations. This setup differs from other domains. For example, pixels in images contain lots of semantically irrelevant information. Therefore, neural networks can distil information from pixels to extract useful semantic features and self-supervision is beneficial.
## 6 Data availability
Publicly available datasets were analysed in this study. The ODDS datasets are accessible from [https://odds.cs.stonybrook.edu/](https://odds.cs.stonybrook.edu/).
**Fig. A2**: Illustrations of the toy test data. Blue points are normal whereas orange points are anomalous. |
2301.13604 | Nonlinearities in Macroeconomic Tail Risk through the Lens of Big Data Quantile Regressions | Modeling and predicting extreme movements in GDP is notoriously difficult and the selection of appropriate covariates and/or possible forms of nonlinearities are key in obtaining precise forecasts. In this paper, our focus is on using large datasets in quantile regression models to forecast the conditional distribution of US GDP growth. To capture possible non-linearities, we include several nonlinear specifications. The resulting models will be huge dimensional and we thus rely on a set of shrinkage priors. Since Markov Chain Monte Carlo estimation becomes slow in these dimensions, we rely on fast variational Bayes approximations to the posterior distribution of the coefficients and the latent states. We find that our proposed set of models produces precise forecasts. These gains are especially pronounced in the tails. Using Gaussian processes to approximate the nonlinear component of the model further improves the good performance, in particular in the right tail. | Jan Prüser, Florian Huber | 2023-01-31T13:02:59Z | http://arxiv.org/abs/2301.13604v2 | # Nonlinearities in Macroeconomic Tail Risk through the Lens of Big Data Quantile Regressions
###### Abstract
Modeling and predicting extreme movements in GDP is notoriously difficult and the selection of appropriate covariates and/or possible forms of nonlinearities are key in obtaining precise forecasts. In this paper, our focus is on using large datasets in quantile regression models to forecast the conditional distribution of US GDP growth. To capture possible non-linearities we include several nonlinear specifications. The resulting models will be huge dimensional and we thus rely on a set of shrinkage priors. Since Markov Chain Monte Carlo estimation becomes slow in these dimensions, we rely on fast variational Bayes approximations to the posterior distribution of the coefficients and the latent states. We find that our proposed set of models produces precise forecasts. These gains are especially pronounced in the tails. Using Gaussian processes to approximate the nonlinear component of the model further improves the good performance in the tails.
**JEL**: C11, C32, C53
**KEYWORDS**: Growth at risk, quantile regression, global-local priors, non-linear models, large datasets.
## 1 Introduction
Modeling and predicting the conditional distribution of output growth has attracted considerable academic attention in recent years. Starting at least with the influential paper by Adrian et al. (2019), focus has shifted towards analyzing whether there exist asymmetries between a predictor (in their case financial conditions) and output growth across different quantiles of the empirical distribution. Several other papers (Adrian et al., 2018; Ferrara et al., 2019; Gonzalez-Rivera et al., 2019; Delle Monache et al., 2020; Plagborg-Moller et al., 2020; Reichlin et al., 2020; Figueres and Jarocinski, 2020; Adams et al., 2021; Mitchell et al., 2022) have started to focus on modeling full predictive distributions using different approaches and information sets. However, most of these contributions have been confined to models which exploit small datasets and, at least conditional on the quantile analyzed, assume linear relations between GDP growth and the predictors.2
Footnote 2: A recent exception is Kohns and Szendrei (2021) who estimate large-scale quantile regressions and then apply ex-post sparsification to sharpen predictive inference.
Times of economic stress such as the global financial crisis (GFC) or the Covid-19 pandemic have highlighted that exploiting information contained in many time series and allowing for nonlinearities improves predictive performance in turbulent periods (see, e.g., Huber et al., 2023). Since economic dynamics change in volatile economic regimes, models that control for structural breaks, allow for different effects of economic shocks over time, or imply nonlinear relations between GDP growth and its predictors often excel in forecasting applications (see D'Agostino et al., 2013; Carriero et al., 2016; Adrian et al., 2021; Clark et al., 2022; Pfarrhofer, 2022; Huber et al., 2023). Another important empirical regularity is that the set of relevant predictors might change over time. This is because variables which are seemingly unimportant in normal periods (such as financial conditions) play an important role in recessions and yield important information on the future behavior of output growth.
This discussion highlights that the effect of predictors on output growth depends on the quantile under consideration and thus appears to be state dependent and modeling the transition might call for nonlinear econometric models. The key challenge, however, is to identify the different determinants of GDP growth across quantiles while taking possible nonlinearities into account. In this paper, we aim to solve these issues by proposing a Bayesian quantile regression (QR) which can be applied to huge information sets, and which is capable of capturing nonlinearities of unknown form. Our model is a standard QR model that consists of two parts.
The first assumes a linear relationship between the covariates and quantile-specific GDP growth whereas the second component assumes an unknown and possibly highly nonlinear relationship between the two. The precise form of nonlinearities is captured through three specifications. One is parametric and based on including polynomials up to a certain order, whereas the remaining two are nonparametric. Among these nonparametric specifications we include B-splines (see Shin et al., 2020) and Gaussian processes (see Williams and Rasmussen, 2006). Both have been shown to work well when it comes to function estimation and forecasting.
The combination of a linear and nonlinear term implies that the dimension of the parameter space increases substantially. Since all these models can be cast in terms of a linear regression conditional on appropriately transformed covariates, we can use regularization techniques to decide on whether more flexibility is necessary and which variables should enter the model. We achieve this through several popular shrinkage priors that have excellent empirical properties in large dimensions and are relatively easy to implement. These shrinkage priors enable us to select promising subsets of predictors and the degree of nonlinearities for each quantile separately.
Posterior inference using Markov Chain Monte Carlo (MCMC) techniques in these dimensions proves to be an issue because we have to estimate a large-scale regression model for all quantiles of interest. This procedure needs to be repeated a large number of times if we wish to carry out an out-of-sample forecasting exercise. To reduce the computational burden enormously we estimate the QRs using Variational Bayes (VB).3 This estimation strategy approximates the exact full conditional posterior distributions with simpler approximating distributions. These approximating densities are obtained by minimizing the Kullback-Leibler (KL) distance between some known density \(q\) and the exact posterior distribution \(p\). Hence, integration in huge dimensions is replaced by a simpler optimization routine. Our approach is fast and allows for computing all results of our forecasting exercise without the use of high performance computing environments.
Footnote 3: For an introduction, see Blei et al. (2017); an algorithm for QRs is provided in Bufrei (2019).
We apply our techniques to the large dimensional FRED-QD dataset (McCracken and Ng, 2016) and focus on single and multi-step-ahead forecasting of US GDP growth over a hold-out period ranging from 1991Q2 to 2021Q3. The different nonlinear models we consider are high dimensional and feature up to around 1,000 coefficients per equation.
The empirical results can be summarized as follows. Using huge information sets and nonlinear models in combination with priors that introduce substantial shrinkage pays off for tail
forecasts. In both tails, forecast improvements relative to the small-scale QR model developed in Adrian et al. (2019) are sizable. When we focus on the center of the distribution the differences become smaller. Once we allow for nonlinearities we find modest improvements in predictive accuracy. Comparing the different nonlinear specifications reveals that Gaussian processes offer the largest improvements vis-a-vis the linear QR. This indicates that a successful tail forecasting model should be able to extract important information from huge datasets, while controlling for possibly nonlinear relations. When we focus on the key properties of the proposed priors we observe that priors that imply a dense model (characterized by many small coefficients) yield good tail forecasts.
The paper is structured as follows. The next section introduces the general QR and the scale-location mixture representation to cast the model in terms of a standard generalized additive regression with auxiliary latent variables. We then focus on the different priors used, provide additional details on the nonlinear components of the models, briefly discuss VB, outline how we estimate the posterior distributions of the parameters and latent quantities, and illustrate the computational properties of our approach. Section 3 discusses our empirical findings. The final section summarizes and concludes the paper. An Online Appendix includes additional technical details, empirical results and more precise information on the used dataset.
## 2 Bayesian analysis of general QRs
### The likelihood function
In this paper, our goal is to model the dependence between the \(\mathfrak{q}^{th}\) quantile of GDP growth \(y_{t}\) and a panel of \(K\) predictors in \(\{\mathbf{x}_{t}\}_{t=1}^{T}\) with \(K\) being huge. The covariates include a wide range of macroeconomic and financial indicators. Possible nonlinearities between \(y_{t}\) and \(\mathbf{x}_{t}\) are captured through a function \(g_{\mathfrak{q}}(\mathbf{x}_{t})\), with \(g_{\mathfrak{q}}:\mathbb{R}^{K}\rightarrow\mathbb{R}\). The fact that \(K\) is large and the inclusion of nonlinear functions of \(\mathbf{x}_{t}\) implies that the number of parameters is large relative to the number of observations \(T\).
Our workhorse model is the QR developed in Koenker and Bassett (1978). As opposed to the standard QR, our model decomposes the \(\mathfrak{q}^{th}\) conditional quantile function \(\mathcal{Q}_{\mathfrak{q}}(y_{t})\) into a linear and a nonlinear part and combines them with a non-standard error distribution:
\[y_{t}=\mathbf{x}_{t}^{\prime}\mathbf{\beta}_{\mathfrak{q}}+g_{\mathfrak{q}}(\mathbf{x}_{t })+\varepsilon_{t}, \tag{1}\]
where \(\boldsymbol{\beta}_{\mathfrak{q}}\) is a \(K-\)dimensional vector of quantile-specific regression coefficients and \(\varepsilon_{t}\) is a shock term with density \(f_{\mathfrak{q}}\) such that the \(\mathfrak{q}^{th}\) quantile equals zero:
\[\int_{-\infty}^{0}f_{\mathfrak{q}}(\varepsilon_{t})d\varepsilon_{t}=\mathfrak{ q}.\]
Conditional on the quantile, this model resembles a generalized additive model (GAM), see Hastie and Tibshirani (1987).
We approximate \(g_{\mathfrak{q}}(\boldsymbol{x}_{t})\) using nonlinear transformations of \(\boldsymbol{x}_{t}\):
\[g_{\mathfrak{q}}(\boldsymbol{x}_{t})\approx\sum_{m=1}^{M}\gamma_{\mathfrak{q}m}z_{m}(\boldsymbol{x}_{t})=\boldsymbol{z}_{t}^{\prime}\boldsymbol{\gamma}_{\mathfrak{q}} \tag{2}\]
where \(\boldsymbol{\gamma}_{\mathfrak{q}}=(\gamma_{\mathfrak{q}1},\ldots,\gamma_{\mathfrak{q}M})^{\prime}\), \(\boldsymbol{z}_{t}=(z_{1}(\boldsymbol{x}_{t}),\ldots,z_{M}(\boldsymbol{x}_{t}))^{\prime}\) and \(z_{m}(\boldsymbol{x}_{t})\) denotes a basis function that depends on \(\boldsymbol{x}_{t}\), with \(\gamma_{\mathfrak{q}m}\) denoting the corresponding basis coefficient. This basis function depends on the specific approximation model used to infer the nonlinear effects and our additive representation nests models commonly used in the machine learning literature (such as Gaussian processes, splines, neural networks but also more traditional specifications such as time-varying parameter models). We will discuss the precise specification of \(z_{m}\) (and thus \(\boldsymbol{z}_{t}\)) in more detail in Sub-section 2.3. Here it suffices to note that depending on the specification, \(M\) could be very large. For instance, in the Gaussian process case, \(M=T\) and thus the number of regression coefficients would be \(K+T\).
If \(f_{\mathfrak{q}}\) remains unspecified, estimation of \(\boldsymbol{\beta}_{\mathfrak{q}}\) and \(\boldsymbol{\gamma}_{\mathfrak{q}}\) is achieved by solving the following optimization problem:
\[(\hat{\boldsymbol{\beta}}_{\mathfrak{q}},\hat{\boldsymbol{\gamma}}_{\mathfrak{q}})=\operatorname*{arg\,min}_{\{\boldsymbol{\beta}_{\mathfrak{q}},\boldsymbol{\gamma}_{\mathfrak{q}}\}}\sum_{t=1}^{T}\rho_{\mathfrak{q}}(y_{t}-\boldsymbol{x}_{t}^{\prime}\boldsymbol{\beta}_{\mathfrak{q}}-\boldsymbol{z}_{t}^{\prime}\boldsymbol{\gamma}_{\mathfrak{q}}),\]
with \(\rho_{\mathfrak{q}}(l)=l[\mathfrak{q}-\mathbb{I}(l<0)]\) denoting the loss function. This optimization problem is straightforward to solve but, if \(K+M\) is large, regularization is necessary. This motivates a Bayesian approach to estimation and inference.
From a Bayesian perspective, carrying out posterior inference requires the specification of a likelihood and suitable priors. Following Yu and Moyeed (2001) we assume that the shocks \(\varepsilon_{t}\) follow an asymmetric Laplace distribution (ALD) with density:
\[f_{\mathfrak{q}}(\varepsilon_{t})=\mathfrak{q}(1-\mathfrak{q})\exp{(-\rho_{ \mathfrak{q}}(\varepsilon_{t}))}.\]
The key thing to notice is that the \(\mathfrak{q}^{th}\) quantile equals zero and the parameter \(\mathfrak{q}\) controls the skewness of the distribution. Kozumi and Kobayashi (2011) show that one can introduce auxiliary latent quantities to render the model with ALD distributed shocks conditionally Gaussian. This is achieved by exploiting a scale-location mixture representation (West, 1987):
\[\varepsilon_{\mathfrak{q}t}=\theta_{\mathfrak{q}}\nu_{\mathfrak{q}t}+\tau_{\mathfrak{q}}\sqrt{\sigma_{\mathfrak{q}}\nu_{\mathfrak{q}t}}u_{t},\qquad\theta_{\mathfrak{q}}=\frac{1-2\mathfrak{q}}{\mathfrak{q}(1-\mathfrak{q})},\quad\tau_{\mathfrak{q}}^{2}=\frac{2}{\mathfrak{q}(1-\mathfrak{q})},\quad\nu_{\mathfrak{q}t}\sim\mathcal{E}\left(\frac{1}{\sigma_{\mathfrak{q}}}\right),\quad u_{t}\sim\mathcal{N}(0,1),\]
where \(\mathcal{E}\left(\frac{1}{\sigma_{\mathfrak{q}}}\right)\) denotes the exponential distribution and \(\sigma_{\mathfrak{q}}\) is a scaling parameter. Hence, conditional on knowing \(\boldsymbol{\nu}_{\mathfrak{q}}=(\nu_{\mathfrak{q}1},\ldots,\nu_{\mathfrak{q}T})^{\prime}\), \(\theta_{\mathfrak{q}}\), \(\tau_{\mathfrak{q}}\), \(\sigma_{\mathfrak{q}}\) and appropriately selecting \(g_{\mathfrak{q}}\), the model is a linear regression model with response \(\hat{y}_{t}=y_{t}-\theta_{\mathfrak{q}}\nu_{\mathfrak{q}t}\) and Gaussian shocks that are conditionally heteroskedastic. This conditional likelihood will form the basis of our estimation strategy.
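A quick numerical sanity check of this mixture representation (with illustrative values of \(\mathfrak{q}\) and \(\sigma_{\mathfrak{q}}\)) confirms that draws generated in this way have their \(\mathfrak{q}^{th}\) quantile at (approximately) zero:

```python
# Simulate ALD draws via the scale-location mixture and check the q-th quantile.
import numpy as np

rng = np.random.default_rng(1)
q, sigma, n = 0.1, 1.0, 500_000   # illustrative values
theta = (1 - 2 * q) / (q * (1 - q))
tau2 = 2 / (q * (1 - q))
nu = rng.exponential(scale=sigma, size=n)  # E(1/sigma): rate 1/sigma, mean sigma
u = rng.standard_normal(n)
eps = theta * nu + np.sqrt(tau2 * sigma * nu) * u
print(np.quantile(eps, q))  # approximately 0
```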
To complete the model specification we assume that \(\frac{1}{\sigma_{\mathfrak{q}}}\sim\mathcal{G}(c_{0},d_{0})\), where \(c_{0}\) is the shape and \(d_{0}\) the rate parameter of the Gamma distribution, both of which we set to zero in order to obtain a flat prior. The choice of the prior distribution on \(\boldsymbol{\beta}_{\mathfrak{q}}\) and \(\boldsymbol{\gamma}_{\mathfrak{q}}\) is essential for our high dimensional QRs. We discuss different suitable choices in the next section.
### Priors for the quantile regression coefficients
For the large datasets we consider in this paper, \(M+K\gg T\) and thus suitable shrinkage priors are necessary to obtain precise inference. Kohns and Szendrei (2021) and Mitchell et al. (ming) use flexible shrinkage priors in large-scale QRs and show that these work well for tail forecasting. We build on their findings by considering a range of different priors on \(\mathbf{\beta}_{\mathfrak{q}}\) and \(\mathbf{\gamma}_{\mathfrak{q}}\). All these priors belong to the class of so-called global-local shrinkage priors (Polson and Scott, 2010) and have the following general form:
\[\mathbf{\beta}_{\mathfrak{q}}|\psi_{\mathfrak{q}1}^{\beta},\ldots, \psi_{\mathfrak{q}K}^{\beta},\lambda_{\mathfrak{q}}^{\beta} \sim\prod_{j=1}^{K}\mathcal{N}(0,\psi_{\mathfrak{q}j}^{\beta}\lambda_{ \mathfrak{q}}^{\beta}),\quad\psi_{\mathfrak{q}j}^{\beta}\sim u,\quad\lambda_{ \mathfrak{q}}^{\beta}\sim\pi,\] \[\mathbf{\gamma}_{\mathfrak{q}}|\psi_{\mathfrak{q}1}^{\gamma},\ldots, \psi_{\mathfrak{q}M}^{\gamma},\lambda_{\mathfrak{q}}^{\gamma} \sim\prod_{j=1}^{M}\mathcal{N}(0,\psi_{\mathfrak{q}j}^{\gamma}\lambda_{ \mathfrak{q}}^{\gamma}),\quad\psi_{\mathfrak{q}j}^{\gamma}\sim u,\quad\lambda_ {\mathfrak{q}}^{\gamma}\sim\pi,\]
with \(\lambda_{\mathfrak{q}}^{s}\) (\(s\in\{\beta,\gamma\}\)) denoting a quantile-specific global shrinkage parameter and \(\psi_{\mathfrak{q}j}^{s}\) denoting local scaling parameters that allow for non-zero coefficients in the presence of strong global shrinkage (i.e., with \(\lambda_{\mathfrak{q}}^{s}\) close to zero). The functions \(u\) and \(\pi\) refer to mixing densities which, if suitably chosen, translate into different shrinkage priors. In this paper, all the priors we consider can be cast into this form but differ in the way the mixing densities \(u\) and \(\pi\) are chosen. Since these priors are well known, we briefly discuss them in the main text and relegate additional technical details to the Online Appendix.
We focus on five shrinkage priors that have been shown to work well in a wide variety of forecasting applications (see, e.g., Huber and Feldkircher, 2019; Cross et al., 2020; Chan, 2021; Pruser, 2022). The first prior we consider is the Ridge prior. The Ridge prior is a special case of a global-local prior with local parameters set equal to \(1\) and a global shrinkage parameter which follows an inverse Gamma distribution. Formally, this implies setting \(\psi^{s}_{\mathfrak{q}j}=1\) for all \(\mathfrak{q},j\) and \(\lambda^{s}_{\mathfrak{q}}\sim\mathcal{G}^{-1}(e_{0},e_{1})\). The hyperparameters \(e_{0}\) and \(e_{1}\) control the tightness of the prior. We set these equal to \(e_{0}=e_{1}=0\). This prior shrinks all coefficients uniformly towards zero and provides little flexibility to allow for idiosyncratic (i.e., variable-specific) deviations from the overall shrinkage pattern.
This issue is solved by estimating the local shrinkage parameters. The Horseshoe (HS, see, Carvalho et al., 2010), our second prior, does this. This prior sets \(u\) and \(\pi\) to a half-Cauchy distribution: \(\sqrt{\psi^{s}_{\mathfrak{q}j}}\sim\mathcal{C}^{+}(0,1)\) and \(\sqrt{\lambda^{s}_{\mathfrak{q}}}\sim\mathcal{C}^{+}(0,1)\). The HS possesses excellent posterior contraction properties (see, e.g., Ghosh et al., 2016; Armagan et al., 2013; van der Pas et al., 2014). Moreover, it does not rely on any additional tuning parameters.
Another popular global-local shrinkage prior is the Normal-Gamma (NG) prior of Griffin and Brown (2010). This prior assumes that \(u\) and \(\pi\) are Gamma densities. More formally, \(\psi^{s}_{\mathfrak{q}j}\sim\mathcal{G}(\vartheta,\lambda^{s}_{\mathfrak{q}} \vartheta/2)\) and \(\lambda^{s}_{\mathfrak{q}}\sim\mathcal{G}(c_{0},d_{0})\), with \(\vartheta\) being a hyperparameter that controls the tail behavior of the prior, and \(c_{0}\) and \(d_{0}\) are hyperparameters that determine the overall degree of shrinkage. We set \(c_{0}=d_{0}=0\) and \(\vartheta=0.1\). This choice implies heavy global shrinkage on the coefficients but also implies fat tails of the marginal prior of the coefficients after integrating out the local scaling parameters. The Bayesian LASSO is obtained as a special case of the NG prior with \(\vartheta=1\).
Finally, the Dirichlet-Laplace prior (Bhattacharya et al., 2015) assumes that the local scaling parameter \(\psi^{s}_{\mathfrak{q}j}\) is a product of a Dirichlet-distributed random variate \(\phi^{s}_{\mathfrak{q}j}\sim\text{Dir}(\alpha,\ldots,\alpha)\) and a parameter \(\tilde{\psi}^{s}_{\mathfrak{q}j}\sim\mathcal{E}(1/2)\) that follows an exponential distribution. Hence, the Dirichlet-Laplace prior sets \(\psi^{s}_{\mathfrak{q}j}=(\phi^{s}_{\mathfrak{q}j})^{2}\tilde{\psi}^{s}_{ \mathfrak{q}j}\). On the global scaling parameters we use a Gamma distribution \(\sqrt{\lambda^{\beta}_{\mathfrak{q}}}\sim\mathcal{G}(K\alpha,1/2)\) and \(\sqrt{\lambda^{\gamma}_{\mathfrak{q}}}\sim\mathcal{G}(M\alpha,1/2)\). We set \(\alpha=\frac{1}{K}\) for the linear part and \(\alpha=\frac{1}{M}\) for the non-linear part.
### Capturing nonlinearities in high dimensional QRs
In extreme periods such as the GFC or the Covid-19 pandemic, nonlinearities in macroeconomic data become prevalent. We control for this by having a nonlinear part in our QR. As stated in (2), we capture possible nonlinearities in \(\mathbf{x}_{t}\) through nonlinear transformations \(z_{m}(\mathbf{x}_{t})\).
The first and simplest nonlinear specification maps \(\mathbf{x}_{t}\) into the space of polynomials. Bai and Ng (2008) capture nonlinearities in macro data through polynomials and by relying on factor-based predictive regressions. We follow this approach and define the corresponding basis function as follows:
\[\mathbf{z}_{t}=((\mathbf{x}_{t}^{2})^{\prime},(\mathbf{x}_{t}^{3})^{\prime},\ldots,(\mathbf{x}_ {t}^{N})^{\prime})^{\prime}.\]
Deciding on the order of the polynomial \(N\) is a model selection issue and suitable shrinkage priors can be adopted. In our empirical work, we focus on the cubic case. This specification will overweight large movements in \(\mathbf{x}_{t}\) and should thus be suitable for quickly capturing sharp downturns in the business cycle. In this case, \(M=2K\), so the total number of coefficients triples to \(K+M=3K\). The resulting nonlinear model is called _Polynomial-QR_.
Adding cubic terms allows us to capture nonlinearities in a relatively restricted manner. Since the precise form of nonlinearities is typically unknown, the remaining two specifications we consider are nonparametric and only require relatively mild prior assumptions on the form of nonlinear interactions. The first of these two is the B-Spline (see, e.g., De Boor, 2001, for a review). B-Splines have a proven track record in machine learning and computer science (Shin et al., 2020).
For the B-spline, we assume that each element in \(\mathbf{x}_{t}\) exerts a (possibly) nonlinear effect on \(y_{t}\) that might differ across covariates. This implies that \(g_{\mathfrak{q}}(\mathbf{x}_{t})\) equals:
\[g_{\mathfrak{q}}(\mathbf{x}_{t})\approx\sum_{j=1}^{K}\mathbf{\Phi}_{j}(\mathbf{x_{\bullet,j}})\mathbf{\gamma}_{\mathfrak{q},j}.\]

Here, we let \(\mathbf{\Phi}_{j}\) denote a \(T\times r\) matrix of B-spline basis functions that depend on the \(j^{th}\) covariate \(\mathbf{x_{\bullet,j}}\) in \(\mathbf{X}=(\mathbf{x}_{1}^{\prime},\ldots,\mathbf{x}_{T}^{\prime})^{\prime}\), and \(r\) is the number of knots. In this case, the number of nonlinear coefficients is \(M=rK\). In our empirical work we place the knots at the following quantiles of \(\mathbf{x_{\bullet,j}}\): {0, 0.05, 0.1, 0.25, 0.50, 0.75, 0.90, 0.95, 1}, implying that \(r=9\) and thus \(M=9K\). We will henceforth call this model _Spline-QR_.
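The sketch below builds such a per-covariate spline design with knots at the quantiles listed above. As a simplification it uses a truncated-power cubic basis rather than a proper B-spline basis; the two span closely related function spaces, but the B-spline basis used in the paper is numerically better conditioned.

```
import numpy as np

KNOT_QUANTILES = [0, 0.05, 0.1, 0.25, 0.50, 0.75, 0.90, 0.95, 1]   # r = 9 knots

def spline_basis(x):
    # Truncated-power cubic basis for a single covariate: one column per knot.
    knots = np.quantile(x, KNOT_QUANTILES)
    return np.column_stack([np.maximum(x - k, 0.0) ** 3 for k in knots])

def spline_design(X):
    # One r-column block per covariate, so M = r * K columns in total.
    return np.hstack([spline_basis(X[:, j]) for j in range(X.shape[1])])
```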
The last specification we consider is the Gaussian process (GP) regression. GP regression
is a nonparametric estimation method that places a GP prior on the function \(g_{\mathfrak{q}}(\mathbf{x}_{t})\):
\[g_{\mathfrak{q}}(\mathbf{x}_{t})\sim\mathcal{GP}(\mu_{\mathfrak{q}}(\mathbf{x}_{t}), \mathcal{K}(\mathbf{x}_{t},\mathbf{x}_{\mathfrak{t}})).\]
The mean function \(\mu_{\mathfrak{q}}(\mathbf{x}_{t})\) is, without loss of generality, set equal to zero and \(\mathcal{K}(\mathbf{x}_{t},\mathbf{x}_{\mathfrak{t}})\) is a kernel function that encodes the relationship between \(\mathbf{x}_{t}\) and \(\mathbf{x}_{\mathfrak{t}}\) for \(t,\mathfrak{t}=1,\ldots,T\). It is worth noting that our additive specification implies that if the mean function is set equal to zero, the model is centered on a standard QR.
Since \(\mathbf{x}_{t}\) is observed in discrete time steps, the GP prior implies a Gaussian prior on \(\mathbf{g}_{\mathfrak{q}}=(g_{\mathfrak{q}}(\mathbf{x}_{1}),\ldots,g_{\mathfrak{q}}( \mathbf{x}_{T}))^{\prime}\):
\[\mathbf{g}_{\mathfrak{q}}\sim\mathcal{N}(\mathbf{0}_{T},\mathbf{K}(\mathbf{w})),\]
where \(\mathbf{K}(\mathbf{w})\) is a \(T\times T\)-dimensional matrix with \((t,\mathfrak{t})^{th}\) element \(\mathcal{K}(\mathbf{x}_{t},\mathbf{x}_{\mathfrak{t}})\). \(\mathbf{w}=(w_{1},w_{2})^{\prime}\) is a set of hyperparameters that determine the properties of the kernel (and thus the estimated function).
The GP regression is fully specified if we determine the kernel function \(\mathcal{K}\). In this paper, we use the Gaussian (or squared exponential) kernel:
\[\mathcal{K}(\mathbf{x}_{t},\mathbf{x}_{\mathfrak{t}})=w_{1}\times\exp\left(-\frac{w_{2}}{2}||\mathbf{x}_{t}-\mathbf{x}_{\mathfrak{t}}||^{2}\right).\]
The hyperparameters \(\mathbf{w}\) are set according to the median heuristic proposed in Arin et al. (2017).
What we discuss above is the function-space view of the GP regression. An alternative way of expressing the GP is the so-called weight-space view. The weight-space view is obtained by integrating out \(\mathbf{g}_{\mathfrak{q}}\), yielding the following regression representation:
\[\mathbf{y}=\mathbf{X}\mathbf{\beta}_{\mathfrak{q}}+\mathbf{Z}\mathbf{\gamma}_{\mathfrak{q}}+\mathbf{ \varepsilon},\]
with \(\mathbf{y}\) denoting the stacked dependent variables, \(\mathbf{Z}\) is the lower Cholesky factor of \(\mathbf{K}\) and \(\mathbf{\gamma}_{\mathfrak{q}}\sim\mathcal{N}(0,\mathbf{I}_{T})\). Notice that \(\mathbf{g}_{\mathfrak{q}}=\mathbf{Z}\mathbf{\gamma}_{\mathfrak{q}}\). Hence, the Cholesky factor of the kernel matrix provides the basis functions, and the parameters can be readily estimated. In this case, the number of nonlinear coefficients is \(M=T\). Since we use a shrinkage prior on \(\mathbf{\gamma}_{\mathfrak{q}}\), the corresponding implied kernel is given by \(\mathbf{Z}\mathbf{B}_{\mathfrak{q}}^{\gamma}\mathbf{Z}^{\prime}\). The \(M\times M\) matrix \(\mathbf{B}_{\mathfrak{q}}^{\gamma}\) is a prior covariance matrix with \(\mathbf{B}_{\mathfrak{q}}^{\gamma}=\lambda_{\mathfrak{q}}^{\gamma}\times\text{ diag}(\psi_{\mathfrak{q}1}^{\gamma},\ldots,\psi_{\mathfrak{q}M}^{\gamma})\). Approximating \(g_{\mathfrak{q}}\) using GPs leads to the _GP-QR_ specification.
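A minimal sketch of this weight-space construction follows; the precise median-heuristic formula for \(w_{2}\) is our assumption, since the text only states that \(\mathbf{w}\) is set via a median heuristic.

```
import numpy as np

def gaussian_kernel(X, w1, w2):
    # K(x_t, x_s) = w1 * exp(-w2 / 2 * ||x_t - x_s||^2) for all pairs of rows of X.
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return w1 * np.exp(-0.5 * w2 * D2)

def gp_basis(X, w1=1.0, w2=None, jitter=1e-8):
    # Weight-space view: Z = chol(K), so g = Z @ gamma with gamma ~ N(0, I_T).
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if w2 is None:
        # One common median heuristic (an assumption here): inverse median
        # squared distance between distinct observations.
        w2 = 1.0 / np.median(D2[np.triu_indices_from(D2, k=1)])
    K = gaussian_kernel(X, w1, w2) + jitter * np.eye(X.shape[0])
    return np.linalg.cholesky(K)
```

Treating the columns of `gp_basis(X)` as regressors then reduces the GP-QR to the same linear-in-parameters form as the other two specifications, with \(M=T\).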
This completes our choice of nonlinear techniques used in the big data QR. Alternative
choices (such as allowing for time-varying parameters, neural networks or Bayesian additive regression trees) can be straightforwardly introduced in this general framework.
### A brief introduction to variational Bayes
The high dimensionality of the state space calls for alternative techniques to carry out posterior inference. We opt for using variational approximations to the joint posterior density. In this section, we provide a discussion of how VB works in general. For an excellent in-depth introduction, see Blei et al. (2017). In machine learning, variational techniques have been commonly used to estimate complex models such as deep neural networks (see, e.g., Polson and Sokolov, 2017). In econometrics, recent papers use VB in huge dimensional multivariate time series models such as VARs (Gefang et al., 2022; Chan and Yu, 2020) or state space models to speed up estimation (Koop and Korobilis, 2023). In a recent paper, Korobilis and Schroder (2022) propose a QR factor model and estimate it using VB techniques.
To simplify the exposition, we fix the prior variances. The appendix provides information on how we estimate the prior variances (and associated hyperparameters) using VB. Let \(\mathbf{\xi}_{\mathsf{q}}=(\mathbf{\beta}_{\mathsf{q}},\mathbf{\gamma}_{\mathsf{q}},\sigma _{\mathsf{q}},\mathbf{\nu}_{\mathsf{q}})\) denote a generic vector which stores all unknowns of the model, with \(\mathbf{\nu}_{\mathsf{q}}=(\nu_{\mathsf{q}1},\ldots,\nu_{\mathsf{q}T})\) denoting the latent components.
Our aim is to approximate the joint posterior distribution \(p(\mathbf{\xi}_{\mathsf{q}}|\mathbf{y})\) using an analytically tractable approximating distribution \(q(\mathbf{\xi}_{\mathsf{q}})\). This variational approximation is found by minimizing the Kullback-Leibler (KL) distance between \(p\) and \(q\). One can show that minimization of the KL distance is equivalent to maximizing the evidence lower bound (ELBO) defined as:
\[\text{ELBO}=\mathbb{E}_{q(\mathbf{\xi}_{\mathsf{q}})}\left(\log p(\mathbf{\xi}_{ \mathsf{q}},\mathbf{y})\right)-\mathbb{E}_{q(\mathbf{\xi}_{\mathsf{q}})}\left(\log q( \mathbf{\xi}_{\mathsf{q}})\right), \tag{3}\]
with \(\mathbb{E}_{q(\mathbf{\xi}_{\mathsf{q}})}\) denoting the expectation with respect to \(q(\mathbf{\xi}_{\mathsf{q}})\). This implies that finding the approximating density \(q\) replaces the integration problem (which is typically solved through MCMC sampling) with an optimization problem (which is fast and thus scales well into high dimensions).
A common and analytically tractable choice of approximating densities assumes that \(q(\mathbf{\xi}_{\mathsf{q}})\) is factorized as follows:
\[q(\mathbf{\xi}_{\mathsf{q}})=\prod_{s=1}^{S}q_{s}(\mathbf{\xi}_{\mathsf{q}s}),\]
where \(\mathbf{\xi}_{\mathsf{q}s}\) denotes a partition of \(\mathbf{\xi}_{\mathsf{q}}\). A particular example (which we use in this paper) would specify \(\mathbf{\xi}_{\mathsf{q}1}=(\mathbf{\beta}^{\prime}_{\mathsf{q}},\mathbf{\gamma}^{\prime} _{\mathsf{q}})^{\prime}\), \(\xi_{\mathsf{q}2}=\sigma_{\mathsf{q}}\) and \(\mathbf{\xi}_{\mathsf{q}3}=\mathbf{\nu}_{\mathsf{q}}\).
This class is called the mean field variational approximation and assumes that the different blocks \(\mathbf{\xi}_{\mathfrak{qs}}\) are uncorrelated.4 Notice that all our priors on \(\mathbf{\xi}_{\mathfrak{q}}\) can be written as:
Footnote 4: Frazier et al. (2022) state that mean field VB approximations might perform poorly in models with a large number of latent variables. However, they also note that the resulting model forecasts could still perform well in practice.
\[p(\mathbf{\xi}_{\mathfrak{q}})=\prod_{s=1}^{S}p(\mathbf{\xi}_{\mathfrak{q}s}),\]
and using the fact that:
\[\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q}})}(\log p(\mathbf{\xi}_{\mathfrak{q}},\mathbf{y} ))=\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q}})}(\log p(\mathbf{y}|\mathbf{\xi}_{\mathfrak{ q}}))+\sum_{s=1}^{S}\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q}})}(\log p(\mathbf{\xi}_{ \mathfrak{q}s})),\]
the ELBO can be stated as:
\[\text{ELBO}=\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q}})}(\log p(\mathbf{y}|\mathbf{\xi}_{ \mathfrak{q}}))+\sum_{s=1}^{S}\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q}})}(\log p (\mathbf{\xi}_{\mathfrak{q}s}))-\sum_{s=1}^{S}\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{ q}})}(\log q(\mathbf{\xi}_{\mathfrak{q}s})).\]
Wand et al. (2011) prove that under the variational family the optimal approximating densities are closely related to the full conditional posterior distributions:
\[q_{s}^{*}(\mathbf{\xi}_{\mathfrak{q}s})\propto\exp\left[\mathbb{E}_{q(\mathbf{\xi}_{\mathfrak{q},-s})}\left(\log p(\mathbf{\xi}_{\mathfrak{q}s}|\mathbf{y},\mathbf{\xi}_{\mathfrak{q},-s})\right)\right],\]
where \(\mathbf{\xi}_{\mathfrak{q},-s}\) is the vector \(\mathbf{\xi}_{\mathfrak{q}}\) with the \(s^{th}\) component excluded. Hence, if \(p(\mathbf{\xi}_{\mathfrak{q}s}|\mathbf{y},\mathbf{\xi}_{\mathfrak{q},-s})\) is known (which is the case for the QR based on the auxiliary representation discussed in the previous subsection), the elements in \(\mathbf{\xi}_{\mathfrak{q}s}\) can be updated iteratively (by conditioning on the expected values of \(\mathbf{\xi}_{\mathfrak{q},-s}\)) until the squared change of the ELBO, or of all elements of \(\mathbf{\xi}_{\mathfrak{q}s}\), between two subsequent iterations is smaller than some small \(\epsilon\).
### Approximate Bayesian inference in general QRs
In this section we briefly state the three approximating densities (\(q_{s}^{*}(\mathbf{\xi})\)) used to estimate the parameters and latent quantities in the QR regression. We provide derivations for the three approximating densities of the three parameter groups: \(\tilde{\mathbf{\beta}}_{\mathfrak{q}}=(\mathbf{\beta}_{\mathfrak{q}}^{\prime},\mathbf{ \gamma}_{\mathfrak{q}}^{\prime})^{\prime}\), \(\sigma_{\mathfrak{q}}\) and \(\mathbf{\nu}_{\mathfrak{q}}\) in the Online Appendix.
We start by discussing the approximating densities for the regression and basis coefficients.
A Gaussian distribution approximates the posterior of \(\tilde{\mathbf{\beta}}_{\mathsf{q}}\):

\[p(\tilde{\mathbf{\beta}}_{\mathsf{q}}|\bullet)\approx\mathcal{N}\left(\mathbb{E}(\tilde{\mathbf{\beta}}_{\mathsf{q}}),\hat{\mathbf{\Sigma}}_{\tilde{\mathbf{\beta}}_{\mathsf{q}}}\right),\]
with variance and mean given by, respectively:
\[\hat{\mathbf{\Sigma}}_{\tilde{\mathbf{\beta}}_{\mathsf{q}}}=\left[\sum_{t=1}^{T}\frac{\mathbf{f}_{t}\mathbf{f}_{t}^{\prime}}{\tau_{\mathsf{q}}^{2}}\,\mathbb{E}\left(\frac{1}{\nu_{\mathsf{q}t}}\right)\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)+\mathbf{B}_{0\mathsf{q}}^{-1}\right]^{-1},\qquad\mathbb{E}(\tilde{\mathbf{\beta}}_{\mathsf{q}})=\hat{\mathbf{\Sigma}}_{\tilde{\mathbf{\beta}}_{\mathsf{q}}}\left[\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)\sum_{t=1}^{T}\mathbb{E}\left(\frac{1}{\nu_{\mathsf{q}t}}\right)\frac{\mathbf{f}_{t}\left(y_{t}-\theta_{\mathsf{q}}\left[\mathbb{E}\left(\frac{1}{\nu_{\mathsf{q}t}}\right)\right]^{-1}\right)}{\tau_{\mathsf{q}}^{2}}\right].\]
\(\mathbf{f}_{t}=(\mathbf{x}_{t}^{\prime},\mathbf{z}_{t}^{\prime})^{\prime}\) and \(\mathbf{B}_{0\mathsf{q}}^{-1}=\mathrm{diag}(\mathbf{B}_{\mathsf{q}}^{\beta},\mathbf{B}_{\mathsf{q}}^{\gamma})^{-1}\) is a prior precision matrix with \(\mathbf{B}_{\mathsf{q}}^{\beta}=\lambda_{\mathsf{q}}^{\beta}\times\mathrm{diag}(\psi_{\mathsf{q}1}^{\beta},\ldots,\psi_{\mathsf{q}K}^{\beta})\) and \(\mathbf{B}_{\mathsf{q}}^{\gamma}=\lambda_{\mathsf{q}}^{\gamma}\times\mathrm{diag}(\psi_{\mathsf{q}1}^{\gamma},\ldots,\psi_{\mathsf{q}M}^{\gamma})\). The approximating densities used to estimate the prior hyperparameters are provided in Section 1 of the Online Appendix.
The latent variable \(\nu_{\mathsf{q}t}\) follows a generalized inverse Gaussian (GIG) distribution: \(\mathrm{GIG}(r,A,B)\)5 with

Footnote 5: We use the following parametrization of the GIG distribution: \(\log\left(\mathrm{GIG}(x)\right)\propto(r-1)\log(x)-\frac{1}{2}\left(Ax+\frac{B}{x}\right)\).
\[p(\nu_{\mathsf{q}t}|\bullet)\approx\mathrm{GIG}\left(\frac{1}{2},\ \underbrace{2\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)+\frac{\theta_{\mathsf{q}}^{2}}{\tau_{\mathsf{q}}^{2}}\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)}_{A_{\mathsf{q}}},\ \ \underbrace{\frac{\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)}{\tau_{\mathsf{q}}^{2}}\left[\left(y_{t}-\mathbf{f}_{t}^{\prime}\mathbb{E}(\tilde{\mathbf{\beta}}_{\mathsf{q}})\right)^{2}+\mathbf{f}_{t}^{\prime}\hat{\mathbf{\Sigma}}_{\tilde{\mathbf{\beta}}_{\mathsf{q}}}\mathbf{f}_{t}\right]}_{B_{\mathsf{q}t}}\right).\]
The moments of \(\nu_{\mathsf{q}t}\) are given by

\[\mathbb{E}\left(\nu_{\mathsf{q}t}^{j}\right)=\left(\frac{\sqrt{B_{\mathsf{q}t}}}{\sqrt{A_{\mathsf{q}}}}\right)^{j}\frac{K_{1/2+j}\left(\sqrt{A_{\mathsf{q}}B_{\mathsf{q}t}}\right)}{K_{1/2}\left(\sqrt{A_{\mathsf{q}}B_{\mathsf{q}t}}\right)},\]
where \(K_{x}\) denotes the modified Bessel function of the second kind.
Finally, we approximate
\[p\left(\frac{1}{\sigma_{\mathsf{q}}}|\bullet\right)\approx\mathcal{G}(c_{ \mathsf{q}\mathsf{1}},d_{\mathsf{q}\mathsf{1}})\]
with
\[c_{\mathsf{q}1}=c_{0}+1.5T,\qquad d_{\mathsf{q}1}=d_{0}+\sum_{t=1}^{T}\mathbb{E}(\nu_{\mathsf{q}t})+\frac{1}{2\tau_{\mathsf{q}}^{2}}\sum_{t=1}^{T}\left[\mathbb{E}\left(\frac{1}{\nu_{\mathsf{q}t}}\right)\left(y_{t}-\mathbf{f}_{t}^{\prime}\mathbb{E}(\tilde{\mathbf{\beta}}_{\mathsf{q}})\right)^{2}+2\theta_{\mathsf{q}}\left(\mathbf{f}_{t}^{\prime}\mathbb{E}(\tilde{\mathbf{\beta}}_{\mathsf{q}})-y_{t}\right)+\mathbb{E}(\nu_{\mathsf{q}t})\theta_{\mathsf{q}}^{2}+\mathbb{E}\left(\frac{1}{\nu_{\mathsf{q}t}}\right)\mathbf{f}_{t}^{\prime}\hat{\mathbf{\Sigma}}_{\tilde{\mathbf{\beta}}_{\mathsf{q}}}\mathbf{f}_{t}\right],\]
and \(\mathbb{E}\left(\frac{1}{\sigma_{\mathsf{q}}}\right)=\frac{c_{\mathsf{q1}}}{d _{\mathsf{q1}}}\).
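Stacking the three updates gives a compact coordinate-ascent loop. The sketch below implements them for a single quantile with a fixed prior precision matrix (the hyperparameter updates are in the Online Appendix); the asymmetric-Laplace constants \(\theta_{\mathsf{q}}=(1-2\mathsf{q})/(\mathsf{q}(1-\mathsf{q}))\) and \(\tau_{\mathsf{q}}^{2}=2/(\mathsf{q}(1-\mathsf{q}))\) are the standard choices in this auxiliary representation and are assumed here.

```
import numpy as np
from scipy.special import kv

def vb_qr(y, F, q=0.1, B0_inv=None, c0=0.01, d0=0.01, max_iter=500, tol=1e-8):
    # Mean-field VB for one quantile q; F stacks (x_t', z_t')' row-wise (T x P).
    T, P = F.shape
    theta, tau2 = (1 - 2 * q) / (q * (1 - q)), 2.0 / (q * (1 - q))
    if B0_inv is None:
        B0_inv = np.eye(P)                     # fixed Ridge-type prior precision
    E_is, E_inu, E_nu = 1.0, np.ones(T), np.ones(T)  # E(1/sigma), E(1/nu_t), E(nu_t)
    mu = np.zeros(P)
    for _ in range(max_iter):
        mu_old = mu
        # q(beta~): Gaussian with the moments stated above
        Sig = np.linalg.inv((F.T * (E_inu * E_is / tau2)) @ F + B0_inv)
        mu = Sig @ (E_is / tau2 * (F.T @ (E_inu * y - theta)))
        # q(nu_t): GIG(1/2, A, B_t); moments via the Bessel-function formula
        A = (2.0 + theta ** 2 / tau2) * E_is
        quad = (y - F @ mu) ** 2 + np.einsum("tp,pq,tq->t", F, Sig, F)
        B = np.maximum(E_is / tau2 * quad, 1e-12)   # small floor for stability
        s = np.sqrt(A * B)
        E_nu = np.sqrt(B / A) * kv(1.5, s) / kv(0.5, s)
        E_inu = np.sqrt(A / B) * kv(-0.5, s) / kv(0.5, s)  # equals sqrt(A / B)
        # q(1/sigma): Gamma(c1, d1)
        c1 = c0 + 1.5 * T
        d1 = d0 + E_nu.sum() + (E_inu * quad + 2 * theta * (F @ mu - y)
                                + E_nu * theta ** 2).sum() / (2 * tau2)
        E_is = c1 / d1
        if np.sum((mu - mu_old) ** 2) < tol:
            break
    return mu, Sig
```

Each pass over the three blocks is a closed-form operation, which is why the algorithm scales so well compared to MCMC.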
### Comparing computation times between VB and MCMC
These steps, in combination with the updating steps for the priors detailed in the Online Appendix, form the basis of our VB algorithm. As stated in the introduction, the key advantage of using VB instead of more precise MCMC-based techniques is computational efficiency. Before we turn to our empirical work, we illustrate this point using synthetic data.
To illustrate the computational merits of employing VB-based approximations, Fig. 1 shows the estimation times for different values of \(M+K\) using our VB-based QR (for a specific quantile) and the QR estimated through the Gibbs sampler. The MCMC algorithm is run for \(10,000\) iterations. The figure shows that, for VB, the computational burden increases only slightly in the number of covariates, whereas for MCMC estimation the computational requirements increase sharply. Especially in our empirical work, where \(K+M\) is often above \(1,000\), VB proves to be a fast alternative to MCMC-based quantile regressions. It is also worth stressing that if the number of quantiles to estimate is large (and no parallel computing facilities are available), MCMC-based estimation becomes excessively slow.
Figure 1: Comparison of computation times against the number of covariates \(M+K\)
## 3 Forecasting output growth using huge dimensional QRs
In this section, we present our forecasting results. The next sub-section provides information on the dataset and the forecasting setup. We then proceed by discussing the results from QRs that exclude the nonlinear part in Sub-section 3.2. The question whether nonlinearities are important is investigated in Sub-section 3.3, and Sub-section 3.4 deals with how forecast accuracy changes over time. Sub-section 3.5 discusses the determinants of the different tail forecasts and differences in the shrinkage properties across priors.
### Data overview and forecasting setup
We use the quarterly version of the McCracken and Ng (2016) dataset. The data set covers information about the real economy (output, labor, consumption, orders and inventories), money, prices and financial markets (interest rates, exchange rates, stock market indexes). All series are seasonally adjusted and transformed to be approximately stationary. The set of variables included in \(\mathbf{x}_{t}\) and their transformation codes are described in Table 1 of the Online Appendix. All models we consider also include the first lag of GDP growth.6 Forecasts are carried out using direct forecasting by appropriately lagging the elements in \(\mathbf{x}_{t}\).
Footnote 6: We find that including more lags of GDP growth only has small effects on the empirical results.
Our sample runs from 1971Q1 to 2021Q3 and we use the period 1991Q2 to 2021Q3 as our hold-out period. The forecasting design is recursive. This implies that we estimate all our models on an initial training sample with data until 1991Q1 and produce one-quarter- and four-quarters-ahead predictive distributions for 1991Q2 and 1992Q1, respectively. After obtaining these, we add the next observation (1991Q2) and recompute the models to obtain the corresponding predictive densities for 1991Q3 and 1992Q2. This procedure is repeated until we reach the end of the hold-out period.
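Schematically, the recursive direct-forecasting design looks as follows; `fit` and `predict` stand in for any of the estimation routines above, and all names are ours:

```
import numpy as np

def recursive_direct_forecasts(y, X, fit, predict, h=1, n_init=80):
    # Direct h-step forecasting with an expanding (recursive) window: at each
    # origin t we regress y_s on x_{s-h} for s <= t and then predict y_{t+h}.
    forecasts = []
    for t in range(n_init, len(y) - h):
        model = fit(y[h:t + 1], X[:t + 1 - h])   # align y_s with x_{s-h}
        forecasts.append(predict(model, X[t]))    # forecast for period t + h
    return np.array(forecasts)
```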
As a measure of overall forecasting accuracy we focus on the continuous ranked probability score (CRPS). The CRPS is a measure of density forecasting accuracy and generalizes the mean absolute error (MAE) to take into account how well a given model predicts higher order moments of a target variable.
The CRPS measures overall density fit. Considering overall CRPSs possibly masks relevant idiosyncrasies of model performance across quantiles. A decision maker interested in downside risks to GDP growth might place more value on a model that does well at the critical 5th or 10th percentile than on one that does well in the remaining regions of the predictive distribution. To shed light
on asymmetries across different predictive quantiles, we focus on the quantile score (QS):
\[\text{QS}_{\mathfrak{q}t}=(y_{t}-Q_{\mathfrak{q}t})(\mathfrak{q}-\mathbf{1}_{\{y _{t}\leq Q_{\mathfrak{q}t}\}}),\]
where \(Q_{\mathfrak{q}t}\) is the forecast of the \(\mathfrak{q}^{th}\) quantile of \(y_{t}\) and \(\mathbf{1}_{\{y_{t}\leq Q_{\mathfrak{q}t}\}}\) denotes the indicator function that equals one if \(y_{t}\) is below the forecast for the \(\mathfrak{q}^{th}\) quantile.
The QS can also be used to construct quantile-weighted (qw) CRPS scores (Gneiting and Ranjan, 2011). These qw-CRPSs can be specified to put more weight on certain regions of the predictive distribution. In general, the qw-CRPS is computed as:
\[\text{qw-CRPS}=\frac{2}{J-1}\sum_{j=1}^{J-1}\omega(\zeta_{j})\text{QS}_{ \mathfrak{s}_{j}t},\]
with \(\zeta_{j}=j/J\), \(J-1=19\) denoting the number of quantiles we use to set up the qw-CRPS and \(\mathfrak{s}_{j}\) selects the \(j^{th}\) element from the set of quantiles we consider. This set ranges from \(0.05\) to \(0.95\) with a step size of \(0.05\) and thus, \(\mathfrak{s}_{1}=0.05,\mathfrak{s}_{2}=0.10,\ldots,\mathfrak{s}_{19}=0.95\).
We use two weighting functions \(\omega(\zeta_{j})\) that focus on different regions of the predictive density. These schemes are motivated in Gneiting and Ranjan (2011). The first (CRPS-left) puts more weight on the left tail (i.e. downside risks) and is specified as \(\omega(\zeta_{j})=(1-\zeta_{j})^{2}\), while the second (CRPS-tails) puts more weight on both tails as opposed to the center of the distribution: \(\omega(\zeta_{j})=(2\zeta_{j}-1)^{2}\). Notice that if we use equal weights we obtain a discrete approximation to the CRPS.
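The scores above translate directly into code. The sketch assumes the 19 quantile forecasts for a given observation are stored in an array aligned with \(\zeta_{1},\ldots,\zeta_{19}\):

```
import numpy as np

ZETAS = np.arange(1, 20) / 20.0          # 0.05, 0.10, ..., 0.95 (J = 20)

def quantile_score(y, q_fc, q):
    # QS_q = (y - Q_q) * (q - 1{y <= Q_q})
    return (y - q_fc) * (q - (y <= q_fc))

def qw_crps(y, q_fcs, weighting="uniform"):
    # q_fcs: array of the 19 quantile forecasts for a single realization y,
    # ordered as in ZETAS.
    if weighting == "left":                  # emphasizes downside risk
        w = (1.0 - ZETAS) ** 2
    elif weighting == "tails":               # emphasizes both tails
        w = (2.0 * ZETAS - 1.0) ** 2
    else:                                    # discrete approximation to the CRPS
        w = np.ones_like(ZETAS)
    qs = quantile_score(y, q_fcs, ZETAS)
    return 2.0 / len(ZETAS) * np.sum(w * qs)
```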
### Results based on linear QRs
We start by discussing the QRs that set \(g(\mathbf{x}_{t})=0\) for all \(t\). Here, our goal is to show that including more information pays off relative to the model proposed in Adrian et al. (2019). Hence, we benchmark the QR models against the model which only includes lagged GDP growth and the NFCI. This model is henceforth called the ABG model and is estimated in the same way as in the original paper.
Table 1 shows average (over time) qw-CRPSs relative to the ABG model. Numbers smaller than one suggest that a given model outperforms the ABG benchmark whereas numbers exceeding unity indicate that the model produces less precise density forecasts.
The table reveals a great deal of heterogeneity with respect to different priors. Popular GL priors such as the HS, the NG or the DL lead to forecasts that are often slightly worse than the
ones obtained from the benchmark. However, priors such as the Ridge or the LASSO (which is particularly known for over-shrinking significant signals; see, e.g., Griffin and Brown, 2010) yield forecasts that are better than the benchmark forecasts for both forecast horizons and across the different variants of the CRPS. Our findings corroborate recent results in Carriero et al. (2022) who show that large QRs with shrinkage improve upon the ABG benchmark.
This is especially pronounced in the case of the Ridge prior. In this case, the accuracy gains vis-a-vis the ABG benchmark reach 17 percent and, in most cases, accuracy differences are statistically significant according to the Diebold and Mariano (1995) test.
Turning to the different forecast horizons reveals that specifications that do well in terms of short-term forecasting also produce precise longer-term predictions. For the LASSO-based model, four-quarters-ahead accuracy gains are slightly more pronounced whereas for the Ridge we do not find discernible differences across both forecast horizons.
Next, we drill deeper into the quantile-specific forecasting performance by considering QSs for \(\mathfrak{q}\in\{0.05,0.1,0.25,0.5,0.75,0.95,0.99\}\). These, for one-step-ahead forecasts, are shown in Fig. 2, and Fig. 3 provides the four-steps-ahead results. Before starting our discussion it is worth stressing that many of these differences are statistically significant according to the DM test. The corresponding results are provided in the Online Appendix (see Figs. 15 and 16).
Similar to the findings based on the CRPSs, there is a great deal of heterogeneity across priors. Both the LASSO and the Ridge prior improve upon the ABG benchmark for all quantiles by relatively large margins. These gains appear to be more pronounced in the tails, reaching over 20 percent in terms of the QSs. When focusing on the center of the distribution (i.e., the median forecast), the gains are much smaller. In general, the other priors perform considerably
**Table 1: CRPS for linear models**

| Model | One-quarter-ahead CRPS | CRPS-tails | CRPS-left | Four-quarters-ahead CRPS | CRPS-tails | CRPS-left |
|---|---|---|---|---|---|---|
| HS | 1.06 | 1.06 | 1.05 | 0.99 | 0.95 | 1.02 |
| RIDGE | 0.88 | 0.84 | 0.83 | 0.87 | 0.86 | 0.87 |
| NG | 1.01 | 0.98 | 0.98 | 1.01 | 0.96 | 1.03 |
| LASSO | 0.91 | 0.89 | 0.91 | 0.87 | 0.85 | 0.88 |
| DL | 1.10 | 1.09 | 1.08 | 1.09 | 1.06 | 1.14 |

_Notes_: We highlight in light gray (dark gray) rejection of equal forecasting accuracy against the benchmark model at significance level 10% (5%) using the test in Diebold and Mariano (1995) with adjustments proposed by Harvey et al. (1997). Results are shown relative to the ABG model and are based on the full sample.
worse. The only exception turns out to be the NG prior, which displays an excellent performance in the left tail while still being outperformed by the LASSO and the Ridge prior.

Considering four-quarters-ahead tail forecasts yields a similar but less pronounced picture.
Figure 3: Four-quarters-ahead quantile scores for different values of \(\mathfrak{q}\), averaged over the hold-out period.
Figure 2: One-quarter-ahead quantile scores for different values of \(\mathfrak{q}\), averaged over the hold-out period.
For higher-order forecasts, priors that did well at the one-quarter-ahead horizon (LASSO and Ridge) also yield precise tail forecasts. One remarkable difference from short-term forecasts is that higher order median forecasts appear to be much more precise than the ones obtained from the ABG benchmark specification.
This brief discussion gives rise to a simple recommendation for practitioners. If interest is on producing precise tail forecasts (irrespective of the forecast horizon) it pays off to use large QRs coupled with either a LASSO or Ridge-type prior. Since the Ridge prior is much simpler (i.e., it only features a single hyperparameter) and the empirical performance is very similar to the LASSO, our focus from now on will be on comparing the Ridge-based QR with a range of non-linear specifications.
### Allowing for nonlinearities in large scale QRs
In the previous sub-section we have shown that using big QRs leads to tail forecasts that are superior to the ones of the benchmark ABG specification. Conditional on the quantile, these models are linear in the parameters. However, recent literature (see, e.g., Clark et al., 2022b) suggests that nonlinearities become more important in the tails. Hence, we now address this question within our approximate framework.
Table 2 shows relative CRPSs for the different nonlinear models. As opposed to Table 1, all results are now benchmarked against the QR with the Ridge prior. This allows us to directly measure the performance gains from introducing nonlinearities relative to setting \(g_{\mathsf{q}}(\mathbf{x}_{t})=0\). Notice that the absence of gray shaded cells in the table indicates that the DM test does not point towards significant differences in forecast accuracy between the linear and the different nonlinear QRs.
Despite this, a few interesting insights emerge from the table. First, many numbers in the table are close to unity and the differences relative to the best performing linear QR are not statistically significant.7 This indicates that once we include many predictors, additionally controlling for nonlinearities of different forms only yields small positive (and sometimes negative) gains in terms of tail forecasting accuracy. Second, this first finding strongly depends on the approximation technique chosen. Among all three specifications, using GPs is superior to using either polynomials or B-Splines to approximate the unknown function \(g_{\mathsf{q}}\). Third, and focusing on
GP-QR specifications, the specific prior chosen matters appreciably. Whereas the results for the conditionally linear models clearly suggest that the LASSO and Ridge priors produce the most precise density forecasts, the results for the nonlinear models tell a slightly different story. We observe that the Ridge does well again but, for one-quarter-ahead tail forecasts, is outperformed by the HS. The LASSO, by contrast, is the weakest specification. Since the LASSO is known to overshrink significant signals (see, e.g., Griffin and Brown, 2010), it could be that it misses out on important information arising from the GP-based basis functions. Fourth, and finally, if we consider four-quarters-ahead predictions the QR coupled with a GP and a Ridge prior becomes the single best performing model again.
To gain a better understanding of which quantiles of the predictive distribution drive the CRPSs, Figs. 4 and 5 are analogous to Figs. 2 and 3 and show the QSs for different quantiles. These are normalized to the linear QR with a Ridge prior, so that numbers smaller than one indicate that nonlinearities improve predictive accuracy for a given quantile and numbers exceeding one imply that nonlinearities decrease forecasting accuracy.
In general, both figures tell a consistent story: nonlinearities help in the right tail across
**Table 2: CRPSs for nonlinear models**

| Model | One-quarter-ahead CRPS | CRPS-tails | CRPS-left | Four-quarters-ahead CRPS | CRPS-tails | CRPS-left |
|---|---|---|---|---|---|---|
| **Polynomials** | | | | | | |
| HS | 1.02 | 1.00 | 1.08 | 0.96 | 0.97 | 1.00 |
| RIDGE | 0.98 | 0.94 | 1.01 | 0.96 | 0.97 | 1.00 |
| NG | 1.03 | 0.99 | 1.07 | 0.95 | 0.96 | 1.00 |
| LASSO | 1.05 | 1.04 | 1.08 | 1.02 | 1.00 | 0.99 |
| DL | 1.07 | 1.04 | 1.15 | 1.22 | 1.20 | 1.29 |
| **B-Splines** | | | | | | |
| HS | 1.13 | 1.16 | 1.18 | 1.10 | 1.14 | 1.05 |
| RIDGE | 1.08 | 1.08 | 1.08 | 1.09 | 1.13 | 1.04 |
| NG | 1.15 | 1.17 | 1.20 | 1.13 | 1.17 | 1.07 |
| LASSO | 1.07 | 1.06 | 1.09 | 1.02 | 1.01 | 0.99 |
| DL | 0.98 | 1.00 | 1.01 | 1.04 | 1.08 | 1.02 |
| **Gaussian Processes** | | | | | | |
| HS | 0.96 | 0.94 | 0.97 | 1.05 | 1.02 | 1.10 |
| RIDGE | 0.97 | 0.95 | 0.98 | 0.96 | 0.95 | 0.97 |
| NG | 0.98 | 0.95 | 0.98 | 1.06 | 1.02 | 1.08 |
| LASSO | 1.04 | 1.04 | 1.07 | 1.01 | 0.99 | 1.00 |
| DL | 1.02 | 0.97 | 1.00 | 1.22 | 1.20 | 1.29 |

_Notes_: Results are shown relative to the linear QR with a Ridge prior and are based on the full sample.
both forecast horizons, for all three nonlinear specifications, and for most priors considered. The only exception to this pattern concerns four-quarters-ahead right-tail forecasts of GDP growth when B-Splines are used. When there are gains, they are often sizable. For instance, in the case of the GP-QR model we observe accuracy improvements of up to 25 percent relative to the linear QR model.
When we focus on the left tail, accuracy premia often turn negative. In some cases (such as for GP models with Ridge, NG and HS priors) there are accuracy gains for predicting downside risks but these gains are only rather small (reaching five percent in the case of the QR-GP regression with a Ridge prior).
### Heterogeneity of forecast accuracy over time
Up to this point, our analysis has focused on averages over time. In the next step we focus on how forecasting performance changes over the hold-out period. To shed light on the importance of nonlinearities over time, we again compare the different nonlinear specifications to the linear QR with a Ridge prior. Figs. 6 and 7 show the cumulative CRPSs relative to the linear benchmark QR for one-quarter- and four-quarters-ahead forecasts.
Figure 4: One-quarter-ahead quantile scores for different values of \(\mathfrak{q}\), averaged over the hold-out period and normalized to the QR with a Ridge prior.
We start by focusing on the one-quarter-ahead forecasts. For this horizon, density forecast accuracy is heterogeneous over time. In the first part of the sample, models using either polynomials or Gaussian processes coupled with a DL prior yield CRPSs that are superior to the linear benchmark. However, these accuracy gains vanish during the GFC. When we put more weight on tail forecasting accuracy (and consider GP-QRs), the gains disappear as early as the 2001 recession that followed the 9/11 terrorist attacks and the burst of the dot-com bubble.
In the pandemic, we observe a sharp increase in predictive accuracy for several priors (most notably the Ridge and NG priors). This pattern is more pronounced for the weighted variants of the CRPSs. Considering the other nonlinear model specifications gives rise to similar insights. Spline-based approximations to \(g_{\mathsf{q}}\) generally perform poorly up until the pandemic. During the pandemic, even this specification improves sharply against the linear benchmark specification. This pattern is particularly pronounced for the GP-QRs.
Considering the performance of the models and priors that did well on average (GP-QRs with Ridge and the HS) reveals that most of these gains are actually driven by superior performance during the pandemic.
Figure 5: Four-quarters-ahead quantile scores for different values of \(\mathsf{q}\), averaged over the hold-out period and normalized to the QR with a Ridge prior.
Figure 6: Cumulative one-quarter-ahead CRPS relative to the linear QR with the Ridge prior over the hold-out period.
Figure 7: Cumulative four-quarter-ahead CRPS relative to linear QR with a Ridge prior over the hold-out period.
Turning to four-quarters-ahead forecasts provides little new insight. Models using the DL prior do not excel in the first part of the hold-out period and are generally outperformed by the linear QR. However, accuracy improvements during the GFC and the pandemic are quite pronounced for splines and the GP-QRs.
To sum up this discussion, our results indicate that forecast performance is heterogeneous over time. Different models such as the Polynomial-QR and the GP-QR with a DL prior outperform in the early part of the hold-out period. This performance premium vanishes during the first two recessions observed in the sample. By contrast, other models such as the GP-QR with either the NG or the LASSO prior do not gain much in tranquil periods but excel during recessions.
### Properties and determinants of the quantile forecasts
The previous sub-sections have outlined that QRs and QRs with nonlinear components perform well in terms of tail forecasting. In this sub-section, our goal is to investigate which variables determine the quantile forecasts and in what respect successful shrinkage priors differ from their less successful counterparts.
The presence of nonlinearities complicates our investigation, since it is not clear how to measure the effect of \(\mathbf{x}_{t}\) on a given quantile of \(y_{t}\) when the model is nonlinear. As a simple solution, we follow Clark et al. (2022) and approximate the nonlinear, quantile-specific model using a linear posterior summary (see Woody et al., 2021). Specifically, we estimate the following regression model:
\[Q_{\mathfrak{q},t}=\mathbf{x}_{t}^{\prime}\hat{\mathbf{\alpha}}_{\mathfrak{q}}+\hat{ \varepsilon}_{t},\quad\hat{\varepsilon}_{t}\sim\mathcal{N}(0,\sigma_{\hat{ \varepsilon}_{\mathfrak{q}}}^{2}).\]
On the linearized coefficients we use a Horseshoe prior and on the error variances an inverse Gamma prior. To achieve interpretability and decouple shrinkage and selection (see Hahn and Carvalho, 2015), we then apply the SAVS estimator proposed in Ray and Bhattacharya (2018) to the posterior mean of \(\hat{\mathbf{\alpha}}_{\mathfrak{q}}\) (a code sketch follows below).8 This yields a sparse variant of \(\hat{\mathbf{\alpha}}_{\mathfrak{q}}\) that is easy to interpret and can be understood as the best linear approximation to the corresponding quantile forecast arising from the nonlinear model. For brevity, we focus on one-step-ahead forecasts (i.e., \(\mathbf{x}_{t}\) includes a single lag of all variables). Results for four-quarters-ahead forecasts are included in the Online Appendix.
Footnote 8: Huber et al. (2021) and Hauzenberger et al. (2021) apply SAVS to multivariate time series models and show that it works well for forecasting.
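As a sketch of the sparsification step, the commonly stated form of the SAVS rule soft-thresholds each coefficient with a penalty \(\mu_{j}=|\hat{\alpha}_{j}|^{-2}\); treat the exact thresholding formula below as our assumption rather than a verbatim restatement of the paper's implementation.

```
import numpy as np

def savs(alpha_hat, X, eps=1e-12):
    # Signal-adaptive variable selection: coefficient-wise soft-thresholding of
    # the posterior mean with penalty mu_j = |alpha_j|^(-2) (commonly stated form).
    norm2 = np.sum(X ** 2, axis=0)             # ||x_{., j}||^2 for each column
    a = np.where(np.abs(alpha_hat) < eps, eps, alpha_hat)
    mu = np.abs(a) ** -2
    return np.sign(a) / norm2 * np.maximum(np.abs(a) * norm2 - mu, 0.0)
```

Small coefficients face a huge penalty and are set exactly to zero, while large coefficients are barely shrunk, which is what makes the resulting summary sparse and interpretable.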
Figure 8 shows the results of this exercise across nonlinear specifications and priors. Starting with the left-tail forecasts from the linear models, most quantile forecasts are not related to elements in \(\mathbf{x}_{t}\) in a robust manner. There are only two exceptions. The first relates to the NG prior. In this case, real money growth (M1real) survives the sparsification step and the relationship indicates that declines in money growth imply an increase in tail risks (i.e., a decline in GDP growth in the ten percent quantile). The other exception relates to the DL prior. In this case, employment growth in education and health services (USEHS) remains. If we focus on nonlinear models, other variables appear to be correlated with forecasts of tail risks. Among the different priors, we find some variables which show up repeatedly. Among these are all nonfarm employees (PAYEMS), money growth, capacity utilization in manufacturing (CUMFNS) and private fixed investment (both residential and non-residential). Most of these variables are forward looking in nature and thus consistent with our intuition that economic agents form expectations about the state of the economy in the future and change their investment decisions accordingly. Notice that the relationship with private fixed investment is particularly pronounced for GP-QRs under the HS, NG and DL priors. Another pattern worth mentioning is that the LASSO-based forecasts are generated from sparse models across both linear and nonlinear specifications.
Once we focus on the center of the distribution we find that forecasts from linear models
Figure 8: One-quarter-ahead linearized posterior summaries across quantiles
are driven by one or two variables. Most prominently, specifications that do well in terms of point forecasts (such as the Ridge and LASSO) yield point forecasts that display a strong relationship with (lagged) money growth. If we adopt a nonlinear specification, some differences arise across specifications. For polynomials, median forecasts under all priors except the DL are related to very few predictors, with money growth and short-run unemployment showing up for the NG and LASSO models. The DL prior implies a more dense model, which could be a possible reason for the rather weak performance of this specification. When we turn to spline-based models we again find a similar pattern. Money growth shows up in the case of the NG and LASSO, and short-run unemployment predicts median output growth if we adopt a HS, Ridge or NG prior. Models that capture nonlinearities through GPs, our best performing nonlinear specifications, give rise to a very consistent pattern across priors: in all cases, lagged money growth appears to be a robust predictor of GDP growth, and it impacts GDP growth forecasts negatively.
Finally, when our focus is on right-tail forecasts, all models become much more dense. Variables that showed up in the case of left-tail and point forecasts appear again (most notably money growth and short-term unemployment). Additional variables such as initial unemployment claims or prices remain in the sparse model as well, but there is no clear pattern across models except that money growth remains in the set of robust predictors even when much shrinkage is introduced.
The analysis based on linearized coefficients provides information on which variables are predictive for output growth forecasts across quantiles. However, the analysis in Sub-sections 3.2 and 3.3 suggests that differences in forecast performance are driven by the prior. To understand which properties of a given prior exert a positive effect on predictive accuracy, we now focus on the shrinkage hyperparameters of the different priors. Comparing the amount of shrinkage introduced through the different priors is not straightforward. Our measure of choice is the re-scaled log determinant of the prior covariance matrix. Since all prior covariance matrices are diagonal, this simply amounts to summing the logs of the diagonal elements of \(\mathbf{B}_{0\mathsf{q}}\) and then normalizing by the number of diagonal elements. This constitutes a rough measure of overall shrinkage and we can compute it for each quarter in the hold-out period. Again, we focus on shrinkage introduced in one-quarter-ahead predictive regressions; the four-quarters-ahead results are qualitatively similar and included in the Online Appendix.
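In code, this measure is simply the average log prior variance (a sketch with our naming):

```
import numpy as np

def overall_shrinkage(lam, psi):
    # Re-scaled log determinant of B = lam * diag(psi_1, ..., psi_n):
    # (1/n) * log det(B) = mean of log(lam * psi_j); lower values = more shrinkage.
    return np.mean(np.log(lam * np.asarray(psi)))
```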
Log-determinants of the prior covariance matrices over the hold-out period are depicted in Fig. 9. The figure includes (if applicable) solid lines which refer to the amount of shrinkage introduced on the linear coefficients and dashed lines which refer to the log-determinants of the prior covariances that relate to the shrinkage factors on the basis coefficients of the different nonlinear models.
From this figure, a few interesting insights emerge. First, the different priors introduce different degrees of shrinkage. Overall, two priors stand out in terms of the amount of shrinkage they introduce. The first one is the DL. This is rather surprising given that this prior performs worst in the forecasting horse race but also leads to posterior summaries which feature several non-zero coefficients. Our conjecture is that this prior forces the vast majority of coefficients to effectively zero, yet several coefficients remain sizable, and the corresponding set of variables is still large enough for overfitting issues to arise. The prior that introduces the largest amount of shrinkage is the LASSO. In this case, almost all coefficients are very small. These observations are corroborated by boxplots, included in the Online Appendix (see Figs. 3 to 6 there), which show the scaling parameters over three sub-samples. Our results imply that models which feature a large number of shrunk coefficients provide better forecasts than models which feature many coefficients that are effectively zero and some coefficients that are non-zero and sizable. This is consistent with findings in Giannone et al. (2021) who provide
Figure 9: Overall shrinkage in one-step-ahead predictive QRs
empirical evidence that macroeconomic data is rather dense as opposed to sparse. Notice that the fact that dense models produce accurate tail forecasts is not inconsistent with our analysis based on linearized posterior summaries. This is because the linearized model under a shrinkage and sparsification approach strikes a balance between achieving a good model fit while keeping the model as simple as possible. Hence, if the covariates in the panel co-move, shrinkage and sparsification techniques will select one of these variables.
Second, in almost all cases the amount of shrinkage introduced on the nonlinear part of the different models is much larger than the degree of shrinkage on linear coefficients. This holds for most priors, nonlinear methods and over all time periods. One exception is the Spline-QR specification with a DL prior and when the right tail is considered. Interestingly, this specific combination of much stronger shrinkage on the linear part of the model and less shrinkage on the nonlinear part leads to good forecasts in the right tail (see Fig. 4).
Third, and finally, there is (with some notable exceptions) relatively little time-variation in the amount of shrinkage over the hold-out period. The only exception concerns the GP-QRs: in this case, the amount of shrinkage decreases appreciably from 2013 onward.
## 4 Concluding remarks
In this paper, we have shown that combining QRs with nonlinear specifications and large datasets leads to precise quantile forecasts of GDP growth. Since the resulting models are high dimensional, we consider several popular shrinkage priors to regularize estimates. MCMC-based estimation of these huge dimensional models is slow. Hence, we speed up computation by using VB approximation methods that approximate the joint posterior distribution using simpler approximating densities.
The empirical results indicate that our methods work remarkably well when the CRPS is taken into consideration. When we put more weight on tail forecasting performance, we find that most of the overall gains are driven by a strong performance in both the left and right tail, while the performance in the center of the distribution is close to the predictive accuracy of the simple quantile regression proposed in Adrian et al. (2019). These results, however, differ across priors and nonlinear specifications. In general, models featuring simple shrinkage priors, such as the LASSO or Ridge, in combination with GPs to capture nonlinearities of arbitrary form yield the most precise forecasts.
2309.07053 | Pearl's and Jeffrey's Update as Modes of Learning in Probabilistic
Programming | The concept of updating a probability distribution in the light of new
evidence lies at the heart of statistics and machine learning. Pearl's and
Jeffrey's rule are two natural update mechanisms which lead to different
outcomes, yet the similarities and differences remain mysterious. This paper
clarifies their relationship in several ways: via separate descriptions of the
two update mechanisms in terms of probabilistic programs and sampling
semantics, and via different notions of likelihood (for Pearl and for Jeffrey).
Moreover, it is shown that Jeffrey's update rule arises via variational
inference. In terms of categorical probability theory, this amounts to an
analysis of the situation in terms of the behaviour of the multiset functor,
extended to the Kleisli category of the distribution monad. | Bart Jacobs, Dario Stein | 2023-09-13T16:09:13Z | http://arxiv.org/abs/2309.07053v2 | # Pearl's and Jeffrey's Update as
###### Abstract
The concept of updating a probability distribution in the light of new evidence lies at the heart of statistics and machine learning. Pearl's and Jeffrey's rule are two natural update mechanisms which lead to different outcomes, yet the similarities and differences remain mysterious. This paper clarifies their relationship in several ways: via separate descriptions of the two update mechanisms in terms of probabilistic programs and sampling semantics, and via different notions of likelihood (for Pearl and for Jeffrey). Moreover, it is shown that Jeffrey's update rule arises via variational inference. In terms of categorical probability theory, this amounts to an analysis of the situation in terms of the behaviour of the multiset functor, extended to the Kleisli category of the distribution monad.
probabilistic reasoning, probabilistic programming, category theory, machine learning, statistical inference, variational inference, denotational semantics, Pearl, Jeffrey.
## 1 Introduction
Suppose you test for a certain disease, say Covid. You take three consecutive tests, because you wish to be sure - two of them come out positive but one is negative. How do you compute the subsequent (posterior) probability that you actually have the disease? In a medical setting one starts from a prevalence, that is, an _a priori_ disease probability, which is assumed to hold for the whole population. Medical tests are typically not perfect: one has to take their sensitivity and specificity into account. These give, respectively, the probability that the test is positive if someone has the disease, and the probability that the test is negative if someone does not have the disease.
When all these probabilities (prevalence, sensitivity, specificity) are known, one can apply Bayes' rule and obtain the posterior probability after a single test. But what if we do three tests? And what if we do a thousand tests?
It turns out that things become fuzzy when tests are repeated multiple times. One can distinguish two approaches, associated with Pearl and Jeffrey. They agree on single tests. But they may disagree wildly on multiple tests, see the example in Section 2 below. This is disconcerting, certainly in the current age of machine learning, in which so many decisions are based on statistical learning and decision making.
Earlier work (of one of the authors) [6, 8] analysed the approaches of Pearl and Jeffrey. The difference there was formulated in terms of learning from 'what is right' and from 'what is wrong'. As will be
recalled below, Pearl's update rule involves increasing validity (expected value), whereas Jeffrey's rule involves decreasing (Kullback-Leibler) divergence. The contributions of this paper are threefold.
* It adds the perspective of probabilistic programming. Pearl's and Jeffrey's approaches to updating are formulated, for the medical test example, in a standard probabilistic programming language, namely WebPPL [5, 6], see Section 2. Pearl's update is straightforwardly expressible using built-in conditioning constructs, while Jeffrey's update involves nested inference, a simple form of reasoning about reasoning [14]. We further explore, operationally, the different dynamics behind the two update techniques using rejection samplers in Section 6.
* The paper also offers a new perspective on the Pearl/Jeffrey distinction in terms of different underlying generative models and their associated likelihoods: with Pearl's update rule one increases one form of 'Pearl' likelihood, whereas with Jeffrey's update rule one increases another form of 'Jeffrey' likelihood. These two likelihoods are described in terms of different forms of evaluating data (as a multiset of data points) with respect to a multinomial distribution. These two forms of likelihood are directly related to the respective update mechanisms, see Section 7. Pearl likelihood occurs in practice, for example as the basis of the multinomial naive Bayes classifier [13], while Jeffrey likelihood -- and its difference to Pearl's -- is new, as far as we know.
* Pearl's likelihood directly leads to the associated update rule, see Theorem 7.3. For Jeffrey's likelihood the connection is more subtle and involves variational inference [11, 12]: it is shown that Jeffrey's update is least divergent from the update rule for Jeffrey likelihood, in a suitable sense, see Theorem 8.5. This likelihood update rule is described categorically in terms of the extension of the multiset functor to the Kleisli category of the (discrete) distribution monad, see [4, 8]. This analysis clarifies the mathematical situation, for instance in Equation 11, where it is shown that this extended multiset functor commutes with the 'dagger' reversal of channels. This is a new result, with a certain esthetic value.
This paper develops the idea that Pearl's and Jeffrey's rule involve a difference in perspective: are we trying to learn something about an individual or about a population?
## 2 A Motivating Example
Consider some disease with an _a priori_ probability (or 'prevalence') of 5%. There is a test for the disease with the following characteristics:
* ('sensitivity') If someone has the disease, then the test is positive with a probability of 90%.
* ('specificity') If someone does not have the disease, there is a 95% chance that the test is negative.
We are told that someone takes three consecutive tests and sees two positive and one negative outcome. These test outcomes are our observed data that we wish to learn from.
The question is: what is the posterior probability that this person has the disease, in the light of this test data? You may wish to stop reading here and calculate this probability yourself. Outcomes, using Pearl's and Jeffrey's rule, will be provided in Examples 4.3 and 5.2 below.
Below we present several possible implementations of the medical test situation in the probabilistic programming language WebPPL [5, 6], giving three different solutions to the above question. The code starts by defining a function test which models the test outcome, incorporating the above sensitivity and specificity. Here, flip(p) tosses a biased coin with bias p.
```
var test = function(dis) {
  return dis ? (flip(0.9) ? 'pos' : 'neg') : (flip(0.95) ? 'neg' : 'pos');
}
```
We then define three inference functions which we simply label as prog1, prog2, prog3. At this stage we do not wish to connect them to Pearl/Jeffrey. We invite the reader to form a judgement about what is
the 'right' way to model the above situation with three test outcomes ('pos', 'pos', 'neg').
```
var prog1 = function() {
  var dis = flip(0.05);
  condition(test(dis) == 'pos');
  condition(test(dis) == 'pos');
  condition(test(dis) == 'neg');
  return dis;
}

var prog2 = function() {
  var target = uniformDraw(['pos', 'pos', 'neg']);
  var dis = flip(0.05);
  condition(test(dis) == target);
  return dis;
}

var prog3 = function() {
  var target = uniformDraw(['pos', 'pos', 'neg']);
  return sample(Infer(function() {
    var dis = flip(0.05);
    condition(test(dis) == target);
    return dis;
  }));
}
```
All functions make use of the condition command to instruct WebPPL to compute a conditional probability distribution. prog1 uses three successive conditions, while the other two use a single condition on a randomly chosen target. prog3 additionally makes use of _nested inference_, that is, it wraps the Infer function around part of its code. Nested inference is a form of reasoning about reasoning [13] and has been applied for example to the study of social cognition, linguistics and theory of mind [5, Ch. 6]. We give a short overview of WebPPL's semantics and usage in Section 10. All programs can be run using exhaustive enumeration or rejection sampling as inference algorithms, which we elaborate further in Section 4.
The three functions can be executed in WebPPL and the posteriors visualized using the command viz(Infer(prog1)). The posterior disease probabilities of each of the programs are respectively:
* prog1: 64%
* prog2: 9%
* prog3: 33%
The same probabilities appear in the mathematical analysis in Examples 4.3 and 5.2 below.
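Because the state space is tiny, these three posteriors can also be computed exactly. The following sketch (in Python rather than WebPPL, with names of our choosing) enumerates them:

```
# prior (prevalence), sensitivity, specificity
pr, sens, spec = 0.05, 0.90, 0.95

def post(t):
    # single-test posterior P(dis | test = t) via Bayes' rule
    like_d = sens if t == 'pos' else 1 - sens
    like_h = 1 - spec if t == 'pos' else spec
    return pr * like_d / (pr * like_d + (1 - pr) * like_h)

data = ['pos', 'pos', 'neg']

# prog1: condition on all three outcomes
num = pr * sens ** 2 * (1 - sens)
den = num + (1 - pr) * (1 - spec) ** 2 * spec
print(round(num / den, 2))                                   # 0.64

# prog2: a single condition on a uniformly drawn target
num = sum(pr * (sens if t == 'pos' else 1 - sens) for t in data)
den = num + sum((1 - pr) * (1 - spec if t == 'pos' else spec) for t in data)
print(round(num / den, 2))                                   # 0.09

# prog3: mixture of the single-test posteriors
print(round(sum(post(t) for t in data) / len(data), 2))      # 0.33
```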
An interesting question to ask is: suppose we do not have 3 tests (2 positive, 1 negative), but 3000 tests (2000 positive, 1000 negative). Does that change the outcome of the above computations? Not so for the second and third program, which only require a statistical sample of the data. The first program however, quickly converges to 100% disease probability when the number of tests increases (still assuming the same ratio of 2 positive and 1 negative). But this first program becomes increasingly difficult to compute, because each test result emits further conditioning instructions that the inference engine needs to take into account. The two other programs on the other hand scale almost trivially. We return to this scaling issue at the end of Section 7.
The three implementations will be reiterated throughout the paper and related to Pearl's and Jeffrey's update, in particular in Section 6, where we also make their semantics explicit using rejection samplers.
## 3 Multisets, Distributions, and Channels
Sections 3 - 5 introduce the mathematics underlying the update situations that we are looking at. This material is in essence a recap from [7, 9]. We write \(\mathcal{M}\) and \(\mathcal{D}\) for the multiset and distribution monads on the category \(\mathbf{Sets}\) of sets and functions. For a set \(X\), multisets \(\varphi\in\mathcal{M}(X)\) can equivalently be written as a function \(\varphi\colon X\to\mathbb{N}\) with finite support, or as a finite formal sum \(\sum_{i}n_{i}|\,x_{i}\rangle\), where \(n_{i}\in\mathbb{N}\) is the multiplicity of element \(x_{i}\in X\). Similarly, a distribution \(\omega\in\mathcal{D}(X)\) is written either as a function \(\omega\colon X\to[0,1]\) with finite support and \(\sum_{x}\omega(x)=1\), or as a finite formal convex combination \(\sum_{i}r_{i}|\,x_{i}\rangle\) with \(r_{i}\in[0,1]\) satisfying \(\sum_{i}r_{i}=1\).
Functoriality of \(\mathcal{M}\) (and \(\mathcal{D}\)) works in the following manner. For a function \(f\colon X\to Y\) we have \(\mathcal{M}(f)\colon\mathcal{M}(X)\to\mathcal{M}(Y)\), given as \(\mathcal{M}(f)(\varphi)(y)=\sum_{x\in f^{-1}(y)}\varphi(x)\).
For a multiset \(\varphi\in\mathcal{M}(X)\) we write \(\|\varphi\|\in\mathbb{N}\) for its size, defined as sum of its multiplicities: \(\|\varphi\|\coloneqq\sum_{x}\varphi(x)\). When this size is not zero, we can define an associated distribution \(\text{\it flrn}(\varphi)\in\mathcal{D}(X)\), via frequentist learning (normalisation), as:
\[\text{\it flrn}(\varphi)\coloneqq\sum_{x\in X}\frac{\varphi(x)}{\|\varphi\|} \,\big{|}\,x\,\big{\rangle}.\]
For \(K\in\mathbb{N}\) we write \(\mathcal{M}[K](X)=\{\varphi\in\mathcal{M}(X)\mid\|\varphi\|=K\}\) for the set of multiset of size \(K\). There is an accumulation function \(\text{\it acc}\colon X^{K}\to\mathcal{M}[K](X)\), given by \(\text{\it acc}(x_{1},\ldots,x_{K})=1|\,x_{1}\rangle+\cdots+1|\,x_{K}\rangle\). For instance \(\text{\it acc}(a,b,a,c,a,b)=3|\,a\rangle+2|\,b\rangle+1|\,c\rangle\), using \(X=\{a,b,c\}\) and \(K=6\).
For two distributions \(\omega\in\mathcal{D}(X)\), \(\rho\in\mathcal{D}(Y)\) one can form the (parallel) product distribution \(\omega\otimes\rho\in\mathcal{D}(X\times Y)\), with \(\big{(}\omega\otimes\rho\big{)}(x,y)=\omega(x)\cdot\rho(y)\). We often use the \(K\)-fold product \(\omega^{K}=\omega\otimes\cdots\otimes\omega\in\mathcal{D}(X^{K})\).
A distribution \(\omega\in\mathcal{D}(X)\) may be seen as an urn with coloured balls, where \(X\) is the set of colours. The number \(\omega(x)\in[0,1]\) is the probability of drawing a ball of colour \(x\). We are interested in \(K\)-sized draws, formalised as multiset \(\varphi\in\mathcal{M}[K](X)\). The multinomial distribution \(\text{\it mn}[K](\omega)\in\mathcal{D}\big{(}\mathcal{M}[K](X)\big{)}\) assigns probabilities to such draws:
\[\text{\it mn}[K](\omega)\,\coloneqq\,\mathcal{D}(\text{\it acc})\big(\omega^{K}\big)\,=\,\sum_{\varphi\in\mathcal{M}[K](X)}\,(\varphi)\cdot\prod_{x\in X}\omega(x)^{\varphi(x)}\,\big|\,\varphi\,\big\rangle\quad\text{ where }\quad(\varphi)\coloneqq\frac{\|\varphi\|!}{\prod_{x}\varphi(x)!}. \tag{1}\]
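Formula (1) is a finite sum and can be evaluated by enumerating multisets of size \(K\); a small sketch, with an illustrative three-element distribution:

```python
import math
from itertools import combinations_with_replacement
from collections import Counter

def coeff(phi):
    # Multinomial coefficient (phi) = ||phi||! / prod_x phi(x)!
    size = sum(phi.values())
    out = math.factorial(size)
    for n in phi.values():
        out //= math.factorial(n)
    return out

def mn(K, omega):
    """The multinomial distribution mn[K](omega) over multisets of size K."""
    dist = {}
    for draw in combinations_with_replacement(sorted(omega), K):
        phi = Counter(draw)
        prob = coeff(phi)
        for x, n in phi.items():
            prob *= omega[x] ** n
        dist[tuple(sorted(phi.items()))] = prob
    return dist

omega = {'a': 0.2, 'b': 0.3, 'c': 0.5}
dist = mn(3, omega)
print(sum(dist.values()))   # 1.0: probabilities over all size-3 multisets
```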
A Kleisli map \(c\colon X\to\mathcal{D}(Y)\) for the distribution monad \(\mathcal{D}\) is often called a _channel_, and written as \(c\colon X\nrightarrow Y\). For instance, the above accumulation map \(\text{\it acc}\colon X^{K}\to\mathcal{M}[K](X)\) has a probabilistic inverse \(\text{\it arr}\colon\mathcal{M}[K](X)\to\mathcal{D}(X^{K})\), where \(\text{\it arr}\) stands for arrangement, see [8] for details. This arrangement is defined as:
\[\text{\it arr}(\varphi)\coloneqq\sum_{\boldsymbol{x}\in\text{\it acc}^{-1}(\varphi)}\frac{1}{(\varphi)}\,\big|\,\boldsymbol{x}\,\big\rangle\qquad\text{with }(\varphi)\text{ as defined in (1)}. \tag{2}\]
Kleisli extension gives a pushforward operation along a channel: a distribution \(\omega\in\mathcal{D}(X)\) can be turned into a distribution \(c \mathbin{\gg=} \omega\in\mathcal{D}(Y)\) via the formula:

\[c \mathbin{\gg=} \omega\,\coloneqq\,\sum_{y\in Y}\left(\sum_{x\in X}\omega(x)\cdot c(x)(y)\right)\big|\,y\,\big\rangle.\]

This new distribution \(c \mathbin{\gg=} \omega\) is often called the prediction. One can prove: \(\text{\it flrn} \mathbin{\gg=} \text{\it mn}[K](\omega)=\omega\) and \(\text{\it arr} \mathbin{\gg=} \text{\it mn}[K](\omega)=\omega^{K}\), see [8].
The following program shows how to sample from a prediction \(c \mathbin{\gg=} \omega\):
\[\begin{array}{l}\hline\hline\texttt{x}\leftarrow\omega\\ \texttt{y}\leftarrow\textit{c}(\texttt{x})\\ \texttt{samples.add(y)}\end{array} \tag{3}\]
It shows that such sampling can be done in two steps: first sample an element \(\texttt{x}\) from \(\omega\), then sample \(\texttt{y}\) from the distribution \(c(\texttt{x})\). The notation \(x\leftarrow\omega\) is used for sampling a random element \(x\in X\) from a distribution \(\omega\in\mathcal{D}(X)\), where the randomness takes the probabilities in \(\omega\) into account. This is a standard construct in probabilistic programming. If multiple samples \(x_{i}\leftarrow\omega\) are taken and accumulated in a multiset \(\varphi\in\mathcal{M}(X)\), then the normalisation \(\text{\it flrn}(\varphi)\) of \(\varphi\) approaches the original distribution \(\omega\).
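Both the exact prediction and its two-step sampler fit in a few lines of Python; in the following sketch (which uses the disease/test numbers of the running example) the empirical frequencies should approach the exact pushforward, up to sampling noise.

```python
import random
from collections import Counter

def push(c, omega):
    """Exact pushforward c >>= omega of a distribution along a channel."""
    out = {}
    for x, px in omega.items():
        for y, py in c[x].items():
            out[y] = out.get(y, 0.0) + px * py
    return out

def sample(dist):
    """Draw one element, respecting the probabilities in dist."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

omega = {'d': 0.05, 'd_not': 0.95}
c = {'d': {'pos': 0.9, 'neg': 0.1}, 'd_not': {'pos': 0.05, 'neg': 0.95}}

freq = Counter(sample(c[sample(omega)]) for _ in range(100_000))
print(push(c, omega))                             # {'pos': 0.0925, 'neg': 0.9075}
print({y: n / 100_000 for y, n in freq.items()})  # close to the exact prediction
```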
Lastly, the tensor product \(\otimes\) extends pointwise to channels: \((c\otimes d)(x,y)=c(x)\otimes d(y)\). Then one can prove, for instance, \((c\otimes d) \mathbin{\gg=} (\omega\otimes\rho)=(c \mathbin{\gg=} \omega)\otimes(d \mathbin{\gg=} \rho)\).
## 4 Validity, Conditioning, and Pearl's Update Rule
A (fuzzy) predicate on a set \(X\) is a function \(p\colon X\to[0,1]\). Each element \(x\in X\) gives rise to a _point predicate_\(\mathbf{1}_{x}\colon X\to[0,1]\), with \(\mathbf{1}_{x}(y)=1\) if \(x=y\) and \(\mathbf{1}_{x}(y)=0\) if \(x\neq y\). For two predicates \(p_{1},p_{2}\colon X\to[0,1]\) we can form a conjunction \(p_{1}\mathbin{\&}p_{2}\colon X\to[0,1]\) via pointwise multiplication: \((p_{1}\mathbin{\&}p_{2})(x)=p_{1}(x)\cdot p_{2}(x)\).
The validity (or expected value) of a predicate \(p\colon X\to[0,1]\) in a distribution \(\omega\in\mathcal{D}(X)\) is written as \(\omega\models p\) and defined as:
\[\omega\models p\;\coloneqq\;\sum_{x\in X}\omega(x)\cdot p(x).\]
When this validity is non-zero we can define the updated distribution \(\omega|_{p}\in\mathcal{D}(X)\) as:
\[\omega|_{p}\;\coloneqq\;\sum_{x\in X}\frac{\omega(x)\cdot p(x)}{\omega\models p}\,\big|\,x\,\big\rangle. \tag{4}\]
For a channel \(c\colon X\nrightarrow Y\) and a predicate \(q\colon Y\to[0,1]\) on its codomain, we can define a pullback predicate \(c\ll q\) on \(X\) via the formula:

\[\big(c\ll q\big)(x)\;\coloneqq\;\sum_{y\in Y}c(x)(y)\cdot q(y).\]
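Validity, updating (4), and predicate transformation admit equally direct renderings; the sketch below also checks the validity increase of Lemma 4.1 (iii), stated next, on an illustrative distribution and predicate.

```python
def validity(omega, p):
    """The validity omega |= p."""
    return sum(omega[x] * p[x] for x in omega)

def update(omega, p):
    """The updated distribution omega|_p from (4); assumes non-zero validity."""
    v = validity(omega, p)
    return {x: omega[x] * p[x] / v for x in omega}

def pull(c, q):
    """The pulled-back predicate c << q on the domain of the channel c."""
    return {x: sum(c[x][y] * q[y] for y in c[x]) for x in c}

omega = {'d': 0.05, 'd_not': 0.95}
p = {'d': 0.9, 'd_not': 0.05}
assert validity(update(omega, p), p) >= validity(omega, p)  # Lemma 4.1 (iii)
```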
The following result contains the basic facts that we need here. Proofs can be found for instance in [6, 8].
**Lemma 4.1**: _For a channel \(c\colon X\nrightarrow Y\), a distribution \(\omega\in\mathcal{D}(X)\), predicates \(p,p_{1},p_{2}\) on \(X\) and \(q\) on \(Y\),_

1. \(c \mathbin{\gg=} \omega\models q\,=\,\omega\models c\ll q\)_;_
2. \(\omega|_{p_{1}}|_{p_{2}}=\omega|_{p_{1}\&p_{2}}\)_;_
3. \(\omega|_{p}\models p\,\geq\,\omega\models p\)_._
The last result shows that a predicate \(p\) is 'more true' in an updated distribution \(\omega|_{p}\) than in the original \(\omega\). The next result from [6, 8] contains both the formulation of Pearl's update and the associated validity increase.
**Theorem 4.2**: _Let \(c\colon X\nrightarrow Y\) be a channel with a prior distribution \(\omega\in\mathcal{D}(X)\) on its domain and a predicate \(q\colon Y\to[0,1]\) on its codomain. The posterior distribution \(\omega_{P}\in\mathcal{D}(X)\) of \(\omega\), via Pearl's update rule, with the evidence predicate \(q\), is defined as:_

\[\omega_{P}\,\coloneqq\,\omega|_{c\ll q}\qquad\text{and satisfies}\qquad c \mathbin{\gg=} \omega_{P}\models q\,\geq\,c \mathbin{\gg=} \omega\models q.\qed\]
The proof follows from an easy combination of points (i) and (iii) of Lemma 4.1. The increase in validity that is achieved via Pearl's rule means that the validity of the predicate \(q\) is higher in the prediction obtained from the posterior distribution \(\omega_{P}\) than in the prediction obtained from the original, prior distribution \(\omega\).
The following are two rejection samplers that allow sampling from a posterior distribution: on the left below we show how to obtain an updated distribution \(\omega|_{p}\) via sampling, and on the right how to get a Pearl update \(\omega|_{c\ll q}\).
\[\begin{array}{l|l}\hline\hline
\texttt{x}\leftarrow\omega & \texttt{x}\leftarrow\omega\\
\texttt{y}\leftarrow\texttt{flip}(p(\texttt{x})) & \texttt{y}\leftarrow c(\texttt{x})\\
\texttt{if y == 1:} & \texttt{z}\leftarrow\texttt{flip}(q(\texttt{y}))\\
\quad\texttt{samples.add(x)} & \texttt{if z == 1:}\\
 & \quad\texttt{samples.add(x)}\\\hline
\end{array} \tag{5}\]
The probabilistic program prog1 at the end of Section 2 computes the Pearl update. How this update works in detail will be described next.
**Example 4.3**: We are now in a position to explain the 64% posterior disease probability claimed in Section 2. It is obtained via repeated Pearl updates. We first translate the information given there into mathematical structure.
We use \(X=\{d,d^{\perp}\}\) for the set with elements \(d\) for disease and \(d^{\perp}\) for no-disease. The given prevalence of 5% for the disease corresponds to a prior distribution \(\omega\in\mathcal{D}(X)\) given by \(\omega=\frac{1}{20}|\,d\,\rangle+\frac{19}{20}|\,d^{\perp}\,\rangle\).
The test is formalised as a channel \(c\colon X\to\mathcal{D}(Y)\) where \(Y=\{p,n\}\) the set of positive and negative test outcomes. The sensitivity and specificity of the test translate into, respectively:
\[c(d)\,\coloneqq\,\tfrac{9}{10}|\,p\,\rangle+\tfrac{1}{10}|\,n\,\rangle\qquad \text{ and }\qquad c(d^{\perp})\,\coloneqq\,\tfrac{1}{20}|\,p\,\rangle+\tfrac{19}{20}|\,n\,\rangle.\]
There are two obvious point predicates \(\mathbf{1}_{p}\colon Y\to[0,1]\) and \(\mathbf{1}_{n}\colon Y\to[0,1]\) on the set \(Y=\{p,n\}\) of test outcomes. We are told that there are two positive and one negative test. This translates into the conjunction \((c\ll\mathbf{1}_{p})\,\&\,(c\ll\mathbf{1}_{p})\,\&\,(c\ll\mathbf{1}_{n})\). Since conjunction is commutative, the order does not matter. Updating with this conjunction is equivalent to three successive updates, see Lemma 4.1 (ii), and gives the claimed outcome:
\[\omega_{P}\,=\,\omega\big{|}_{(c\ll\mathbf{1}_{p})\&(c\ll\mathbf{1 }_{p})\&(c\ll\mathbf{1}_{n})}\,=\,\omega\big{|}_{c\ll\mathbf{1}_{p}}\big{|}_ {c\ll\mathbf{1}_{p}}\big{|}_{c\ll\mathbf{1}_{n}} =\,\tfrac{648}{1009}\big{|}\,d\,\rangle+\tfrac{361}{1009}\big{|} \,d^{\perp}\,\rangle\] \[\approx\,0.642\big{|}\,d\,\rangle+0.358\big{|}\,d^{\perp}\,\rangle.\]
This is the probability computed in prog1 in Section 2.
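A quick numerical check of this example, assuming the `update` and `pull` helpers sketched after (4):

```python
omega = {'d': 1/20, 'd_not': 19/20}
c = {'d': {'pos': 9/10, 'neg': 1/10}, 'd_not': {'pos': 1/20, 'neg': 19/20}}
one_pos = {'pos': 1.0, 'neg': 0.0}   # the point predicate 1_p
one_neg = {'pos': 0.0, 'neg': 1.0}   # the point predicate 1_n

omega_P = omega
for ev in (one_pos, one_pos, one_neg):   # two positive tests, one negative
    omega_P = update(omega_P, pull(c, ev))

print(omega_P['d'])   # 648/1009 ~ 0.642
```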
The validity increase associated with Pearl's update rule takes the following form.
\[\omega_{P}\models(c\ll\mathbf{1}_{p})^{2}\,\&\,(c\ll\mathbf{1}_{n})\,\approx\,0.049\,\geq\,0.0096\,\approx\,\omega\models(c\ll\mathbf{1}_{p})^{2}\,\&\,(c\ll\mathbf{1}_{n}).\]
## 5 Dagger Channels and Jeffrey's Update Rule
First we recall that the difference (divergence) between two distributions \(\omega,\rho\in\mathcal{D}(X)\) is commonly expressed as _Kullback-Leibler divergence_, defined as:
\[D_{\text{KL}}\big{(}\omega,\,\rho\big{)}\,\coloneqq\,\sum_{x\in X}\omega(x) \cdot\ln\left(\frac{\omega(x)}{\rho(x)}\right),\qquad\text{where ln is the natural logarithm}. \tag{6}\]
The main ingredient that we need for Jeffrey's rule is the dagger of a channel \(c\colon X\nrightarrow Y\) with respect to a prior distribution \(\omega\in\mathcal{D}(X)\). This dagger is a channel \(c^{\dagger}_{\omega}\colon Y\nrightarrow X\) in the opposite direction. It is also called Bayesian inversion, see [2, 1], and it is defined on \(y\in Y\) as:

\[c^{\dagger}_{\omega}(y)\coloneqq\omega|_{c\ll\mathbf{1}_{y}}\stackrel{{(4)}}{{=}}\sum_{x\in X}\,\frac{\omega(x)\cdot c(x)(y)}{(c \mathbin{\gg=} \omega)(y)}\,\big|\,x\,\big\rangle. \tag{7}\]
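The dagger is one update per evidence point; a sketch, again assuming the `update` and `pull` helpers and the disease model \(c\), \(\omega\) from Example 4.3:

```python
def dagger(c, omega):
    """Bayesian inversion: the channel c-dagger_omega of equation (7)."""
    ys = {y for x in c for y in c[x]}
    return {y0: update(omega, pull(c, {y: 1.0 if y == y0 else 0.0 for y in ys}))
            for y0 in ys}

f = dagger(c, omega)
print(f['pos']['d'])   # 18/37 ~ 0.486, as in Example 5.2 below
```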
We again combine Jeffrey's rule with its main divergence reduction property, from [8]. The set-up is very much as for Pearl's rule in Theorem 4.2, but with evidence now in the form of a distribution instead of a predicate.
**Theorem 5.1**: _Let \(c\colon X\nrightarrow Y\) be a channel with a prior distribution \(\omega\in\mathcal{D}(X)\) and an evidence distribution \(\tau\in\mathcal{D}(Y)\). The posterior distribution \(\omega_{J}\in\mathcal{D}(X)\) of \(\omega\), obtained via Jeffrey's update rule, with the evidence distribution \(\tau\), is defined as:_

\[\omega_{J}\coloneqq c^{\dagger}_{\omega} \mathbin{\gg=} \tau\qquad\text{and satisfies}\qquad D_{\text{KL}}\big(\tau,\,c \mathbin{\gg=} \omega_{J}\big)\,\leq\,D_{\text{KL}}\big(\tau,\,c \mathbin{\gg=} \omega\big).\qed\]
The proof of this divergence decrease is remarkably hard, see [8] for details. The result says that the prediction from \(\omega_{J}\) is less wrong than from \(\omega\), when compared to the 'target' distribution \(\tau\).
**Example 5.2**: We build on the test channel \(c\colon X\nrightarrow Y\) and prevalence distribution \(\omega\in\mathcal{D}(X)\) from Example 4.3. The first task is to compute the dagger channel \(f\coloneqq c^{\dagger}_{\omega}\colon Y\nrightarrow X\). It yields:
\[f(p)=\tfrac{18}{37}\big{|}\,d\,\big{\rangle}+\tfrac{19}{37}\big{|}\,d^{\perp} \,\big{\rangle}\qquad\text{ and }\qquad f(n)=\tfrac{2}{363}\big{|}\,d\,\big{\rangle}+\tfrac{361}{363}\big{|} \,d^{\perp}\,\big{\rangle}.\]
The fact that there are two positive and one negative test translates into the 'empirical' evidence distribution \(\tau=\tfrac{2}{3}\big{|}\,p\,\big{\rangle}+\tfrac{1}{3}\big{|}\,n\,\big{\rangle} \in\mathcal{D}(Y)\). The posterior, updated disease distribution, obtained from this evidence, gives the \(33\%\) probability mentioned in Section 2:
\[\omega_{J}=f \mathbin{\gg=} \tau=\tfrac{13142}{40293}\big|\,d\,\big\rangle+\tfrac{27151}{40293}\big|\,d^{\perp}\,\big\rangle\approx 0.326\big|\,d\,\big\rangle+0.674\big|\,d^{\perp}\,\big\rangle.\]
This probability is computed by prog3 in Section 2.
The divergence decrease from Theorem 5.1 takes the following form:
\[D_{\text{KL}}\big(\tau,\,c \mathbin{\gg=} \omega_{J}\big)\approx 0.24\,\leq\,0.98\,\approx\,D_{\text{KL}}\big(\tau,\,c \mathbin{\gg=} \omega\big).\]
Having seen this, we may ask: why not use the evidence distribution \(\tau=\tfrac{2}{3}\big{|}\,p\,\big{\rangle}+\tfrac{1}{3}\big{|}\,n\,\big{\rangle}\) as a predicate \(q=\tfrac{2}{3}\mathbf{1}_{p}+\tfrac{1}{3}\mathbf{1}_{n}\), and then do a single Pearl update:

\[\omega|_{c\ll q}=\tfrac{2}{23}\big{|}\,d\,\big{\rangle}+\tfrac{21}{23}\big{|}\,d^{\perp}\,\big{\rangle}\approx 0.087\big{|}\,d\,\big{\rangle}+0.913\big{|}\,d^{\perp}\,\big{\rangle}. \tag{8}\]
This is the distribution computed by prog2 in Section 2.
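For comparison, the three posteriors discussed so far can be computed side by side; the sketch below assumes the helpers and the model from the previous snippets, with `omega_P` the repeated-Pearl posterior of Example 4.3.

```python
tau = {'pos': 2/3, 'neg': 1/3}     # empirical evidence: 2 positive, 1 negative

# prog2: a single Pearl update with tau read as a predicate, as in (8).
single_pearl = update(omega, pull(c, tau))

# prog3: Jeffrey update, pushing tau forward along the dagger channel.
f = dagger(c, omega)
jeffrey = {x: sum(tau[y] * f[y][x] for y in tau) for x in omega}

print(omega_P['d'])        # ~0.642  repeated Pearl (prog1, Example 4.3)
print(single_pearl['d'])   # 2/23 ~ 0.087  (prog2)
print(jeffrey['d'])        # ~0.326  (prog3)
```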
For future use we record the following standard properties of the dagger of a channel (7).
**Lemma 5.3**:
1. _Daggers preserve sequential composition: for two successive channels \(X\overset{c}{\nrightarrow}Y\overset{d}{\nrightarrow}Z\) and a distribution \(\omega\in\mathcal{D}(X)\),_ \[\big(d\circ c\big)^{\dagger}_{\omega}=c^{\dagger}_{\omega}\circ d^{\dagger}_{c \,\gg=\, \omega}.\]
2. _Daggers preserve parallel composition: for two channels \(c\colon X\nrightarrow A\), \(d\colon Y\nrightarrow B\) with distributions \(\omega\in\mathcal{D}(X)\), \(\rho\in\mathcal{D}(Y)\),_ \[\big(c\otimes d\big)^{\dagger}_{\omega\otimes\rho}=c^{\dagger}_{\omega}\otimes d^{\dagger}_{\rho}.\]
## 6 An Operational Understanding of Jeffrey's Update
We return to the probabilistic programs of Section 2. As discussed in Section 4, prog1 expresses repeated Pearl updates. It remains to understand the difference between prog2 and prog3. As shown in (8), prog2 corresponds to a single Pearl's update with the target distribution, as predicate. Further, prog3 is Jeffrey's update, with the nested inference corresponding to the computation of the dagger channel \(c_{\omega}^{\dagger}\). The difference between the two programs prog2 and prog3 is surprisingly subtle, so we begin by illustrating it using a different kind of metaphor, and derive a rejection sampler for each case in turn.
Consider a large queue of people waiting in front of a club. Each person prefers either rock or pop. The club's management wants to achieve a target ratio of 75% rock fans on the inside. To that end, they equip their doorman with a special ticker device, see Figure 1. The ticker displays a current target (either 'Rock' or 'Pop'), and the doorman admits the next person if and only if they prefer the targeted style. The doorman can click the device to obtain a new target (either by cycling sequentially through the targets, or picking one randomly), but there remains a choice when to click.
1. Single Pearl Policy: pick a new target after every person:
```
for person in queue:
    if person.preference == ticker.target:
        club.admit(person)
    ticker.click()
```
2. Jeffrey Policy: pick a new target only after admitting a person:
```
for person in queue:
    if person.preference == ticker.target:
        club.admit(person)
        ticker.click()
```
It may be clear that only the Jeffrey Policy is suitable to achieve the management's goal: approximately 75% of the people who are admitted are rock fans. This is in line with the key property of Jeffrey's update rule: reducing the divergence from the target distribution \(\tau\), see Theorem 5.1. It is unclear what the Single Pearl Policy achieves in this context.
We may also wonder how the door policy influences other statistical properties of the audience (such as age or gender) which may correlate with music preference: if the prior distribution in the queue is \(\omega\), what will the resulting distribution be inside the club? For the Jeffrey Policy, this update is precisely described by Jeffrey's update rule. We summarize this section with a concrete description of rejection samplers for Pearl's update with a random target and Jeffrey's update, corresponding to the semantics of the probabilistic programs prog2 and prog3:
Pearl (prog2), resampling the target in every round:
```
while True:
    x ← ω
    y ← c(x)
    target ← τ
    if y == target:
        samples.add(x)
```
Jeffrey (prog3), resampling the target only after an accepted sample (with the target drawn once before the loop):
```
target ← τ
while True:
    x ← ω
    y ← c(x)
    if y == target:
        samples.add(x)
        target ← τ
```
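Run as actual Monte Carlo loops, the two samplers reproduce the two posteriors; the only difference in the sketch below is when the target is redrawn (it assumes the distributions `omega`, `c`, and `tau` from the previous snippets).

```python
import random

def run(resample_every_round, rounds=200_000, seed=0):
    rng = random.Random(seed)
    draw = lambda dist: rng.choices(list(dist), weights=list(dist.values()))[0]
    counts = {'d': 0, 'd_not': 0}
    target = draw(tau)
    for _ in range(rounds):
        x = draw(omega)
        y = draw(c[x])
        if resample_every_round:
            target = draw(tau)            # prog2: fresh target in every round
        if y == target:
            counts[x] += 1
            if not resample_every_round:
                target = draw(tau)        # prog3: fresh target only on acceptance
    return counts['d'] / (counts['d'] + counts['d_not'])

print(run(True))    # ~0.087: the single-Pearl posterior (8)
print(run(False))   # ~0.326: the Jeffrey posterior
```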
Figure 1: Ticker device
## 7 Likelihoods and Generative Models for Pearl and Jeffrey
This section first identifies two forms of likelihood of data in the situation with a statistical model given by a channel \(X\nrightarrow Y\) and a distribution on \(X\). It then relates these two forms of likelihood to the two update rules, repeated Pearl and Jeffrey, from Theorems 4.2 and 5.1.
**Definition 7.1**: Let \(\psi\in\mathcal{M}[K](Y)\) be a multiset of data, of size \(K=\|\psi\|\in\mathbb{N}\). Let \(c\colon X\nrightarrow Y\) be a channel with a distribution \(\omega\in\mathcal{D}(X)\) on its domain.
1. The _Jeffrey likelihood_ of the multiset \(\psi\) is given by the number: \[\text{mn}[K]\big(c \mathbin{\gg=} \omega\big)(\psi).\]
2. The _Pearl likelihood_ of \(\psi\) in the same model is the first expression below, which has several alternative formulations. It uses the abbreviation \(\text{mn}[K](c)\coloneqq\text{mn}[K]\circ c\). \[\big(\text{mn}[K](c) \mathbin{\gg=} \omega\big)(\psi) \,=\, \text{mn}[K](c) \mathbin{\gg=} \omega\models\mathbf{1}_{\psi} \,=\, \omega\models\text{mn}[K](c)\ll\mathbf{1}_{\psi}\qquad\text{by Lemma 4.1 (i)}.\]
Associated to these two likelihoods are different generative models, _i.e._ distributions over multisets, in \(\mathcal{D}(\mathcal{M}[K](Y))\), which we evaluate on the dataset \(\psi\). For the Jeffrey likelihood in item (i) we first do the Kleisli extension \(c \mathbin{\gg=} (-)\) of \(c\) and then take the multinomial, as in the composite:

\[\mathcal{D}(X)\xrightarrow{\;c\,\gg=\,(-)\;}\mathcal{D}(Y)\xrightarrow{\;\text{mn}[K]\;}\mathcal{D}\big(\mathcal{M}[K](Y)\big).\]

We can concisely illustrate this with string diagrams, using an informal 'plate' notation to copy parts of the string diagram (inspired by the use of plates in graphical models), see Figure 2 on the left. In contrast, for the Pearl likelihood in item (ii) we use the composite \(\text{mn}[K](c)\coloneqq\text{mn}[K]\circ c\) in the pushforward \(\text{mn}[K](c) \mathbin{\gg=} \omega\in\mathcal{D}\big(\mathcal{M}[K](Y)\big)\). Here, the plate does not extend over the distribution \(\omega\), whose output is copied instead of resampled, see Figure 2 on the right.
The Pearl likelihood is used in the multinomial naive Bayes classifier [13]. For the likelihood of Jeffrey we shall see alternative formulations in Section 8 below.
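Both likelihoods are finite sums and easy to compare numerically for the running example with \(\psi=2|\,p\,\rangle+1|\,n\,\rangle\); the sketch assumes the `coeff` and `push` helpers and the model \(c\), \(\omega\) from the earlier snippets.

```python
def mn_prob(omega, psi):
    """mn[K](omega)(psi) for a multiset psi given as a dict of counts."""
    prob = coeff(psi)
    for y, n in psi.items():
        prob *= omega[y] ** n
    return prob

psi = {'pos': 2, 'neg': 1}   # the data: two positive tests, one negative

jeffrey_lik = mn_prob(push(c, omega), psi)                      # mn[3](c >>= omega)(psi)
pearl_lik = sum(omega[x] * mn_prob(c[x], psi) for x in omega)   # (mn[3](c) >>= omega)(psi)
print(jeffrey_lik, pearl_lik)   # ~0.0233 and ~0.0189
```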
Figure 2: Graphical representation of Jeffrey likelihood on the left, and Pearl likelihood on the right, see Definition 7.1.
Our first result says that minimising the Kullback-Leibler divergence that occurs in Theorem 5.1 -- and that is actually reduced by Jeffrey's update rule -- corresponds to maximising the Jeffrey likelihood of Definition 7.1 (i).
**Theorem 7.2**:
1. _For distributions_ \(\omega,\omega^{\prime}\in\mathcal{D}(X)\) _and channels_ \(c,c^{\prime}\colon X\nrightarrow Y\)_, with data_ \(\psi\in\mathcal{M}(Y)\) _of size_ \(K=\|\psi\|\)_, we have that Jeffrey likelihood is oppositely ordered to Kullback-Leibler divergence in:_ \[\operatorname{mn}[K]\big(c \mathbin{\gg=} \omega\big)(\psi)\leq\operatorname{mn}[K]\big(c^{\prime} \mathbin{\gg=} \omega^{\prime}\big)(\psi)\;\Longleftrightarrow\;D_{\text{KL}}\big(\operatorname{flrn}(\psi),\,c \mathbin{\gg=} \omega\big)\,\geq\,D_{\text{KL}}\big(\operatorname{flrn}(\psi),\,c^{\prime} \mathbin{\gg=} \omega^{\prime}\big).\]
_This updated distribution \(\omega|_{\operatorname{mn}[K](c)\ll\mathbf{1}_{\psi}}=\operatorname{mn}[K](c)^{\dagger}_{\omega}(\psi)\in\mathcal{D}(X)\) can be described via repeated Pearl updates as:_

\[\begin{aligned}
\omega|_{\operatorname{mn}[K](c)\ll\mathbf{1}_{\psi}}
&=\omega\big|_{\mathop{\&}_{y\in Y}(c\ll\mathbf{1}_{y})^{\psi(y)}}\\
&=\omega\big|_{(c\ll\mathbf{1}_{y_{1}})^{\psi(y_{1})}\,\&\,\cdots\,\&\,(c\ll\mathbf{1}_{y_{n}})^{\psi(y_{n})}}\qquad\text{if }\operatorname{supp}(\psi)=\{y_{1},\ldots,y_{n}\}\\
&=\omega\big|_{\underbrace{(c\ll\mathbf{1}_{y_{1}})\,\&\,\cdots\,\&\,(c\ll\mathbf{1}_{y_{1}})}_{\psi(y_{1})\text{ times}}\,\&\,\cdots\,\&\,\underbrace{(c\ll\mathbf{1}_{y_{n}})\,\&\,\cdots\,\&\,(c\ll\mathbf{1}_{y_{n}})}_{\psi(y_{n})\text{ times}}}\\
&=\omega\big|_{c\ll\mathbf{1}_{y_{1}}}\cdots\big|_{c\ll\mathbf{1}_{y_{1}}}\cdots\big|_{c\ll\mathbf{1}_{y_{n}}}\cdots\big|_{c\ll\mathbf{1}_{y_{n}}}.
\end{aligned}\]
We have used such successive updates in the calculation of the disease probabilities according to Pearl in Example 4.3.
Proof.: We first note that we can write Pearl's likelihood as:
\[\begin{aligned}
\omega\models\operatorname{mn}[K](c)\ll\mathbf{1}_{\psi}
&=\sum_{x\in X}\,\omega(x)\cdot\operatorname{mn}[K]\big(c(x)\big)(\psi)\\
&=(\psi)\cdot\sum_{x\in X}\,\omega(x)\cdot\prod_{y\in Y}(c\ll\mathbf{1}_{y})(x)^{\psi(y)}\\
&=(\psi)\cdot\Big(\omega\models\mathop{\&}_{y\in Y}\,(c\ll\mathbf{1}_{y})^{\psi(y)}\Big).
\end{aligned}\]
Once we have this channel extension \(\mathcal{M}[K](c)\), for a multiset of data \(\psi\in\mathcal{M}[K](Y)\) we can form the following update of the multinomial distribution, abbreviated as \(\sigma\in\mathcal{D}\big{(}\mathcal{M}[K](X)\big{)}\).
\[\sigma\,\coloneqq\,\operatorname{mn}[K](\omega)\big|_{\mathcal{M}[K](c)\,\ll\,\mathbf{1}_{\psi}}. \tag{10}\]
* We like to think of this \(\sigma\) as a distribution of the form \(\operatorname{mn}[K](\omega^{\prime})\). The obvious way to obtain this distribution \(\omega^{\prime}\) is via frequentist learning, as \(\operatorname{flrn} \mathbin{\gg=} \sigma\). Indeed, as we have seen before (3), \(\operatorname{flrn} \mathbin{\gg=} \operatorname{mn}[K](\rho)=\rho\). The first of our two main results in this section is Theorem 8.3; it says that \(\operatorname{flrn} \mathbin{\gg=} \sigma\) is the Jeffrey update \(c^{\dagger}_{\omega} \mathbin{\gg=} \operatorname{flrn}(\psi)\). This is a technically non-trivial result.
* Next we use techniques from variational inference [10, 11]: we like to determine the 'best' distribution \(\omega^{\prime}\) such that \(\operatorname{mn}[K](\omega^{\prime})\) approximates the above distribution \(\sigma\) in (10). We thus look for the distribution with minimal Kullback-Leibler divergence. There again we find Jeffrey's update: \[\operatorname*{argmin}_{\omega^{\prime}\in\mathcal{D}(X)}\,D_{\text{KL}}\big(\operatorname{mn}[K](\omega^{\prime}),\,\sigma\big)\,=\,c^{\dagger}_{\omega} \mathbin{\gg=} \operatorname{flrn}(\psi).\] This is the content of our second main result below, Theorem 8.5.
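For the disease example the space \(\mathcal{M}[3](Y)\) has only four elements, so the first of these claims (Theorem 8.3) can be checked by brute force; the following sketch enumerates multisets by their number of positive outcomes and should reproduce the Jeffrey posterior \(\approx 0.326\).

```python
from math import comb

se, sp, prev, K = 0.9, 0.95, 0.05, 3   # test channel and prior from Example 4.3
psi_pos = 2                            # psi = 2|p> + 1|n>

def binom(n, k, s):
    """Probability of k successes in n independent trials of success rate s."""
    return comb(n, k) * s**k * (1 - s)**(n - k) if 0 <= k <= n else 0.0

sigma = {}                             # unnormalised sigma from (10)
for a in range(K + 1):                 # a = multiplicity of d in the multiset phi
    prior_phi = binom(K, a, prev)      # mn[3](omega)(phi)
    lik = sum(binom(a, i, se) * binom(K - a, psi_pos - i, 1 - sp)
              for i in range(psi_pos + 1))          # M[3](c)(phi)(psi)
    sigma[a] = prior_phi * lik
z = sum(sigma.values())

# flrn >>= sigma: the expected fraction of d's, which should equal omega_J.
print(sum(w * a / (K * z) for a, w in sigma.items()))   # ~0.326
```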
### Jeffrey's rule via Frequentist Learning
Taking multisets of a particular size \(K\in\mathbb{N}\) forms a functor \(\mathcal{M}[K]\colon\mathbf{Sets}\to\mathbf{Sets}\). This functor can be extended to the Kleisli category \(\mathcal{K}\ell(\mathcal{D})\) of the distribution monad \(\mathcal{D}\). This works via a distributive law \(\mathcal{M}[K]\mathcal{D}\Rightarrow\mathcal{D}\mathcal{M}[K]\), see [3, 7]. The extension can also be written via accumulation and arrangement, see Lemma 8.1 (i) below. We shall use it in that form.

The resulting extension is still written as \(\mathcal{M}[K]\colon\mathcal{K}\ell(\mathcal{D})\to\mathcal{K}\ell(\mathcal{D})\). It sends a set/object \(X\) in \(\mathcal{K}\ell(\mathcal{D})\) to the set \(\mathcal{M}[K](X)\) of multisets of size \(K\). On a channel/morphism \(c\colon X\nrightarrow Y\) one defines a channel \(\mathcal{M}[K](c)\colon\mathcal{M}[K](X)\nrightarrow\mathcal{M}[K](Y)\) via the distributive law, as the composite:

\[\mathcal{M}[K](X)\xrightarrow{\;\mathcal{M}(c)\;}\mathcal{M}[K]\big(\mathcal{D}(Y)\big)\xrightarrow{\;\text{law}\;}\mathcal{D}\big(\mathcal{M}[K](Y)\big).\]

Notice that we have written \(\mathcal{M}(c)\) for the application of the multiset functor \(\mathcal{M}\colon\mathbf{Sets}\to\mathbf{Sets}\), in order to distinguish it from the extension \(\mathcal{M}[K]\colon\mathcal{K}\ell(\mathcal{D})\to\mathcal{K}\ell(\mathcal{D})\).
**Lemma 8.1**:
1. _For a channel_ \(c\colon X\nrightarrow Y\) _and a number_ \(K\in\mathbb{N}\) _the extension can be computed via accumulation and arrangement:_ \[\mathcal{M}[K](c)\,=\,\operatorname{acc}\circ c^{K}\circ\operatorname{arr}\,\colon\,\mathcal{M}[K](X)\nrightarrow\mathcal{M}[K](Y).\]

_The functor_ \((-)^{K}\colon\mathcal{K}\ell(\mathcal{D})\to\mathcal{K}\ell(\mathcal{D})\) _is the_ \(K\)_-fold tensor product, and_ \(\overline{\mathcal{D}}\colon\mathcal{K}\ell(\mathcal{D})\to\mathcal{K}\ell(\mathcal{D})\) _is the standard extension of a monad to its Kleisli category, given on_ \(c\colon X\nrightarrow Y\) _by_ \(\overline{\mathcal{D}}(c)=\eta\circ\mathcal{D}(c)\colon\mathcal{D}(X)\nrightarrow\mathcal{D}(Y)\)_, where_ \(\eta\) _is the unit of the monad_ \(\mathcal{D}\)_._
**Proof.** This follows from the results in [8]. \(\square\)
A crucial observation is that the formulation of the extension \(\mathcal{M}[K](c)\) in Lemma 8.1 (i) also works for daggers. It demonstrates that 'multisets' and 'daggers' commute, see (11) below.
**Proposition 8.2**: _Consider a channel \(c\colon X\nrightarrow Y\) with a distribution \(\omega\in\mathcal{D}(X)\) and a number \(K\in\mathbb{N}\). Then the extended multiset functor commutes with daggers, where the original prior distribution \(\omega\) is replaced by the multinomial distribution \(\operatorname{mn}[K](\omega)\), that is:_
\[\mathcal{M}[K]\big{(}c^{\dagger}_{\omega}\big{)}\,=\,\mathcal{M}[K](c)^{ \dagger}_{\operatorname{mn}[K](\omega)}. \tag{11}\]
**Proof.** We prove (11) directly, using Lemma 8.1 (i) for the first step and Lemma 5.3 (i) for the second step in:
\[\begin{aligned}
\mathcal{M}[K](c)^{\dagger}_{\operatorname{mn}[K](\omega)}
&=\Big(\operatorname{acc}\circ c^{K}\circ\operatorname{arr}\Big)^{\dagger}_{\operatorname{mn}[K](\omega)}\\
&=\operatorname{arr}^{\dagger}_{\operatorname{mn}[K](\omega)}\circ\big(c^{K}\big)^{\dagger}_{\operatorname{arr}\,\gg=\,\operatorname{mn}[K](\omega)}\circ\operatorname{acc}^{\dagger}_{c^{K}\,\gg=\,\operatorname{arr}\,\gg=\,\operatorname{mn}[K](\omega)}\\
&=\operatorname{acc}\circ\big(c^{\dagger}_{\omega}\big)^{K}\circ\operatorname{arr}.
\end{aligned}\]
This last equation is justified by the three following steps.
* The dagger channel \(\operatorname{arr}^{\dagger}_{\operatorname{mn}[K](\omega)}\colon X^{K}\nrightarrow\mathcal{M}[K](X)\) is determined on \(\boldsymbol{x}\in X^{K}\) as: \[\operatorname{arr}^{\dagger}_{\operatorname{mn}[K](\omega)}(\boldsymbol{x})\,\stackrel{{(7)}}{{=}}\,\sum_{\varphi\in\mathcal{M}[K](X)}\,\frac{\operatorname{mn}[K](\omega)(\varphi)\cdot\operatorname{arr}(\varphi)(\boldsymbol{x})}{(\operatorname{arr}\,\gg=\,\operatorname{mn}[K](\omega))(\boldsymbol{x})}\,\big|\,\varphi\,\big\rangle\,=\,\frac{\operatorname{mn}[K](\omega)(\operatorname{acc}(\boldsymbol{x}))\cdot\frac{1}{(\operatorname{acc}(\boldsymbol{x}))}}{\omega^{K}(\boldsymbol{x})}\,\big|\operatorname{acc}(\boldsymbol{x})\,\big\rangle\,=\,\frac{\prod_{y}\omega(y)^{\operatorname{acc}(\boldsymbol{x})(y)}}{\prod_{i}\omega(x_{i})}\,\big|\operatorname{acc}(\boldsymbol{x})\,\big\rangle\,=\,1\big|\operatorname{acc}(\boldsymbol{x})\,\big\rangle.\]
* We again use \(\operatorname{arr} \mathbin{\gg=} \operatorname{mn}[K](\omega)=\omega^{K}\), so that we can apply Lemma 5.3 (ii): \[\big(c^{K}\big)^{\dagger}_{\operatorname{arr}\,\gg=\,\operatorname{mn}[K](\omega)}\,=\,\big(c^{K}\big)^{\dagger}_{\omega^{K}}\,=\,\big(c^{\dagger}_{\omega}\big)^{K}.\]
* For the channel \(\operatorname{acc}^{\dagger}_{c^{K}\,\gg=\,\operatorname{arr}\,\gg=\,\operatorname{mn}[K](\omega)}\colon\mathcal{M}[K](Y)\nrightarrow Y^{K}\) we observe that \(c^{K} \mathbin{\gg=} \operatorname{arr} \mathbin{\gg=} \operatorname{mn}[K](\omega)=c^{K} \mathbin{\gg=} \omega^{K}=(c \mathbin{\gg=} \omega)^{K}\), so that: \[\begin{aligned}
\operatorname{acc}^{\dagger}_{(c\,\gg=\,\omega)^{K}}(\psi)
&\stackrel{{(7)}}{{=}}\sum_{\boldsymbol{y}\in Y^{K}}\,\frac{(c \mathbin{\gg=} \omega)^{K}(\boldsymbol{y})\cdot\operatorname{acc}(\boldsymbol{y})(\psi)}{\big(\operatorname{acc} \mathbin{\gg=} (c \mathbin{\gg=} \omega)^{K}\big)(\psi)}\,\big|\,\boldsymbol{y}\,\big\rangle\\
&=\sum_{\boldsymbol{y}\in\operatorname{acc}^{-1}(\psi)}\,\frac{(c \mathbin{\gg=} \omega)^{K}(\boldsymbol{y})}{\operatorname{mn}[K](c \mathbin{\gg=} \omega)(\psi)}\,\big|\,\boldsymbol{y}\,\big\rangle
=\sum_{\boldsymbol{y}\in\operatorname{acc}^{-1}(\psi)}\,\frac{1}{(\psi)}\,\big|\,\boldsymbol{y}\,\big\rangle
=\operatorname{arr}(\psi).
\end{aligned}\]
At this stage we return to the Jeffrey likelihood \(\operatorname{mn}[K]\big(c \mathbin{\gg=} \omega\big)(\psi)\), as described in Definition 7.1 (i). In the expansion of the divergence \(D_{\text{KL}}\big(\sigma,\,\operatorname{mn}[K](\omega)\big)\) one may absorb any irrelevant constant that depends only on \(\sigma\), not on \(\omega\).
2309.13026 | Directed propaganda in the majority-rule model | Fabricio L. Forgerini, Nuno Crokidakis, Marcio A. V. Carvalho | 2023-09-22T17:42:52Z | http://arxiv.org/abs/2309.13026v2 | # Directed propaganda in the majority-rule model
###### Abstract
Advertisement and propaganda have changed continuously in the past decades, mainly due to people's interactions on online platforms and social networks, and nowadays operate by reaching a highly specific online audience instead of targeting the masses. The impact of this new media effect, directed at a specific audience, is investigated in this study, in which we focus on the opinion evolution of agents in the majority-rule model in the presence of directed propaganda. We introduce \(p\) as the probability of a "positive" external propaganda and \(q\) as the probability that the agents follow the external propaganda. Our results show that the stationary state of the usual majority-rule model, with full consensus, is reached only in two cases, namely when the external propaganda is absent or when the media favors only one of the two opinions. However, even for a small influence of external propaganda, the final state is reached with a majority opinion dominating the population. For the case in which the propaganda influence is strong enough among the agents, we show that consensus cannot be reached at all, and we observe the polarization of opinions. In addition, we show through analytical and numerical results that the system undergoes an order-disorder phase transition, which occurs at \(q_{\rm c}=1/3\) for the case \(p=0.5\).
Opinion dynamics; Majority-rule model; Directed propaganda; Collective phenomena; Phase transitions
## 1 Introduction
The challenge of understanding the collective behavior emerging from individuals in society has fascinated physicists for the last few decades, and their contributions gave rise to what has been called sociophysics, an attempt to explain how collective social behavior emerges from individual agents [1, 2, 3]. However, the lineage of agent-based models can be traced back to the 1940s, with the theoretical conception of Von Neumann's self-replicating machines [4] and Sakoda's general discrete dynamical model for social interactions [5]. It is important to note that both developments emerged alongside the early computers and, on several occasions, the initial simulations were conducted without their assistance.
Von Neumann's devices would follow instructions in order to create identical
copies of themselves. Stanislaw Ulam, a mathematician and close associate of von Neumann, further refined this idea by suggesting that the machine could be represented on paper as a grid of cells, giving rise to what would later be termed cellular automata. Intrigued by Ulam's proposition, von Neumann proceeded to develop the initial design, thus pioneering the concept of cellular automata.
The foundational Sakoda model encompasses the dynamics of social interactions within a network involving two distinct groups of individuals. The model incorporates specific social attitudes such as attraction, repulsion, and neutrality, which guide the evolution of social bonds. Each individual in the network evaluates its social expectations across all feasible locations, displaying a preference for proximity to individuals associated with positive attitudes of attraction while avoiding locations near those linked to negative attitudes of repulsion. This evaluative process is randomly iterated across all individuals, leading to the recursive implementation of Sakoda's algorithm. When certain conditions are met, this iterative process drives the system towards the emergence of a well-organized spatial pattern.
Departing from the paradigm of von Neumann's machine, John Conway's Game of Life [6] functioned based on elementary rules operating within a simulated realm portrayed as a two-dimensional field. Furthermore, the emergence of the agent-based model as a framework for studying social systems can be largely attributed to the efforts of computer scientist Craig Reynolds [7], who sought to model the dynamics of living biological agents, commonly referred to as artificial life. But it was the groundbreaking model of Sugarscape, developed by Joshua M. Epstein and Robert Axtell [8], which served as a platform for conducting large-scale simulations and investigations. It enabled the exploration of intricate dynamics encompassing a wide range of social phenomena, such as seasonal migrations, pollution, sexual reproduction, combat, disease transmission, and cultural diffusion.
In 1999, Nigel Gilbert, in collaboration with Klaus G. Troitzsch, made a significant scholarly contribution by publishing the inaugural textbook on social simulation, entitled "Simulation for the Social Scientist" [9]. This pioneering work provided a comprehensive foundation for understanding and applying simulation techniques in the context of social science research.
Kathleen M. Carley made a crucial contribution in promoting the utilization of simulation methodologies in the context of organizational systems, notably through her scholarly work "Computational Organizational Science and Organizational Engineering" [10]. Following these initial advancements, a proliferation of models, methodologies, and scholarly publications in the field of social simulation ensued. The topics have also diversified, encompassing knowledge transmission, institutions, reputation, social norms, elections, and economics, among many others. Of particular relevance to this study is the theme of advertisement and propaganda, or belief transmission. A variety of sociologically relevant problems have been studied by means of statistical mechanics techniques jointly with the massive use of computational modeling and simulations [11]. Different realms of the social sciences, from opinion [12, 13, 14, 15, 16] and language dynamics [17, 18] to community formation and detection [19, 20] and gossip and rumor spreading [21, 22, 23], to name a few, have been extensively studied by physicists in collaboration with scientists from different areas.
The basic starting point of thermodynamics and statistical mechanics is the study of an isolated system of interacting particles, in which the macrostates of the system are described by a set of parameters of its microscopic constituents. By direct analogy, one can apply the methods of statistical mechanics to the study of social problems, such as opinion dynamics, in which individuals interact with one another as particles do in standard thermodynamic systems [24, 25].
Traditional advertisement and propaganda have changed enormously in the last decades, from the usual mass media, which reach a broad audience with the same type of information, to directed advertisement, mediated by complex algorithms and artificial intelligence operating in social media and online platforms on the web [26, 27]. This new form of information delivery is poorly studied in the context of opinion dynamics, and it is not yet well understood how it can affect the way consensus (or a majority) is formed in a population [28, 29, 30].
Recently, researchers from several areas have shown interest in models of opinion diffusion in social media and polarization formation. Examining the dissemination of news topics on Twitter, the authors in [31] show, through a topic modeling algorithm, that the data suggest a relationship between the most-mentioned profiles, the more frequent keywords, and the main underlying news topics. In Ref. [32], the authors studied how the strategies influencers use to gain more followers can influence the overall opinion distribution. They found that moving towards extreme positions can be a beneficial strategy for influencers to gain followers in social media. Also regarding information dissemination, the authors in [33] studied which factors influence whether a user of online social networks disseminates information or not. They found that the network type has only a weak influence on the distribution of content, whereas the message type has a clear influence on how many users receive a message. A recent work investigated how signed interactions with mainstream media might shape the opinion space. The authors focused on how different sizes (in the number of media outlets) and interaction patterns of the information system may affect collective debates and thus the distribution of opinions. The results show that plurality and competition within information sources lead to stable configurations where several distant cultures coexist [34]. Another recent work studied the dynamics of opinion diffusion in social networks, where the authors considered post transmission and post distribution, representing the users' behavior and the social network algorithm, respectively. The results show that the dynamics can converge to consensus formation or to polarization. It was also found that friendship rewiring helps promote echo chamber formation, and that the social network algorithm is crucial to mitigate or promote polarization [35]. For recent reviews of other similar models, see [36, 37].
We are interested here in the majority-rule model [14, 38, 39]. It was originally proposed by Serge Galam to describe how a fully-connected population composed of \(N\) individuals reaches consensus, where the agents can decide in favor of or against a particular subject [40]. A random group of agents, called a "discussion group", is selected to debate, interacting among each other and, after the discussion, all of them follow the most popular opinion, the majority opinion of the group. The results for the original model show that the initial majority opinion wins the debate after some time, i.e., the steady states of the model are full consensus situations, where all agents share the same opinion, represented by the variables \(+1\) or \(-1\). The model was later extended by several researchers [28, 41, 42, 43, 44, 45].
In this study we investigate the evolution of the agents' opinions in the majority-rule model. We introduce a directed propaganda, aimed directly at the discussion group, which we argue can mimic the complex algorithms and artificial intelligence used in advertisements on social media and online platforms. Our results suggest that even a small influence of directed propaganda is enough to avoid consensus in the population. In addition, one can observe an order-disorder phase transition for a specific value of the parameters related to the directed propaganda.
This paper is organized as follows: in Section 2 we present our model as well as its dynamic rules. The numerical and analytical results are discussed in Section 3, and our conclusions and final remarks are presented in Section 4. In the Appendix one can see the details of the analytical calculations.
## 2 Model
The dynamics of agreement and disagreement is treated in terms of the variation of the number of different opinion states in the population, where each individual can hold one of two possible opinions (\(\pm 1\)). The average opinion in the system, represented by \(M\), for a population with \(N\) individuals, is the order parameter of the system (or the magnetization), defined as
\[M=\frac{1}{N}\left|\sum_{i=1}^{N}S_{i}\right|\, \tag{1}\]
in which \(S_{i}\) is the individual opinion of each agent (\(i=1,2,...,N\)), considered at each time step. We define one time step as \(N\) attempts of agents to change their opinions by interacting in discussion groups. In our study, for simplicity, we considered a discussion group with a constant size of three agents. Differently from the original model, we consider an external directed propaganda, acting like an external field, \(H=\pm 1\). However, this directed propaganda has no effect on the entire system, but instead affects only the agents in the discussion group. We argue that the introduction of the directed propaganda as an external field addressed to a specific group of agents can influence the discussion group beyond the majority, leading the entire system to not reach consensus. Furthermore, this introduction of an external directed field can mimic, in a simple way, the new forms of online propaganda.
One can see in Fig. 1 a schematic representation of our model, in which three agents are selected at random from a fully connected graph to form a discussion group. After the action of the external influence or the discussion process, the agents' states are updated and the system carries on with the opinion dynamics. The dynamic rules of our model are very simple; they are explained in the following, as well as the updating of the individual agents' opinions.
At each time step, we:
1. Select at random the three agents to form the discussion group: \(S_{1}\), \(S_{2}\) and \(S_{3}\);
2. Set the propaganda direction \(H=+1\) with probability \(p\) and \(H=-1\) with probability \(1-p\);
3. With probability \(q\), all agents in the discussion group follow the propaganda direction \(H\), independently of the majority/minority opinion inside the group. The opinions are thus updated to \(S_{1}=S_{2}=S_{3}=+1\) if \(H=+1\) or \(S_{1}=S_{2}=S_{3}=-1\) if \(H=-1\), depending on the choice of \(H\) in the previous step;
4. On the other hand, with probability \(1-q\), the agents follow the majority opinion inside the group: if the opinion of one agent is different from the other two, the agent flips to follow the majority.
We introduce two parameters, namely \(p\) and \(q\). The first one, \(p\), is the probability of a "positive" external propaganda. The parameter \(q\) is the probability that the agents in the discussion group change their opinions and follow the external propaganda direction. In other words, \(p\) is a measure of the propaganda "volume" for one specific direction, and \(q\) is a measure of how strong or how effective the propaganda is in influencing the agents. Our results in the next section will show that, even for a small value of the parameter \(q\), the system does not show the full consensus at the stationary state that is observed in the original majority-rule model.

Figure 1: The schematic representation of our model. From the fully connected population (a) we choose at random (b) the discussion group (c) and, with probability \(p\), we set the external field \(H=+1\), for instance (d). With probability \(q\), all agents follow the external field (e) and then return to the lattice to continue the system dynamics (f).
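These rules can be sketched in a few lines of Python; the agent count, the definition of one time step (here \(N\) group updates), and the parameter values below are illustrative choices, not the exact setup used for the figures.

```python
import random

def simulate(N=1_000, p=0.5, q=0.1, sweeps=1_000, seed=1):
    rng = random.Random(seed)
    opinions = [rng.choice((-1, 1)) for _ in range(N)]   # random initial opinions
    for _ in range(sweeps):
        for _ in range(N):                               # one time step: N group updates
            i, j, k = rng.sample(range(N), 3)            # random discussion group
            H = 1 if rng.random() < p else -1            # propaganda direction
            if rng.random() < q:                         # group follows the propaganda
                opinions[i] = opinions[j] = opinions[k] = H
            else:                                        # usual majority rule
                m = 1 if opinions[i] + opinions[j] + opinions[k] > 0 else -1
                opinions[i] = opinions[j] = opinions[k] = m
    return abs(sum(opinions)) / N                        # order parameter M, eq. (1)

print(simulate(p=1.0))   # propaganda in one direction only: approaches M = 1
print(simulate(p=0.5))   # competing propaganda: ordered, but no full consensus
```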
## 3 Numerical and Analytical Results
The introduction of the propaganda in the majority-rule model, acting like an external field and directly influencing the discussion group, changes the stationary state of the model, and thus full consensus can no longer be reached. In Fig. 2 we exhibit the time evolution of the average opinion \(M\), obtained from numerical simulations of the model through Eq. (1), for \(q=0.1\) and typical values of \(p\). As one can see in Fig. 2, for a small value of \(q\) the system does not reach a stationary state with full consensus, except for the cases \(p=0\) or \(p=1\). In these situations there is no competition in the propaganda, since there is only one possible direction of \(H\), and therefore the system can reach full consensus. In any other case, due to the competition between the two directions of propaganda, a majority opinion (but not unanimity) is obtained, mimicking a democratic situation: in most democratic states there is always a fraction of agents whose opinion is contrary to the dominant one.
In our simulations, the system's dynamics initiate with a random distribution
Figure 2: Average opinion as function of time for \(p=0\), \(p=0.2\), \(p=0.5\), \(p=0.8\) and \(p=1\), for fixed \(q=0.1\). Full consensus is reached only for \(p=0\) or \(p=1\).
of opinions among the agents, i.e., with equal probability each agent can have a \(+1\) or \(-1\) opinion. In Fig. 3 we show the agents' opinions at three different instants of time. For sufficiently long times, the system reaches the steady state and, even though individual opinions may still change, the average opinion is constant in time.
Since we are considering a fully-connected population, we can use a mean-field approach to obtain some analytical results for the model. Defining \(f_{1}\) and \(f_{-1}\) as the stationary densities of each possible state (\(+1\) or \(-1\), respectively) and considering the normalization condition \(f_{1}+f_{-1}=1\), analytical expressions can be derived for those densities. By considering the probabilities that a given agent changes its opinion (\(+1\to-1\) or \(-1\to+1\)), we compute the variations of the average opinion and obtain \(M\) at the stationary state (see the Appendix for a detailed description of our calculations).
As discussed in the Appendix, we can obtain a third-order polynomial for the fraction of agents with \(+1\) opinion, given by
\[2\left(q-1\right)f_{1}^{3}-3\left(q-1\right)f_{1}^{2}-f_{1}+p\,q=0\, \tag{2}\]
which depends on the two parameters \(p\) and \(q\).
For the specific case \(p=0.5\), we obtain a first solution given by \(f_{1}=f_{-1}=1/2\). The magnetization can be obtained from \(M=|f_{1}-f_{-1}|\) which gives us \(M=0\), representing a paramagnetic state. The other two solutions of the third-order polynomial give us \(f_{1}\neq f_{-1}\) (see the Appendix), which indicates a ferromagnetic phase, where one of the two opinions, \(+1\) or \(-1\), is the dominant opinion in the population. Considering again the relation \(M=|f_{1}-f_{-1}|\) for the mentioned two solutions, we obtain
\[M=\frac{\sqrt{3q^{2}-4q+1}}{1-q}\, \tag{3}\]
Figure 3: Evolution of the agents’ opinions in our model. Three different instants of time for the same initial configuration are shown. The \(-1\) opinion is represented by a black pixel, while a white one represents the \(+1\) opinion. We plot an \(L\times L\) grid with \(L=500\), and we considered the parameters \(p=0.5\) and \(q=0.3\). As initial configuration, we considered equal fractions of the two opinions, randomly distributed on the grid. From the left panel to the right, one can see the beginning of the dynamics, an intermediate instant of time and the steady state configuration.
which can be rewritten in the critical phenomena language as \(M\sim(q_{c}-q)^{\beta}\), where the critical point is \(q_{c}=1/3\) and the order parameter critical exponent is \(\beta=1/2\), as shown in the Appendix. The order-disorder nonequilibrium phase transition at \(q_{c}\) appears to be in the Ising mean-field universality class, as is usual in opinion models with two or three states [46]. Our model presents a phase transition only for the case \(p=0.5\), as discussed in the Appendix.
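The critical behavior can also be checked numerically by solving the cubic Eq. (2) directly. A short sketch (function and variable names are ours, not part of the original analysis):

```python
import numpy as np

def magnetization(p, q):
    """Solve Eq. (2) for f1 and return M = |f1 - f_{-1}| = |2 f1 - 1|,
    keeping only real roots in the physical interval [0, 1]."""
    roots = np.roots([2 * (q - 1), -3 * (q - 1), -1.0, p * q])
    f1 = [r.real for r in roots
          if abs(r.imag) < 1e-6 and -1e-9 <= r.real <= 1 + 1e-9]
    return max((abs(2 * f - 1) for f in f1), default=0.0)

for q in (0.1, 0.2, 0.3, 0.4):
    print(f"q = {q:.2f}: M = {magnetization(0.5, q):.3f}")  # M vanishes for q >= q_c = 1/3
```

For \(p=0.5\) the numbers reproduce Eq. (3), e.g. \(M(q=0.1)\approx 0.882\), and the ferromagnetic branch disappears above \(q_{c}=1/3\).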
In fig. 4 we exhibit the numerical results for the steady-state magnetization for different values of \(q\). The critical point \(q_{c}\) obtained analytically for \(p=0.5\) is in agreement with our numerical results, as one can see in the right panel of fig. 4. For the other values of \(p\), we observe no phase transition, as discussed in the Appendix. In those cases, one of the two opinions, \(+1\) or \(-1\), is the majority opinion in the population for all values of \(p\) and \(q\), and increasing \(q\), i.e., increasing the effect of the directed propaganda on the population, leads to more polarized states (lower values of \(M\)).
We also studied the steady-state magnetization for the case \(q=1\) and different values of \(p\). Considering \(q=1\) in Eq. (2), one obtains \(f_{1}=p\), which by the normalization condition leads to \(f_{-1}=1-p\). Thus, the stationary average opinion is \(M=|f_{1}-f_{-1}|=|2p-1|\). Our numerical simulations are in agreement with this analytical calculation. As one can see in fig. 5, except for the case \(p=0.5\), one of the two possible opinions is the dominant position in the population. Full consensus is observed only for the extreme cases \(p=0\) and \(p=1\). Considering the analytical results, we see that for \(p=0\) the stationary state exhibits a population with all agents sharing opinion \(-1\) (\(f_{-1}=1\)), whereas for \(p=1\) the final state is full consensus with all agents sharing opinion \(+1\) (\(f_{1}=1\)). These consensus situations are easy to understand. Indeed, for \(q=1\) the majority rule is never applied, and the agents in the discussion group will
Figure 4: Steady state average opinion (or the magnetization \(M\)) versus \(q\), for distinct values of p. As one can see on the left panel, when \(p=0.5\), for large enough values of \(q\), the system does not present a majority opinion, representing a paramagnetic state where \(M=0\). For the other values of \(p\) we have \(M>0\) for all values of \(q\). On the right panel, at the critical point \(q_{c}=1/3\), an order-disorder nonequilibrium phase transition occurs for the case \(p=0.5\). The solid line is the analytical solution, Eq. (3), and the red circles are numerical results.
always follow the media effect. For the case \(p=0\), the directed propaganda always favors opinion \(-1\), leading all agents to follow the media opinion after a long time. Analogously, for \(p=1\) the directed propaganda always favors opinion \(+1\), leading to another situation of full consensus at the steady state.
Summarizing, we observe full consensus states (\(M=1\)) only in special situations (\(q=0\) for any value of \(p\), and \(q=1\) for \(p=0\) or \(p=1\)). Usually, extensions of the original majority-rule model [38] aim to study such consensus situations and how the population reaches them [39, 41, 44, 45]. Here, we included a directed propaganda in a simple way, and the results show that even this simple mechanism produces the phenomenon of opinion polarization.
## 4 Final remarks
We studied the majority-rule model with the inclusion of an external propaganda (external field) acting directly on the discussion group instead of on the entire population. The local group propaganda favors opinion \(+1\) with probability \(p\) and opinion \(-1\) with probability \(1-p\). After that, all agents in the discussion group
Figure 5: Steady state average opinion (or the magnetization \(M\)) versus \(p\) for the case \(q=1\). The blue circles were obtained from the numerical simulations of the model, whereas the solid red line is the analytical result \(M=|2p-1|\). Except for \(p=0.5\), one opinion is always the dominant opinion in the population. Full consensus are observed for \(p=0\) and \(p=1\).
follow the media opinion with probability \(q\); with the complementary probability \(1-q\), the agents follow the standard majority rule. We argue that the directed propaganda mimics, in a very simple way, the complex algorithms and artificial intelligence used in advertisements on social media and online platforms. The inclusion of the directed propaganda modifies the original majority-rule model, introducing an order-disorder phase transition. We show through analytical and numerical results that this transition occurs at \(q_{c}=1/3\) for the case \(p=0.5\). For \(p\neq 0.5\) this transition is absent. Our results show that the usual majority-rule stationary state, with full consensus, is reached when the external propaganda is absent (\(q=0\)). In the other limiting case, \(q=1\), where the agents always follow the directed propaganda, the population reaches full consensus only in the extreme cases \(p=0\) and \(p=1\). However, even for a small influence of the external propaganda, the final state is reached with a majority but not full consensus. When the propaganda influence among the agents is strong enough, we show that consensus cannot be reached at all. In other words, including directed propaganda in the majority-rule model, even in this simple way, favors the polarization of opinions, and in the symmetric case \(p=0.5\) we observe full polarization (magnetization \(M=0\)).
## Appendix A Analytical calculations
Following the approach of References [41, 47, 48], we compute the stationary order parameter \(M\). Let us first define \(f_{1}\) and \(f_{-1}\) as the stationary probabilities of each possible state (\(+1\) or \(-1\), respectively), such that we have the normalization condition
\[f_{1}+f_{-1}=1. \tag{1}\]
We have to calculate the probability that a given agent undergoes the change \(+1\to-1\) or \(-1\to+1\). We are considering groups of 3 agents, so one can have distinct variations of the magnetization, depending on the states of the 3 agents and on the probabilities \(p\) and \(q\). For example, the probability of choosing at random 3 agents with opinions \(o=+1\), i.e., a configuration \((+,+,+)\), is \(f_{1}^{3}\). With probability \(q\) the agents follow the local media effect. In addition, with probability \(p\) the configuration remains \((+,+,+)\), which does not affect the magnetization of the system, since the local field in this case is \(H=+1\). However, with probability \(1-p\) the local field is \(H=-1\), so the group changes opinion to \((-,-,-)\), which causes a variation of \(-6\) in the magnetization. In other words, the magnetization decreases by 6 units with probability \(q\left(1-p\right)f_{1}^{3}\). One can denote this probability by \(r(-6)\), i.e., the probability that the magnetization variation equals \(-6\). Generalizing, one can define \(r(k)\), with \(-6\leq k\leq+6\) in this case, as the probability that the magnetization variation is \(k\) after the application of the model's rules. As the order parameter (magnetization) stabilizes in the steady
states, its average variation must vanish there, namely,
\[6\left[r(+6)-r(-6)\right]+4\left[r(+4)-r(-4)\right]+2\left[r(+2)-r(-2)\right]=0\,.\] (A.2)
In this case, we have
\[r(+6) = p\,q\,f_{-1}^{3}\] \[r(-6) = (1-p)\,q\,f_{1}^{3}\] \[r(+4) = 3\,p\,q\,f_{1}\,f_{-1}^{2}\] \[r(-4) = 3\,(1-p)\,q\,f_{1}^{2}\,f_{-1}\] \[r(+2) = 3\,p\,q\,f_{1}^{2}\,f_{-1}+3\,(1-q)\,f_{1}^{2}\,f_{-1}\] \[r(-2) = 3\,(1-p)\,q\,f_{1}\,f_{-1}^{2}+3\,(1-q)\,f_{1}\,f_{-1}^{2}\.\]
Thus, the null average variation condition Eq. (A.2), together with the normalization condition Eq. (A.1) written in the form \(f_{-1}=1-f_{1}\), gives us a third-order polynomial for \(f_{1}\), namely
\[2\left(q-1\right)f_{1}^{3}-3\left(q-1\right)f_{1}^{2}-f_{1}+p\,q=0\.\] (A.3)
The polynomial gives us three distinct solutions.
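The algebra leading from the balance condition Eq. (A.2) to this cubic is easy to verify symbolically. A sketch using SymPy (the factor of 6 is the overall constant that drops out of the balance condition):

```python
import sympy as sp

f1, p, q = sp.symbols('f1 p q')
fm = 1 - f1                                     # normalization, Eq. (A.1)

r = {  # magnetization-change probabilities r(k) as listed above
    +6: p * q * fm**3,
    -6: (1 - p) * q * f1**3,
    +4: 3 * p * q * f1 * fm**2,
    -4: 3 * (1 - p) * q * f1**2 * fm,
    +2: 3 * p * q * f1**2 * fm + 3 * (1 - q) * f1**2 * fm,
    -2: 3 * (1 - p) * q * f1 * fm**2 + 3 * (1 - q) * f1 * fm**2,
}

balance = 6 * (r[6] - r[-6]) + 4 * (r[4] - r[-4]) + 2 * (r[2] - r[-2])
cubic = 2 * (q - 1) * f1**3 - 3 * (q - 1) * f1**2 - f1 + p * q   # Eq. (A.3)
print(sp.simplify(balance - 6 * cubic))         # prints 0
```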
Let us first consider the specific case \(p=0.5\). The mentioned three solutions are given by
\[f_{1} = \frac{1}{2}\] (A.4) \[f_{1} = \frac{(q-1)\pm\sqrt{3q^{2}-4q+1}}{2(q-1)}\.\] (A.5)
The first solution, \(f_{1}=1/2\), leads to \(f_{-1}=1/2\) due to the normalization condition, Eq. (A.1). Thus, \(f_{1}=f_{-1}\) indicates a paramagnetic phase. The other 2 solutions lead to \(f_{1}\neq f_{-1}\), which indicates a ferromagnetic phase, where one of the two opinions, \(+1\) or \(-1\), is the majority opinion in the population. As Eq. (A.5) predicts two solutions (see the \(\pm\) signs), one has two branches for each value of \(q<q_{c}\), where \(q_{c}\) is the critical point obtained in the following. When \(f_{1}\) assumes one of the values in Eq. (A.5), \(f_{-1}\) consequently takes the other one [49].
The order parameter can be obtained from \(M=|f_{1}-f_{-1}|\), which gives us
\[M=\frac{\sqrt{3q^{2}-4q+1}}{1-q}=\frac{\sqrt{3(1-q)(1/3-q)}}{1-q}\sim(q_{c}-q)^{\beta}\,\] (A.6)
where \(q_{c}(p=1/2)=1/3\) and \(\beta=1/2\). Eq. (A.6) is valid for \(q<q_{c}\) and it shows an order-disorder nonequilibrium phase transition at \(q_{c}=1/3\), and the mentioned transition appears to be in the Ising mean-field universality class, as is usual in opinion models with two or three states [46].
For \(p\neq 0.5\), the general solution of Eq. (A.3) leads to 2 complex conjugate solutions, while the third solution is a real function for \(f_{1}\). In such a case, the absence of a solution \(f_{1}=1/2\) indicates the absence of a paramagnetic phase, and thus there is no phase transition, since we will always have \(f_{1}\neq f_{-1}\), leading to a ferromagnetic
state where one of the two opinions is the majority opinion in the population (\(M>0\)). Thus, the model only presents a phase transition for the case \(p=1/2\).
## Acknowledgments
NC acknowledges financial support from the Brazilian funding agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq, Grant 310893/2020-8) and Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ, Grant 203.217/2017).
|
2309.05644 | Grid-based Hybrid 3DMA GNSS and Terrestrial Positioning | The paper discusses the increasing use of hybridized sensor information for
GNSS-based localization and navigation, including the use of 3D map-aided GNSS
positioning and terrestrial systems based on different geometric measurement
principles. However, both GNSS and terrestrial systems are subject to negative
impacts from the propagation environment, which can violate the assumptions of
conventionally applied parametric state estimators. Furthermore, dynamic
parametric state estimation does not account for multi-modalities within the
state space leading to an information loss within the prediction step. In
addition, the synchronization of non-deterministic multi-rate measurement
systems needs to be accounted for.
In order to address these challenges, the paper proposes the use of a
non-parametric filtering method, specifically a 3DMA multi-epoch Grid Filter,
for the tight integration of GNSS and terrestrial signals. Specifically, the
fusion of GNSS, Ultra-wide Band (UWB) and vehicle motion data is introduced
based on a discrete state representation. Algorithmic challenges, including the
use of different measurement models and time synchronization, are addressed. In
order to evaluate the proposed method, real-world tests were conducted on an
urban automotive testbed in both static and dynamic scenarios.
We empirically show that we achieve sub-meter accuracy in the static scenario
by averaging a positioning error of $0.64$ m, whereas in the dynamic scenario
the average positioning error amounts to $1.62$ m.
The paper provides a proof-of-concept of the introduced method and shows the
feasibility of the inclusion of terrestrial signals in a 3DMA positioning
framework in order to further enhance localization in GNSS-degraded
environments. | Paul Schwarzbach, Albrecht Michler, Oliver Michler | 2023-09-11T17:35:13Z | http://arxiv.org/abs/2309.05644v1 | # Grid-based Hybrid 3DMA GNSS and Terrestrial Positioning
###### Abstract
The paper discusses the increasing use of hybridized sensor information for GNSS-based localization and navigation, including the use of 3D map-aided (3DMA) GNSS positioning and terrestrial systems based on different geometric measurement principles. However, both GNSS and terrestrial systems are subject to negative impacts from the propagation environment, which can violate the assumptions of conventionally applied parametric state estimators. Furthermore, dynamic parametric state estimation does not account for multi-modalities within the state space, leading to an information loss within the prediction step. In addition, the synchronization of non-deterministic multi-rate measurement systems needs to be accounted for.
In order to address these challenges, the paper proposes the use of a non-parametric filtering method, specifically a 3DMA multi-epoch Grid Filter, for the tight integration of GNSS and terrestrial signals. Specifically, the fusion of GNSS, Ultra-wide Band (UWB) and vehicle motion data is introduced based on a discrete state representation. Algorithmic challenges, including the use of different measurement models and time synchronization, are addressed. In order to evaluate the proposed method, real-world tests were conducted on an urban automotive testbed in both static and dynamic scenarios.
We empirically show that we achieve sub-meter accuracy in the static scenario by averaging a positioning error of \(0.64\,\mathrm{m}\), whereas in the dynamic scenario the average positioning error amounts to \(1.62\,\mathrm{m}\).
The paper provides a proof-of-concept of the introduced method and shows the feasibility of the inclusion of terrestrial signals in a 3DMA positioning framework in order to further enhance localization in GNSS-degraded environments.
## I Introduction
The need for immersive localization systems based on a variety of technological solutions and their respective market potential is ever-growing (European Union Agency for the Space Programme, 2020). Since stand-alone GNSS solutions typically do not meet the accompanying performance requirements, such as accuracy, availability and integrity, the use of additional sensor information is commonly applied (Grejner-Brzezinska et al., 2016). Next to high-technology solutions including the incorporation of optical sensors for applications like automated driving, the use of map data and cooperative sensor information has greatly increased. This leads to a general hybridization of available augmentation inputs (Egea-Roca et al., 2022).
Due to the challenges for GNSS-based positioning in harsh urban environments, 3D map-aided (3DMA) GNSS positioning has received a lot of research attention in the past years (Groves and Adjrad, 2019). Simultaneously, connected devices and corresponding infrastructure are rapidly emerging, enabling location-aware communication systems and a variety of location-based services. Based on different geometric measurement principles, such as time of arrival (ToA), angle of arrival (AoA) or time difference of arrival (TDoA), these terrestrial systems can also greatly benefit GNSS localization in dense or even GNSS-denied environments by applying hybrid or collaborative positioning (Medina et al., 2020; Zhang et al., 2021b).
Next to 3DMA GNSS, cooperative respectively collaborative positioning has been addressed in recent years (Calatrava et al., 2023), also including the integration into smartphones (Minetto et al., 2022). By applying GNSS-only cooperative positioning, a correlation between observations can occur (Zhang et al., 2021), potentially hurting positioning performance. However, since cooperative positioning is already based on the assumption of a radio connection between devices, the potential of using these radio signals for further augmentation arises. Examples include the fusion of GNSS with V2X DSRC radio (Yan et al., 2022), 5G (Bai et al., 2022) or UWB (Huang et al., 2022). As previous studies have already discussed the benefits of hybrid GNSS and terrestrial positioning, e.g. by analyzing the geometric constellation (Huang et al., 2016), the conceptualization of collaborative localization frameworks already includes the idea of additional radio information, e.g. in (Raviglione et al., 2022). Furthermore, hybrid GNSS and terrestrial localization also enables the exploitation of demanding applications, such as seamless indoor/outdoor localization (Bai et al., 2022).
The key contribution of the paper is expanding the tight integration of GNSS and terrestrial signals based on a multi-epoch Grid Filter (Schwarzbach et al., 2020). The approach uses a dynamic model, which allows the incorporation of vehicle dynamics into the state estimation. The applied algorithmic toolchain is detailed within the paper. In unison with GNSS, terrestrial systems are prone to the negative impacts of the propagation environment (Schwarzbach et al., 2021), including multipath and non-line-of-sight (NLOS) reception, leading to a violation of the assumptions of conventionally applied parametric state estimators, such as the Extended Kalman Filter (EKF). However, the usage of non-parametric filtering, such as the Grid Filter, allows a less stringent formulation of the criteria imposed on the provided input data. Furthermore, parametric filtering approaches are prone to linearization errors, as described in (Julier and Uhlmann, 2004), which is further amplified in local terrestrial systems due to the geometric constellation of reference points and rovers.
Therefore, grid-based methods can handle both non-gaussian observation residuals and multi-modalities within the state space. This is emphasized in fig. 1, which depicts the influence of measurement outliers (e.g. caused by NLOS reception) on both the positioning domain (right-skewed residual distribution) and the state space (multi-modality). In addition, the influence of the non-mitigated observation on parametric state estimation is shown.
The paper presents the capabilities for a tight integration of heterogeneous terrestrial measurements, including ToA, AoA and TDoA observations, with the concept of 3DMA GNSS positioning (Schwarzbach and Michler, 2020). This is done by evolving a multi-epoch, 3DMA, grid-based Bayesian Filter, including an algorithmic generalization for the tightly coupled data fusion of the aforementioned geometric relations provided by terrestrial systems. Since the given data fusion problem formulation is reduced to a technology-independent integration of geometric relations based on spatial map data, a synergistic foundation for the future integration of opportunistic radio signals, for example including 5G-based TDoA or AoA observations or WiFi Fine Timing Measurements (FTM), is provided.
In addition to addressing the measurement model and adapting it in accordance with hybrid GNSS and terrestrial Grid Filtering (selection of sampling probabilities and their parameter settings), we also present the integration of the motion step within the grid state space representation, allowing the incorporation of vehicle dynamics within the Grid Filter. This allows the propagation of a higher entropy of the estimated state distribution within the recursive filtering structure. Furthermore, time synchronization for tightly coupled integration is discussed, as a multi-rate sequential filtering approach is presented.
Figure 1: Problem formulation for NLOS reception in terrestrial ranging and corresponding challenges for state estimation: **(a)** Simulated ranging measurements (black) with 1 NLOS measurement (gray) and reference position (green); **(b)** Resulting right-skewed, non-gaussian ranging residuals corresponding to (a); **(c)** Resulting state-space multi-modalities and non-mitigated parametric estimation result (red).
In addition to conceptual work and a formal description of the approach, a real-world study in both a dynamic and a static scenario is presented in order to empirically evaluate the proposed method. Here, a local Ultra-Wideband (UWB) real-time localization system is deployed in order to augment multi-constellation GNSS pseudorange observations. The study was conducted on a testbed for automated and connected driving in an urban environment in Germany.
The rest of the paper is structured as follows: section II introduces the concept of the hybrid GNSS and terrestrial Grid Filter, including the discussion of potential radio technologies and respective measurement principles as well as the theory behind the hybrid Grid Filter and the integration of different geometric relations. In addition, a grid-based prediction step and a coping mechanism for multi-rate measurement synchronization are presented. The implementation of the provided theoretical background is detailed in section III. Subsequently, section IV presents the conducted measurement campaign, including both a static and a dynamic scenario. Based on this, a detailed evaluation of the implemented method based on the surveyed data is provided. The paper concludes with a summary and an outlook for future research work.
## II Hybrid GNSS and terrestrial positioning
### Terrestrial Systems
Radio-based and, more generally, wireless positioning is an interdisciplinary engineering and scientific field. Over the years, different forms, interest groups and research communities have established themselves. However, there is no unified taxonomy for these systems in the literature (Pascacio et al., 2021), since research and development on the different technologies took place at different times. Nevertheless, it can be stated that the categorization of wireless localization systems rests on three pillars (Esposito and Ficco, 2011; Tariq et al., 2017), which are presented in fig. 2.
In unison with GNSS localization, the basis for determining location-related variables is the derivation of signal properties, more precisely properties of the signal transmission, e.g., connectivity or physical quantities of the signal. At this stage, a distinction between range-free and range-based (also referred to as geometric) approaches can be made. This is further visualized in fig. 3, which also includes the applicable measurement principles.
For geometry-free approaches, the achievable performance is strongly (positively) correlated with the number of available infrastructure nodes and end devices that participate in the positioning process. In general, the achievable positioning quality of these geometry-free approaches is comparatively low (Chowdhury et al., 2016).
Figure 3: Classification of wireless measurement principles as a basis for position determination.
Figure 2: Pillars of radio-based localization systems.
Range-based approaches, on the other hand, allow the geometric interpretation of derived signal quantities. Applicable radio technologies and corresponding geometric measurement principles are mapped in table 1. This gives an overview of candidate systems and allows a discussion of the advantages of the measurement principles, e.g. hardware availability, computational complexity or achievable accuracies.1 Further information can be found in (Zafari et al., 2019; Mendoza-Silva et al., 2019).
Footnote 1: ToA measurement principles are not applicable in terrestrial radio systems, as clock synchronization, or a correction of the clock offsets between infrastructure and mobile devices, is not given.
In this context, promising system candidates for hybrid GNSS/terrestrial positioning for transportation and logistics applications, as well as for seamless indoor/outdoor positioning, are given by:
* WiFi-based Fine Timing Measurements (FTM) (Yu et al., 2020; Gentner et al., 2020);
* BLE-based AoA implemented from version 5.1 (Sambu and Won, 2022) and PoA Ranging (Zand et al., 2019);
* 5G-based AoA and TDoA integration (Talvitie et al., 2019; Xhafa et al., 2021);
* UWB-based Two-Way Ranging or TDoA (Chiasson et al., 2023).
As previously stated, the presented work focuses on the hybrid integration of terrestrial systems and GNSS based on a Grid Filter implementation. Individual quantities of different origin are therefore abstracted to their geometric primitives and integrated on this level. With this tightly coupled approach, no system-individual position solutions need to be available for integration, and the requirements concerning the available observation quantities are reduced. Thus, individual observations can also be used to support the position estimation.
### Hybrid Grid Filter
The use of a grid-based representation of the state space for probabilistic filtering is fundamentally suitable for a variety of localization tasks and heterogeneous inputs (Thrun et al., 2005). In general, the proposed hybrid Grid Filter follows the well-studied prediction and observation structure of a Recursive Bayes Filter (Thrun et al., 2005), similar to the well-known Extended Kalman Filter or the Particle Filter. In this section we present the general framework of a multi-epoch, hybrid Grid Filter and provide background on the identified challenges for hybrid Grid Filters: generic measurement models, state propagation for dynamic applications and time synchronization for hybrid information fusion.
The general framework in accordance with the Recursive Bayes Filter (RBF) structure is given in fig. 4. The following sections detail the accompanying calculation steps and their theoretical background as well as the final implementation of the method.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & WiFi & BLE & ZigBee & UWB & 5G \\ \hline RSS & ✓ & ✓ & ✓ & ✓ & ✓ \\ AoA & ✓ & ✓ & ✓ & ✓ & ✓ \\ PoA & ✗ & ✓ & ✓ & ✗ & ✓ \\ RTT & ✓ & ✗ & ✗ & ✓ & ✗ \\ TDoA & ✗ & ✗ & ✗ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mapping of range-based measurement principles and available technologies for radio-based positioning.
Figure 4: Generic RBF structure (according to Schwarzbach et al. (2020)).
#### a) Measurement Model
As aforementioned, the focus of the work lies on a general integration of geometric inputs, complementary to global GNSS positioning. Similar to 3DMA likelihood-based localization frameworks, e.g. (Groves et al., 2020) or (Ng et al., 2020), a translation of available measurements in a probabilistic manner needs to be performed. For simplification reasons, the following explanations concerning the implementation of the measurement models are detailed using a local two-dimensional and equidistant grid. A generalization using digital maps can easily be done by using discrete, multidimensional and globally referenced geodata, such as digital surface models (DSM) (Schwarzbach and Michler, 2020). The basic computational steps are given in fig. 5:
First, a generic, discrete and finite state space has to be defined. The result of this initialization are \(i=1,\ldots,I\) grid points \(\mathbf{x}_{i}\) of arbitrary dimension. Given \(n=1,\ldots,N\) available reference points2 \(\mathbf{x}^{n}=[x^{n},y^{n},z^{n}]^{\intercal}\), the geometric relations between the defined state space \(\mathbf{x}_{i}\) and \(\mathbf{x}^{n}\) are considered by calculating the innovations \(\mathbf{y}_{i}^{n}\). This is done by taking the difference of the present observations \(\mathbf{\mathcal{Z}}^{n}\) and the known relations between grid points and reference points \(\mathbf{\Gamma}_{i}^{n}\):
Footnote 2: Reference points can be given as global satellites, base stations of mobile radio, WiFi access points or sensor network anchors.
\[\mathbf{y}_{i}^{n}=\mathbf{\mathcal{Z}}^{n}-\mathbf{\Gamma}_{i}^{n}. \tag{1}\]
The most common relations applicable for radio-based positioning are summarized in table 2.
For the innovations obtained in eq. (1) for each grid point and each observation, a likelihood is subsequently calculated under the assumption of a statistical model. This statistical model is available in the form of a probability density function (pdf) and characterizes the expected uncertainties of the respective observations. Given a generic statistical distribution (e.g. Gaussian) \(\mathcal{D}(0,\mathbf{\Sigma})\), we can obtain the likelihood of each position candidate from (Thrun et al., 2005):
\[\mathbf{p}_{i}^{n}(\mathbf{\mathcal{Z}}^{n}|\mathbf{x}_{i})\leftarrow\mathcal{D}(\mathbf{y}_{i}^{n},\mathbf{\Sigma})\, \tag{2}\]
where \(\mathbf{\Sigma}\) represents the scale parameter and therefore the statistical properties of the assumed pdf. In the case of Gaussian uncertainty, this corresponds to the covariance matrix. The final combination of all observations, respectively their likelihoods, over all grid points is performed (here without consideration of the propagated likelihoods discussed in section II.2 b)) by means of:
\[\mathbf{p}_{i}=\eta\sum_{n}\mathbf{p}_{i}^{n}(\mathbf{\mathcal{Z}}^{n}|\mathbf{x}_{i})\, \tag{3}\]
where \(\eta\) represents the normalization factor according to Bayes' rule. The results of this calculation for the geometric relations given in table 2 are depicted in fig. 6, showing the likelihood obtained from four reference points (red). The statistical model applied is a Gaussian distribution.
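For illustration, the following sketch evaluates Eqs. (1)–(3) for ranging (ToA/RTT) observations on a two-dimensional, equidistant grid. The anchor coordinates, ranges and noise level are purely exemplary and not taken from the later measurement campaign:

```python
import numpy as np
from scipy.stats import norm

xs = np.linspace(0.0, 50.0, 101)
gx, gy = np.meshgrid(xs, xs)                          # grid points x_i
anchors = np.array([[0., 0.], [50., 0.], [0., 50.], [50., 50.]])  # reference points x^n
ranges = np.array([36.1, 42.4, 28.3, 36.1])           # observations Z^n, true point (20, 30)
sigma = 0.5                                           # assumed ranging std. dev. [m]

likelihood = np.zeros_like(gx)
for (ax, ay), z in zip(anchors, ranges):
    gamma = np.hypot(gx - ax, gy - ay)                # known relation Gamma_i^n
    y = z - gamma                                     # innovation, Eq. (1)
    likelihood += norm.pdf(y, scale=sigma)            # per-observation likelihood, Eq. (2)

likelihood /= likelihood.sum()                        # normalization eta, Eq. (3)
i_map = np.unravel_index(np.argmax(likelihood), likelihood.shape)
print("most likely grid point:", gx[i_map], gy[i_map])
```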
In addition, Grid Filtering easily allows a combination of different geometrical relations within one or consecutive measurement steps, which also enables localization systems with fewer required infrastructure nodes or reference points. The integration of GNSS and terrestrial observations is further detailed in section III.
\begin{table}
\begin{tabular}{c c} \hline \hline Geometrical relation & Calculation \\ \hline Distance (ToA / RTT) & \(\mathbf{\Gamma}_{\text{distance},i}^{n}=\left|\left|\mathbf{x}^{n}-\mathbf{x}_{i}\right|\right|_{2}\) \\ \hline Hyperbolic (TDoA) & \(\mathbf{\Gamma}_{\text{hyperbolic},i}^{n}=\left|\left|\mathbf{x}^{n}-\mathbf{x}_{i}\right|\right|_{2}-\left|\left|\mathbf{x}^{n+1}-\mathbf{x}_{i}\right|\right|_{2}\) \\ \hline Angle (AoA) & \(\mathbf{\Gamma}_{\text{angle},i}^{n}=\arctan\frac{y^{n}-y_{i}}{x^{n}-x_{i}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Calculation of geometric relations between reference and grid points.
Figure 5: Schematic approach to transform geometric relations into probabilistic grid-based state space.
#### b) Prediction step
An important component of dynamic state estimation, respectively multi-epoch filtering, is the prediction step, which takes a motion model and/or sensor information about the object motion into account. For dynamic state estimation based on the EKF approach, velocities, direction of motion, accelerations, and other quantities are considered in the course of the mathematical modeling, depending on the model degree (Balzer et al., 2014). For the estimation of these motion quantities, the use of EKF approaches and their derivatives has become established, since higher-dimensional state vectors can be implemented in a computationally efficient manner and the involved sensors satisfy the strict requirements of parametric estimation methods. This has also been applied in previous works addressing multi-epoch 3DMA filtering (Zhong and Groves, 2022b).
Nevertheless, an integration of rudimentary motion information within the prediction step of the Grid Filter is possible and absolutely necessary in the context of the possible asynchronicity of observations of heterogeneous origin (cf. section II.2 c)). The grid-based computation of the prediction step is based on the determination of the transition likelihoods between the realizations (position candidates) \(\mathbf{x}_{i}\) present in the state space. For this purpose, a probability-based plausibility check of the transition from one position candidate to another is performed, based on dynamics data of an object or on model assumptions.
The two basic quantities for characterizing the motion of an object within a grid-based state representation are the object velocity \(v_{k}\) and its heading \(\theta_{k}\). The representation of the motion in this form is called odometry-based information (Thrun et al., 2005). The grid integration here is two-dimensional, within the \(x\)-\(y\) plane of the object to be located.
The dynamics information is processed individually within the likelihood grid and then combined. The underlying geometric relations are applied in analogy to the likelihood transfer of the cooperative measured quantities listed in table 2. Here, a velocity measurement represents a time-scaled distance determination, where \(\Delta T_{k}\) represents the temporal length of the prediction step as a function of the measurement rates and the availability of observation information. Furthermore, the heading represents an angular relation between the object orientation and a reference direction.
The peculiarity of the non-parametric estimation approach, compared to parametric estimators for dynamical systems (e.g. EKF), is that the motion prediction is performed not only for one position realization (in the case of parametric estimation this corresponds to the state vector consisting of the estimated mean values), but for all position hypotheses \(\mathbf{x}_{i}\). Thus, the motion prediction of non-parametric methods is associated with a comparatively significant increase in computational complexity. However, this also allows the consideration of multi-modalities present in the state space. Thus, it is possible to retain an increased information content within the state space in the prediction step compared to parametric approaches.
According to the definition of RBF, an existing velocity measurement or model assumption \(v\) is also considered as a probabilistic quantity and thus represents in its simplest form an average value of the existing velocity in the direction of motion. This is further subject to an uncertainty \(\sigma_{v}^{2}\), which can be parameterized empirically from available sensor data or model-based. The resulting prediction of motion is given as follows. First, for a candidate position \(\mathbf{x}_{i}\), the known distance \(\boldsymbol{d}_{i,j}\) to all other candidate positions \(\mathbf{x}_{j}\) is calculated:
\[\boldsymbol{d}_{i,j}=\boldsymbol{\Gamma}_{i,j}=\left|\left|\mathbf{x}_{j}- \mathbf{x}_{i}\right|\right|_{2}\quad\text{with}\quad i\neq j\;. \tag{4}\]
Figure 6: Likelihood representation of geometric position lines in an equidistant two-dimensional grid based on obtaining spatial relations in a radio network (anchor points red, value of Likelihood color coded): (**a**) Distances, (**b**) Hyperbolic und (**c**) Angle.
Subsequently, the determination of the residuals \(\mathbf{y}_{i,j}^{\text{v}}\) between the determined distances \(\mathbf{d}_{i,j}\) and the present velocity measurement \(v_{k}\) in combination with the time difference \(\Delta T_{k}\) is performed:
\[\mathbf{y}_{i,j}^{\text{v}}=v_{k}\cdot\Delta T_{k}-\mathbf{d}_{i,j}\;. \tag{5}\]
Thus, concentric residuals arise, which have their origin in the used position hypothesis and whose radius corresponds to the time-scaled distance equivalent of the velocity measurement. In analogy to the calculation of the likelihood of the observations, a probabilistic sampling of the residuals based on an assumed stochastic model is performed to determine the predicted likelihood \(\overline{\mathbf{p}}_{i,j}^{\text{v}}\), which is assumed to follow a Gaussian Distribution \(\mathcal{N}(0,\sigma_{v}^{2})\) :
\[\overline{\mathbf{p}}_{i,j}^{\text{v}}\leftarrow\mathcal{N}(\mathbf{y}_{i,j}^{ \text{h}},\sigma_{\text{v}}^{2})\;. \tag{6}\]
The given calculation steps are repeated for all grid points. As a final step, the total likelihood is calculated:
\[\overline{\mathbf{p}}_{i}^{\text{v}}=\sum_{j}\overline{\mathbf{p}}_{i,j}^{\text{v}}\;. \tag{7}\]
The integration of the _heading_ information is done in analogy, considering the measured or assumed _heading_ \(\theta_{k}\), its uncertainty \(\sigma_{\text{h}}^{2}\), the residuals of the _heading_ \(\mathbf{y}_{i,j}^{\text{h}}\) and the predicted likelihood \(\overline{\mathbf{p}}_{i}^{\text{h}}\) by means of:
\[\mathbf{\alpha}_{i,j}=\arctan\frac{y_{j}-y_{i}}{x_{j}-x_{i}}\quad\text{with}\quad i\neq j \tag{8}\]
\[\mathbf{y}_{i,j}^{\text{h}}=\theta_{k}-\mathbf{\alpha}_{i,j} \tag{9}\]
\[\overline{\mathbf{p}}_{i,j}^{\text{h}}\leftarrow\mathcal{N}(\mathbf{y}_{i,j}^{\text{h}},\sigma_{\text{h}}^{2}) \tag{10}\]
\[\overline{\mathbf{p}}_{i}^{\text{h}}=\sum_{j}\overline{\mathbf{p}}_{i,j}^{\text{h}}\;. \tag{11}\]
The velocity and heading likelihoods are finally combined into the overall predicted likelihood
\[\overline{\mathbf{p}}_{i}=\overline{\mathbf{p}}_{i}^{\text{v}}\cdot\overline{\mathbf{p}}_{i}^{\text{h}}\;. \tag{12}\]
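A compact sketch of this grid-based prediction is given below. In contrast to the exposition above, the transition likelihoods are additionally weighted by the prior grid probabilities (Chapman-Kolmogorov form), and the heading residual is wrapped to \((-\pi,\pi]\); all names and default values are exemplary.

```python
import numpy as np
from scipy.stats import norm

def predict(points, prior, v, theta, dT, sigma_v=0.5, sigma_h=np.radians(10.0)):
    """Sketch of the grid-based prediction step, Eqs. (4)-(12).

    points : (I, 2) grid coordinates, prior : (I,) probabilities of step k-1.
    Complexity is O(I^2), since every position hypothesis is propagated.
    """
    dx = points[None, :, 0] - points[:, None, 0]      # x_j - x_i
    dy = points[None, :, 1] - points[:, None, 1]
    d = np.hypot(dx, dy)                              # Eq. (4)
    p_v = norm.pdf(v * dT - d, scale=sigma_v)         # Eqs. (5)-(6)
    alpha = np.arctan2(dy, dx)                        # Eq. (8)
    res = np.angle(np.exp(1j * (theta - alpha)))      # wrapped residual, Eq. (9)
    p_h = norm.pdf(res, scale=sigma_h)                # Eq. (10)
    trans = p_v * p_h                                 # combined, Eq. (12)
    np.fill_diagonal(trans, 0.0)                      # i != j
    pred = prior @ trans                              # propagate all hypotheses
    return pred / pred.sum()
```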
Due to the finite nature of the discrete state space representation, a state space propagation for global positioning problems also needs to be accounted for, as moving objects are able to leave the initially defined state space. A procedure for this task is detailed in (Zhong and Groves, 2022b) and is therefore not further addressed in this paper.
#### c) Time Synchronization
Time synchronization for hybrid positioning methods, especially with multi-rate systems, has recently been addressed for GNSS and 5G (Bai et al., 2022) or GNSS and UWB in (Guo et al., 2023). A proper handling of asynchronous observations, especially considering different measurement rates, is imperative, as it can otherwise lead to cumulative errors in the data fusion process. This concern is further amplified in the presence of additional terrestrial systems, due to the unknown time offset between the individual systems and their unequal, potentially non-deterministic update rates.
As pointed out in (Guo et al., 2023), it is essential that observations at different rates can be processed. In (Retscher et al., 2023), individual measurement models for GNSS and combined GNSS/UWB are defined. However, a simultaneous processing can only be realized if the time offset between the systems is compensated, especially in dynamic scenarios. Another major challenge is the handling of out-of-sequence measurements (Muntzinger et al., 2010). However, since we only discuss the integration of radio-based inputs, the present measurement rates are comparatively low. Therefore, only a timing consistency check between the time stamps of the sensor information is performed. In addition, we opt to implement a multi-rate sequential filtering approach based on the sensor time stamps of the individual sensor systems.
Essentially, the multi-rate measurements are compensated by the introduced motion model, accounting for possible pose changes of the object within a measurement update window. This procedure is schematically presented in fig. 8 and sketched in code below. Since a grid-based motion step is implemented (cf. section II.2 b)), a high entropy about the system's state is stored within the state space. Another advantage of the given approach is that, once a position fix is achieved, there is generally no limitation on the number of available measurements to perform the described measurement step. This enables the utilization of system-individual observations at different measurement epochs, while also allowing potential multi-modalities to be accounted for.
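A minimal sketch of this timestamp-driven, sequential fusion loop is given below; `grid` stands for a hypothetical filter object exposing the prediction, measurement and estimation steps described above, and `motion_source` for any provider of velocity and heading between two time stamps.

```python
import heapq

def run_filter(gnss_epochs, uwb_epochs, motion_source, grid):
    """Sketch of multi-rate sequential fusion: the (timing-checked) GNSS and
    UWB streams are merged by sensor time stamp; the grid-based motion step
    bridges the rate gap before every measurement update."""
    queue = heapq.merge(
        ((t, 'gnss', z) for t, z in gnss_epochs),
        ((t, 'uwb', z) for t, z in uwb_epochs),
    )
    t_last = None
    for t, kind, z in queue:
        if t_last is not None:
            v, theta = motion_source(t_last, t)      # e.g. odometry between stamps
            grid.predict(v, theta, dT=t - t_last)    # motion compensates the gap
        grid.update(kind, z)                         # sensor-specific measurement model
        yield t, grid.estimate()                     # state estimation after each update
        t_last = t
```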
#### d) State estimation
Given the calculation of an overall likelihood consisting of the prediction step and the hybrid measurement models, the final state estimation has to be performed in order to obtain the current receiver position. This can be achieved either by performing a maximum likelihood, respectively, for the presented Bayesian approach, a maximum a-posteriori (MAP) estimation (Bar-Shalom et al., 2001), or by computing a weighted mean (WM) over all samples (Minetto, 2020). However, both approaches pose a variety of challenges for global positioning, as summarized in table 3.
\begin{table}
\begin{tabular}{l l l} \hline \hline & Advantages & Disadvantages \\ \hline MAP & Captures the maximum in case of a multimodal distribution in the state space; simple, efficient implementation & Accuracy of the estimation is correlated with the grid resolution, as the estimate can only correspond to defined position hypotheses; potential loss of information, since the spatial realization of the likelihood is not taken into account \\ \hline WM & Total information of the likelihood of the state space is considered & For global positioning, the state space covers a large area, so irrelevant areas are considered \\ \hline \hline \end{tabular}
\end{table}
Table 3: Advantages and disadvantages of conventional state estimators for grid-based estimation.
Figure 8: Schematic representation of fusion of multirate measurements as sequential data fusion based on the presented prediction step.
Therefore, we apply a two-step approach for deriving the current state estimate \(\hat{\mathbf{x}}_{k}\) at time step \(k\):
1. Calculation of MAP by maximizing the obtained likelihood function
2. Calculation of the WM using the MAP as circle center, given a user-defined radius \(\mathrm{R}\) including \(I_{\mathrm{R}}\) grid points.
This yields the state estimation \(\hat{\mathbf{x}}_{k}=[\hat{x}_{k},\hat{y}_{k},\hat{z}_{k}]^{\intercal}\) with:
\[\hat{x}_{k}=\frac{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(x_{i}|\mathbf{\mathcal{Z}})\cdot x_{i}}{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(x_{i}|\mathbf{\mathcal{Z}})}\quad\hat{y}_{k}=\frac{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(y_{i}|\mathbf{\mathcal{Z}})\cdot y_{i}}{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(y_{i}|\mathbf{\mathcal{Z}})}\quad\hat{z}_{k}=\frac{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(z_{i}|\mathbf{\mathcal{Z}})\cdot z_{i}}{\sum_{i}^{I_{\mathrm{R}}}\mathbf{p}_{i}(z_{i}|\mathbf{\mathcal{Z}})}\;. \tag{13}\]
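A direct translation of this two-step estimator reads as follows (all names are exemplary):

```python
import numpy as np

def estimate_state(points, probs, radius=2.0):
    """Sketch of the two-step state estimation: MAP candidate first,
    then the weighted mean of Eq. (13) over all grid points within R."""
    x_map = points[np.argmax(probs)]                             # step 1: MAP
    inside = np.linalg.norm(points - x_map, axis=1) <= radius    # points within R
    w = probs[inside]
    return (points[inside] * w[:, None]).sum(axis=0) / w.sum()  # step 2: WM
```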
## III Implementation
### Overview
The implementation of the approach is based on the theoretical models and design considerations outlined in section II. An overview is provided in fig. 9. The sensor components are synchronized to match a common time frame. As for GNSS, the sensor data consist of satellite ephemeris data (NAV) and estimates of pseudoranges, carrier phase, Doppler and carrier-to-noise ratio (OBS). In a preprocessing step, satellite positions are calculated and pseudoranges are corrected for error components which can be modeled. This includes satellite clock bias and atmospheric errors, as described in basic GNSS literature (Kaplan et al., 2005). The GNSS measurement step estimates the likelihood of each grid point based on the given satellite positions and corrected pseudoranges. The measurement model is described in detail in section III.2 a). The respective UWB measurement model relies on given anchor positions and ranging measurements for likelihood estimation. Furthermore, a motion model is incorporated, which uses heading and velocity measurements for likelihood estimation. Those measurements can be based on IMU sensor data, odometry or other sensor hardware. If these inputs are absent, generic motion models can be applied. The resulting likelihood of each measurement step is then used to update the grid probabilities. Furthermore, each GNSS or UWB step triggers the state estimation. With this, the state vector is updated based on the current grid probability space, as discussed in section II.2 d).
Figure 9: Flow chart of the implementation of a hybrid GNSS-UWB-motion Grid Filter.
### Implementation of measurement models
This section details the implementation of the GNSS and terrestrial measurement models. Realizations of these individual likelihoods are visualized in fig. 10. The figures are obtained from the surveyed dataset described in section IV. For both figures, the respective priors are ignored. The current state estimation based on the observation step and the corresponding reference position are also indicated.
#### a) GNSS measurement model
The presented work utilizes the well-studied Between Satellite Single Differencing (BSSD) (Groves and Adjrad, 2019; Suzuki, 2019; Schwarzbach and Michler, 2020), which is suitable for code-based 3DMA positioning approaches using discrete state space representations. This is due to the removal of the receiver clock error, whose discretization is not feasible.
The generic model for a pseudorange \(\rho_{r}^{s}\) is given as (Langley et al., 2017):
\[\rho_{r}^{s}=d_{r}^{s}+c\cdot(\delta t_{r}-\delta t^{s})+\delta_{ \text{ion}}^{s}+\delta_{\text{tro}}^{s}+\varepsilon_{r}^{s}\, \tag{14}\]
where \(d_{r}^{s}=\left|\left|\mathbf{x}^{s}-\mathbf{x}_{r}\right|\right|_{2}=\sqrt{(x^{s}-x_{r})^{2}+(y^{s}-y_{r})^{2}+(z^{s}-z_{r})^{2}}\) represents the Euclidean distance between receiver \(r\) and satellite \(s\), consisting of the respective Cartesian coordinates. In addition, \(\delta t_{r}\) and \(\delta t^{s}\) represent the receiver and the satellite clock error, respectively, \(\delta_{\text{ion}}^{s}\) and \(\delta_{\text{tro}}^{s}\) the atmospheric error influences, and finally \(\varepsilon_{r}^{s}\) the unmodeled, uncorrelated error terms of each pseudorange measurement. Following up on this, BSSD performs a differencing between respective observations. For two generic satellites \(s_{1}\) and \(s_{2}\) this yields:
\[\Delta\rho_{r}^{s_{1},s_{2}}=\rho_{r}^{s_{1}}-\rho_{r}^{s_{2}}. \tag{15}\]
Given eq. (14) we obtain:

\[\Delta\rho_{r}^{s_{1},s_{2}} =d_{r}^{s_{1}}-d_{r}^{s_{2}}+c(\delta t_{r}-\delta t_{r})-c(\delta t^{s_{1}}-\delta t^{s_{2}})+\delta_{\text{ion}}^{s_{1}}-\delta_{\text{ion}}^{s_{2}}+\delta_{\text{tro}}^{s_{1}}-\delta_{\text{tro}}^{s_{2}}+\varepsilon_{r}^{s_{1},s_{2}} \tag{16}\] \[=d_{r}^{s_{1}}-d_{r}^{s_{2}}-c\,\Delta\delta t^{s_{1},s_{2}}+\Delta\delta_{\text{ion}}^{s_{1},s_{2}}+\Delta\delta_{\text{tro}}^{s_{1},s_{2}}+\varepsilon_{r}^{s_{1},s_{2}}. \tag{17}\]
The obtained model corresponds to the hyperbolic geometrical model introduced in section2.2. Besides the elimination of the receiver-specific clock error, the observable combination mainly affects the unmodeled error terms. Due to the observation combination of BSSD and according to the error propagation with the assumption of \(\sigma_{r}^{s}=\sigma_{r}^{s_{1}}=\sigma_{r}^{s_{2}}\) the influence of the stochastic error terms is given as follows (Odijk and Wanninger, 2017):
\[\sigma_{r}^{s_{1},s_{2}}=\sigma_{\text{SD}}=\sqrt{2}\,\sigma_{r}^{s}. \tag{18}\]
Figure 10: PDF after the measurement step. Reference vector (blue) and current estimate (red). Calculated Likelihoods are color coded.
Concerning the parametrization of \(\sigma_{r}^{s}\) a variety of procedures are applicable. At first, generic values according to system specifications can be set, e.g. using the GPS Signal in Space pseudorange error budget of the Standard Positioning Service Schwarzbach and Michler (2020). Furthermore, an empirical parameter setting based on available measurements can be obtained Groves et al. (2020). For simplification, we stick to the former possibility and assume a mean-free Gaussian distribution given a line-of-sight (LOS) standard deviation of \(\sigma_{r}^{s}=7.8\,\mathrm{m}\), leading to \(\sigma_{SD}=\sqrt{2}\sigma_{r}^{s}\).
In recent literature, a variety of algorithmic 3DMA approaches with a holistic focus on adequate NLOS handling based on 3D map data are discussed, e.g. (Groves et al., 2020). As this is not the sole focus of this work, we want to present a more general approach, which can also be implemented without the knowledge of building information or other spatial data apart from the utilized discrete state space.
BSSD approaches are commonly implemented by selecting a pivot satellite assumed to be LOS, which is then differenced from all other observations. Depending on the estimated LOS/NLOS state of the observation, different stochastic models are applied. As presented in (Schwarzbach and Michler, 2020), a full set differencing between all observed satellites is also applicable, resulting in a symmetric error distribution. An example based on the static dataset presented in section IV is given in fig. 13a.
This empirical survey reveals that the differencing across all satellites produces characteristic effects:
* Case 1: The differencing of LOS measurements leads to a mean-free residual distribution.
* Case 2 and 3: Mixture densities for combined LOS/NLOS differencing, depending on the order of differencing, i.e., whether the LOS satellite is differenced from the NLOS one or vice versa. Case 2 corresponds to a positive mean, case 3 to a negative one.
* Case 4: Measurement outliers, which do not fit any of the former cases.
In order to account for these effects, the use of a Gaussian mixture model (GMM) is suggested, allowing the exploitation of the full set differencing. In our case, a \(3\)-component GMM is applied for sampling, whose parametrization is presented in section IV. At first, a visibility prediction of the observed satellites based on the building boundary information (Groves et al., 2020) is performed. Subsequently, full set differencing for all satellites \(i\) and \(j\) (\(i\neq j\)) is performed. The selection of the GMM component for sampling corresponds to the cases listed above (a sketch of this selection logic follows the list):
* \(i\) and \(j\) are predicted as LOS satellites, GMM\({}_{1}\) is selected;
* \(j\) is predicted LOS and \(i\) is predicted NLOS, GMM\({}_{2}\) is selected;
* \(i\) is predicted LOS and \(j\) is predicted NLOS, GMM\({}_{3}\) is selected;
* For any other case, the differenced observations are ignored.
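The following sketch illustrates the full set differencing and the case-dependent GMM sampling; `gmm` is a hypothetical mapping from case index to component parameters, and the additive combination over all differenced pairs follows Eq. (3).

```python
import numpy as np
from scipy.stats import norm

def bssd_likelihood(grid_pts, sat_pos, prs, los, gmm):
    """Sketch of the full set BSSD likelihood with LOS/NLOS-dependent
    GMM selection; gmm[case] = (weights, means, sigmas)."""
    # distances between all I grid points and all S satellites, shape (I, S)
    d = np.linalg.norm(sat_pos[None, :, :] - grid_pts[:, None, :], axis=2)
    p = np.zeros(len(grid_pts))
    S = len(prs)
    for i in range(S):
        for j in range(S):
            if i == j:
                continue
            if los[i] and los[j]:
                case = 1              # LOS-LOS pair: mean-free component
            elif los[j] and not los[i]:
                case = 2              # NLOS minus LOS: positive-mean component
            elif los[i] and not los[j]:
                case = 3              # LOS minus NLOS: negative-mean component
            else:
                continue              # remaining pairs are ignored
            y = (prs[i] - prs[j]) - (d[:, i] - d[:, j])   # BSSD innovation
            w, mu, sig = gmm[case]
            p += sum(wk * norm.pdf(y, mk, sk) for wk, mk, sk in zip(w, mu, sig))
    return p / p.sum()
```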
#### b) Terrestrial measurement model
In accordance with section II.1, a variety of technologies and accompanying measurement principles for positioning is available. The empirical validation of the method presented in section IV uses a UWB real-time location system based on two-way ranging, which is a realization of RTT. Respective implementations are detailed in (Sang et al., 2019), specified in the UWB standard and therefore available in commercial products. The underlying geometrical model corresponds to the distance measurement described in section II.2 a).
Unlike in 3DMA approaches, and due to the volatile nature of the propagation between stationary terrestrial reference points and mobile devices, geometry-based NLOS identification cannot easily be applied. For this task, however, a variety of algorithmic mitigation strategies exist (Guvenc et al., 2007). The probabilistic error mitigation favored in this work uses a likelihood mixture model, as originally presented in (Fox et al., 2001). The mixture likelihood uses a set of proposal distributions to sample from and combines them using a mixture ratio \(\phi\). For the example of two proposal distributions \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), the mixture likelihood \(\mathcal{D}_{\text{ML}}\) yields:
\[\mathcal{D}_{\text{ML}}=(1-\phi)\mathcal{D}_{2}+\phi\mathcal{D}_{1} \tag{19}\]
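A possible realization of Eq. (19) mixes a mean-free Gaussian LOS proposal with a broad uniform outlier proposal; the span of the outlier component and the default value of \(\phi\) are exemplary choices, not prescribed by the model:

```python
from scipy.stats import norm, uniform

def uwb_mixture_likelihood(y, phi=0.9, sigma_los=0.31, outlier_span=100.0):
    """Sketch of the mixture likelihood D_ML of Eq. (19) for a UWB
    ranging innovation y [m]."""
    d1 = norm.pdf(y, loc=0.0, scale=sigma_los)                      # proposal D_1 (LOS)
    d2 = uniform.pdf(y, loc=-outlier_span / 2, scale=outlier_span)  # proposal D_2 (outliers)
    return phi * d1 + (1 - phi) * d2
```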
The parametrization is based on a statistical analysis of UWB ranging error distributions presented in (Schwarzbach et al., 2021). Essentially, UWB observations and the accompanying error influences \(\epsilon\) are categorized into three propagation scenarios:
1. LOS reception following a mean-free Gaussian distribution,
2. NLOS reception leading to a right-skewed residual distribution or
3. Measurement outliers and failures.
The individual error magnitudes and corresponding occurrence probabilities are further discussed in section IV.
#### c) Motion model
The motion model is calculated as described in section II.2.2 b). Basically, there are two ways of implementing the motion model. On the one hand, model-based assumptions about the motion of the object to be located can be formulated. This usually includes a model assumption of the velocity, associated with a corresponding uncertainty. On the other hand, the use of dynamic data of the object is possible. A practical implementation is the odometry motion model (Thrun et al., 2005), which uses the velocity of the object (speed over ground) and its heading.
The information necessary for applying the odometry motion model can be obtained from a variety of sensory inputs. In this work, we simply use the velocity and heading information estimated from the reference receiver, which is introduced in section IV. In unison with the measurement step, the (expected or assumed) quality of information for the prediction step can be modeled in a probabilistic manner (cf. eqs. (4) and (12)). The results of the prediction sampling step for the velocity, heading and combined information are depicted in fig. 11.
## IV Data acquisition and evaluation
### Methodology
In order to evaluate the performance of the implemented approach, test runs were performed on the testbed for automated and connected driving in Dresden, Germany. Those runs consist of static and dynamic scenarios. The static test runs were conducted at fixed locations, which were geo-referenced with a survey-grade GNSS setup. The data from the static samples are used to analyze the distribution of the GNSS pseudorange residuals. The parameters of the resulting multi-modal distribution are then applied in the parametrization of the final hybrid Grid Filter. This filter is evaluated in a dynamic driving test. In table 4, the conducted scenarios, used sensors and reference receivers as well as the outcomes of each scenario are summarized.
The applied accuracy metric \(\mathcal{Q}\) corresponds to the three-dimensional L2-norm \(\|\cdot\|_{2}\) between the reference \(\mathbf{x}_{\text{ref},k}\) and estimated \(\hat{\mathbf{x}}_{k}\) positions at each time step \(k\):
\[\mathcal{Q}=||\mathbf{x}_{\text{ref},k}-\hat{\mathbf{x}}_{k}||_{2} \tag{20}\]
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & Test case & Description & Sensors & Outcomes & Reference \\ \hline I & static & 14 points on testbed & u-blox F9P GNSS receiver, ZigPos UWB RTLS & GNSS PR residuals, UWB ranging residuals & Leica GS15 \\ II & dynamic & Evaluation run & u-blox F9P GNSS receiver, ZigPos UWB RTLS & Algorithmic evaluation & NovaTel PwrPak7 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test cases on the testbed for automated and connected driving.
Figure 11: Motion likelihood distribution for the most probable current candidate point (red dot). Motion vector (red) is added for clarity.
### Data Set
For GNSS data collection, a u-blox F9P receiver was used on the vehicle. The recorded data include pseudorange estimates of GPS L1, Galileo E1 and Glonass L1 FDMA. As the terrestrial radio system, a UWB system by Zigpos GmbH was used, which features two-way ranging based on DecaWave chip sets. For this purpose, 11 UWB anchors were placed and geo-referenced in order to be integrated with GNSS. To enable a comprehensive performance and error analysis, all measurements were referenced. The offset between UWB and GNSS antenna is corrected using the known baseline parameters between both. The static scenario consists of 4038 measurement epochs on 14 points on the testbed (cf. fig. 12). The aim is to analyze the error distributions of the applied sensor systems in the given operation environment. For the static scenarios, a survey-grade Leica GS15 with VRS RTK correction data was used as reference receiver.
The dynamic scenario consists of a test run using a test vehicle depicted in fig. 12. The data set consists of 2066 raw measurement epochs, of which 1411 include GNSS and 655 include UWB data. The UWB measurement rate was set comparatively low to showcase the influence of terrestrial augmentation for GNSS. The reference was recorded using a Novatel PwrPak 7 with VRS RTK corrections.
### Evaluation
First, the static scenario, and more specifically the quality of the available sensor information, is evaluated. For this purpose, fig. 13 includes a histogram of the full set differencing approach in fig. 13a and the UWB ranging residuals in fig. 13b.
As aforementioned, the full set satellite differencing yields a symmetric residual distribution. As described in section III.2 a), the resulting mixture distribution can be approximated as a GMM using a variable number \(C\) of Gaussian components associated with different weights \(\omega_{c}\), means \(\mu_{c}\) and variances \(\Sigma_{c}\):
\[P\sim\sum_{c=1}^{C}\omega_{c}\cdot\mathcal{N}(\mu_{c},\Sigma_{c}) \tag{21}\]
For the surveyed data, the underlying GMM is estimated using the Python library scikit-learn (Pedregosa et al., 2011) assuming \(C=4\) components. The respective values for the BSSD GMM components are summarized in table 5. The estimated standard deviations of each sensor system are then applied in the parametrization of the measurement update step of the respective sensor.
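A sketch of this estimation step could look as follows; the input `residuals` (the clock-corrected BSSD pseudorange residuals in metres) is assumed to be available as a one-dimensional array.

```python
# Hedged sketch of the GMM fit named in the text: scikit-learn with C = 4
# components; `residuals` is the assumed array of BSSD residuals [m].
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_residual_gmm(residuals, n_components=4):
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.asarray(residuals).reshape(-1, 1))
    # weights omega_c, means mu_c and variances Sigma_c as in table 5
    return gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()
```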
In addition, the accuracy of the UWB ranging measurements is evaluated as shown in fig. 13b. Unlike the theoretical assumptions formulated in section III.2 b), the data does not contain any NLOS measurements; therefore, this part of the mixture can be neglected. This can be attributed to the advantageous propagation conditions on the testbed: apart from vegetation, LOS propagation is present. The parameters of the normal distribution shown in fig. 13b are:
\[\varepsilon_{\text{UWB}}\sim\mathcal{N}(\mu_{\text{UWB}},\sigma_{\text{UWB}}^{2})\qquad\text{with}\qquad\mu_{\text{UWB}}=0.05\,\text{m}\quad\text{and}\quad\sigma_{\text{UWB}}=0.31\,\text{m}. \tag{22}\]
Figure 12: **(a)** Visualization of terrestrial pseudolites (red) and surveyed reference points: static points (black) and dynamic trajectory (blue); **(b)** Test vehicle with sensor setup. The antenna offset was corrected during post-processing using the known baseline between both systems.
In addition, a variety of measurement outliers occurred, accumulating to a total of approximately \(10\,\%\). Therefore, the mixture ratio for the mixture Likelihood distribution can empirically be set to \(\phi=0.9\).
Applying the presented hybrid Grid Filter with these parameter settings for GNSS and UWB observations to the surveyed data yields the qualitative positioning results shown in fig. 13(c). As summarized in table 6, a mean positioning error of \(\bar{\mathcal{Q}}=0.64\,\mathrm{m}\) and a median positioning error of \(\tilde{\mathcal{Q}}=0.62\,\mathrm{m}\) are achieved.
A summary of descriptive statistics for the error distributions of the static and dynamic scenario is given in table 6. This includes the average error \(\bar{\mathcal{Q}}\) (root mean square error), the median error \(\tilde{\mathcal{Q}}\) and the error variance \(\sigma_{\mathcal{Q}}^{2}\). In addition, the \(\sigma\) quantiles and the \(25,50,75\) percentiles of the error distributions are given.
Furthermore, the proposed method and parameter settings are also applied to the dynamic data set. A quantitative error analysis for both scenarios is given in fig. 14. In general, the hybrid Grid Filter applied to the dynamic scenario achieves lower positioning accuracy, accumulating a mean positioning error of \(\bar{\mathcal{Q}}=1.62\,\mathrm{m}\) and a median of \(\tilde{\mathcal{Q}}=0.84\,\mathrm{m}\), associated with a higher error variance. This can be attributed to the fact that the static case allows for a convergence on the correct position over a longer period of time due to the static motion model. The sequential motion update does not induce as much noise due to the constant position behavior of the system. This is further reflected in the comparably low variance of the positioning error with \(\sigma_{\mathcal{Q}}^{2}=0.21\,\mathrm{m}^{2}\). Apart from a single outlier, observable in fig. 13(c) (pink), which rapidly converges, the position results are comparably stable.
In contrast, the dynamic motion model allows for a smoothing of the trajectory, but does not always prevent jumps in the state space due to observation outliers. One approach to compensate for this would be a more restrictive parametrization of the motion model. However, a more loosely configured motion model facilitates recovery from biased measurements over multiple epochs. This dichotomy can be resolved by adaptive parametrization based on statistical evaluation of measurement residuals, as provided by integrity monitoring techniques such as in (Zhong and Groves, 2022a).
Figure 13: Histogram (blue) of the error distributions of the underlying sensor information: **(a)** Clock corrected GNSS pseudorange residuals and **(b)** UWB ranging residuals (kernel density estimation is given in red). **(c)** Orthophoto of qualitative positioning results for the static scenario.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \(\omega_{c}\) & \(\mu_{c}\) & \(\Sigma_{c}\) \\ \hline \(1\) & \(0.42\) & \(0.25\) & \(13.06\) \\ \(2\) & \(0.24\) & \(13.09\) & \(20.37\) \\ \(3\) & \(0.24\) & \(-12.61\) & \(21.05\) \\ \(4\) & \(0.01\) & \(-0.3\) & \(142.89\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Estimated GMM parameter for the BSSD residuals obtained from the static scenario.
## V Conclusion
In conclusion, we identify great potential in the unified integration of both GNSS observations and radio-based terrestrial relations using a hybrid Grid Filter. This is exemplified by a test run on an automotive test bed, in which GNSS, UWB and motion data are fused using an implementation of such a filter. The parametrization of the measurement model is done by applying the measurement residuals extracted from a static test, which was conducted within the same reception environment. Furthermore, 3D models are applied to account for NLOS reception. Our test shows the general feasibility of the approach, resulting in an RMSE of \(1.62\,\mathrm{m}\) in the surveyed dynamic case. The results, however, also indicate considerable potential for future work, including:
* Inclusion of additional sensor systems
* Application of additional geometric relations, e.g. terrestrial AoA or TDoA
* Fine-tuning of measurement models, integration of advanced 3DMA approaches as well as outlier detection for both GNSS and terrestrial observations
In addition to these algorithmic potentials, we also recognize different application scenarios to further evaluate the presented method, e.g. by focusing on indoor-outdoor localization scenarios or possible technology hand-over scenarios. Additionally, a data fusion with opportunistic signals within intelligent transportation systems can be addressed.
## Acknowledgements
The authors want to thank ZigPos GmbH for supporting the UWB data acquisition. This work has been funded by the German Federal Ministry for Digital and Transport (BMDV) following a resolution of the German Federal Parliament within the projects IDEA (FKZ: 19OI22020C).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Measure} & \(\bar{\mathcal{Q}}\) & \(\tilde{\mathcal{Q}}\) & \(\sigma_{\mathcal{Q}}^{2}\) & \multicolumn{3}{c}{Quantile \(\mathrm{[m]}\)} & \multicolumn{3}{c}{Percentile \(\mathrm{[m]}\)} \\ & \(\mathrm{[m]}\) & \(\mathrm{[m]}\) & \(\mathrm{[m]^{2}}\) & \(\sigma\) & \(2\sigma\) & \(3\sigma\) & \(25\) & \(50\) & \(75\) \\ \hline Static & \(0.64\) & \(0.62\) & \(0.21\) & \(0.82\) & \(1.28\) & \(1.72\) & \(0.28\) & \(0.62\) & \(0.91\) \\ Dynamic & \(1.62\) & \(0.84\) & \(6.56\) & \(1.31\) & \(4.82\) & \(15.60\) & \(0.54\) & \(0.86\) & \(1.64\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Quantitative evaluation of the static and dynamic scenario.
Figure 14: Statistical evaluation of the root squared error for both the static (blue) and dynamic (red) scenario: **a)** Violinplot and **b)** empirical cumulative density function (ECDF). |
2309.13998 | Linked shrinkage to improve estimation of interaction effects in
regression models | We address a classical problem in statistics: adding two-way interaction
terms to a regression model. As the covariate dimension increases
quadratically, we develop an estimator that adapts well to this increase, while
providing accurate estimates and appropriate inference. Existing strategies
overcome the dimensionality problem by only allowing interactions between
relevant main effects. Building on this philosophy, we implement a softer link
between the two types of effects using a local shrinkage model. We empirically
show that borrowing strength between the amount of shrinkage for main effects
and their interactions can strongly improve estimation of the regression
coefficients. Moreover, we evaluate the potential of the model for inference,
which is notoriously hard for selection strategies. Large-scale cohort data are
used to provide realistic illustrations and evaluations. Comparisons with other
methods are provided. The evaluation of variable importance is not trivial in
regression models with many interaction terms. Therefore, we derive a new
analytical formula for the Shapley value, which enables rapid assessment of
individual-specific variable importance scores and their uncertainties.
Finally, while not targeting for prediction, we do show that our models can be
very competitive to a more advanced machine learner, like random forest, even
for fairly large sample sizes. The implementation of our method in RStan is
fairly straightforward, allowing for adjustments to specific needs. | Mark A. van de Wiel, Matteo Amestoy, Jeroen Hoogland | 2023-09-25T10:03:39Z | http://arxiv.org/abs/2309.13998v1 | ###### Abstract
We address a classical problem in statistics: adding two-way interaction terms to a regression model. As the covariate dimension increases quadratically, we develop an estimator that adapts well to this increase, while providing accurate estimates and appropriate inference. Existing strategies overcome the dimensionality problem by only allowing interactions between relevant main effects. Building on this philosophy, we implement a softer link between the two types of effects using a local shrinkage model. We empirically show that borrowing strength between the amount of shrinkage for main effects and their interactions can strongly improve estimation of the regression coefficients. Moreover, we evaluate the potential of the model for inference, which is notoriously hard for selection strategies. Large-scale cohort data are used to provide realistic illustrations and evaluations. Comparisons with other methods are provided. The evaluation of variable importance is not trivial in regression models with many interaction terms. Therefore, we derive a new analytical formula for the Shapley value, which enables rapid assessment of individual-specific variable importance scores and their uncertainties. Finally, while not targeting for prediction, we do show that our models can be very competitive to a more advanced machine learner, like random forest, even for fairly large sample sizes. The implementation of our method in RStan is fairly straightforward, allowing for adjustments to specific needs.
**Linked shrinkage to improve estimation of interaction effects in regression models**
Mark van de Wiel\({}^{1}\), Matteo Amestoy\({}^{1}\), Jeroen Hoogland\({}^{1}\)
\({}^{1}\)_Dep Epidemiology and Data Science, Amsterdam University medical centers, Amsterdam_
**Keywords**: Regression, Interactions, Shrinkage, Variable importance, Shapley values
## 1 Introduction
Adding interactions to a regression model is a classical problem in statistics which may lead to interesting insights on the joint effects of covariates (Afshartous and Preston, 2011). It comes at a price though, as the number of interaction terms \(q\) increases quadratically with the number of covariates \(p\). While one may argue that in very small \(p\) settings the plausibility of each two-way interaction may be considered separately, such a strategy is infeasible or impractical for larger dimensions. At the other end of the spectrum, with \(p\) large - and hence \(q\) very large - compared to sample size \(n\), the hierarchical lasso (Bien et al., 2013) and variations thereof provide a computationally efficient sparse solution. The latter, however, focuses on selection, and does not come naturally with parameter inference. This leaves a gap for the middle spectrum, with \(p+q\) of a similar order of magnitude as \(n\), a setting which is fairly common in many biomedical or epidemiological studies. Our goal is to fill this gap using an interpretable solution that on one hand is able to deal with the large number of parameters, while on the other hand allows for inference. More specifically, our aim is threefold: 1) accurate estimation of parameters; 2) interpretation of variable importance scores in the context of our model; and 3) inference for those variable importance scores.
To achieve these goals, this study provides two novelties. First, a linked shrinkage model, which links local shrinkage of the interaction effects to that of the main effects. This extends
the Bayesian local shrinkage framework (Gelman et al., 2008). The latter provides flexible, differential shrinkage of small and large effects, which may benefit the accuracy of the parameter estimation in the same spirit as the adaptive lasso and non-negative garrotte do. In addition, we draw upon its good inferential properties (van de Wiel et al., 2023). The linked shrinkage model also includes a global shrinkage parameter for the interaction parameters to allow those to be weaker on average than the main effect parameters, thereby providing adaptivity. Second, we deduce a computationally efficient equation for Shapley values (Aas et al., 2021), which allows quantification and inference for those sample specific variable importance scores. Shapley values are popular in machine learning, and we argue that these scores can also be of great use for regression models with many interaction terms, as the presence of the latter impedes straightforward interpretation of the regression coefficients as variable importance scores (Afshartous and Preston, 2011).
As our problem is a classical one in statistics, a number of solutions already exist. Below we provide a list of reference methods that we compare our method to. Here, the first three do not focus on selection, the others do. First, simple ordinary least squares (OLS), which does not apply any shrinkage, and may therefore provide unstable estimates for some settings. Second, ridge regression with two tuned penalties (Wood, 2011), one for main effects, one for interactions: ridge2. Such global penalties likely improve the predictive abilities of the model, but do not well accommodate strong differences between parameter strengths within each of the two parameter sets. Third, Bayesian local shrinkage (Gelman et al., 2008) using a local Gaussian prior for each parameter, the standard deviations of which are endowed with a half-Cauchy prior: Bayloc. Gelman et al. (2008) argue that appropriate standardization (e.g. -1/2, 1/2 for binaries) "automatically applies more shrinkage to higher-order interactions". This is true, as such standardization causes two-way interactions (products) to be on a smaller scale than main effects. This does not, however, link the shrinkage of an interaction to that of its corresponding main effects nor adapt it globally to the data. Nevertheless, this model is an important basis for ours. Fourth, a two-step approach that only includes interactions of significant main effects: 2step. While popular in practice, it may render very unstable results, as the inclusion of interactions depends on a hard threshold for the main effects. Fifth, a lasso regression with only a penalty for the interaction terms: lassoint. This type of global shrinkage may suffer from the same drawbacks as ridge2. And sixth, hierarchical lasso for interactions (hlasso), a state-of-the-art methodology that applies the same reasoning as 2step, but formalizes it in one fitting procedure (Bien et al., 2013; Lim and Hastie, 2015; Du et al., 2021). It is mostly designed for computational efficiency to handle large \(p\). While it has proven its use for variable (and parameter) selection, formal inference is far from straightforward (Lim and Hastie, 2015), requiring strong assumptions on the underlying sparsity or extensive resampling.
We compare our linked shrinkage model, termed Bayint, to those methods as well as to several variations of Bayint which differ in how they encode the linked shrinkage. For this, we use the OLS estimates of a very large data set as a benchmark. We study the results for two outcomes (systolic blood pressure and cholesterol), and a mix of continuous, binary and categorical covariates. In addition, we provide several illustrations to support interpretation of the model and the covariates, including those with Shapley values and their uncertainties. While we do not focus on prediction, we perform a short comparison with random forest. This illustrates that even for large sample sizes Bayint can be very competitive to such a machine learner in terms of out-of-bag predictive performance. We end by discussing the implementation, scalability and potential extensions.
Approach
The model, called Bayint, combines ideas behind the hierarchical lasso, which considers interactions of strong main effects to be more important, with those of Bayesian local shrinkage, the hierarchical set-up of which allows a softer link between the interactions and main effects.
### The linked shrinkage model
For simplicity, we assume linear response \(Y_{i},i=1,\ldots,n\), but it can easily be reformulated in a generalized linear model or Cox regression context. For sample \(i\), the \(j\)th covariate is denoted by \(x_{ij},j=1,\ldots,p.\) Then, the proposed model is:
\[Y_{i} =\alpha+\sum_{j=1}^{p}\beta_{j}x_{ij}+\sum_{j,k:j\neq k}\beta_{jk }x_{ij}x_{ik}+\epsilon_{i} \tag{1}\] \[\alpha \sim N(0,10^{2})\] \[\beta_{j} \sim N(0,\sigma^{2}\tau_{j}^{2})\] \[\beta_{jk} \sim N(0,\sigma^{2}\tau_{j}\tau_{k}\tau_{int})\] \[\epsilon_{i} \sim N(0,\sigma^{2})\] \[\tau_{j} \sim C^{+}(0,1)\] \[\tau_{int} \sim U(0.01,1)\] \[\sigma^{2} \sim IG(1,0.001)\]
Several of the components in (1) are in line with conventional Bayesian modelling, including the half-Cauchy prior on the (relative) standard deviations \(\tau_{j}\)(Gelman et al., 2008). We add linked shrinkage to the model by including the product \(\tau_{j}\tau_{k}\) in the prior of \(\beta_{jk}\). This product renders a symmetric handling of strong and weak main effects (corresponding to large \(\tau_{j}\) and small \(\tau_{j}\), respectively), while remaining on the same scale as each of the components when these are equal. In addition, \(\tau_{int},0.01\leq\tau_{int}\leq 1\) is a shrinkage parameter shared by all interactions that models the prior belief that interaction parameters might, on average, be weaker than the main effect parameters. The lower bound avoids complete shrinkage to 0 of all interaction effects, as this may be undesirable in a sparse setting. Note that when categorical covariates are present, the summation over \(j,k\) in the regression model in (1) is adjusted such that interactions between their levels are excluded.
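To illustrate how the product \(\tau_{j}\tau_{k}\) couples the shrinkage of an interaction to that of its main effects, the following sketch draws once from the prior in (1). It is a didactic illustration, not part of the authors' RStan implementation, and all variable names are ours.

```python
# Hedged sketch: one draw from the linked-shrinkage prior (1).
import numpy as np

rng = np.random.default_rng(0)
p, sigma = 5, 1.0
tau = np.abs(rng.standard_cauchy(p))   # tau_j ~ C+(0, 1)
tau_int = rng.uniform(0.01, 1.0)       # global interaction shrinkage
beta = rng.normal(0.0, sigma * tau)    # Var(beta_j) = sigma^2 tau_j^2
beta_int = {                           # Var(beta_jk) = sigma^2 tau_j tau_k tau_int
    (j, k): rng.normal(0.0, sigma * np.sqrt(tau[j] * tau[k] * tau_int))
    for j in range(p) for k in range(j + 1, p)
}
```

A large local scale \(\tau_{j}\) thus widens the prior of every interaction involving covariate \(j\), which is exactly the soft link between the two types of effects described above.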
### Alternative linked shrinkage models
We discuss a few variations of Bayint (1) that may be relevant for other settings or foci. First, BayOint, which does not apply shrinkage to the main effects (non-informative Gaussian priors). This may be useful when one thinks of our model as (a simplification of) a general quadratic form, for which shrinkage of the main effects towards 0 is not necessarily logical. A potential disadvantage is that one looses the link between shrinkage of the two types of effects. Second, Bayintadd, which replaces \(\tau_{j}\tau_{k}\) by \((\tau_{j}^{2}+\tau_{k}^{2})/2\), which lets the strongest main effect dominate the shrinkage of the interaction. That is, if any of the two main effects is strong, this leads to relatively little shrinkage (large prior variance) of \(\beta_{jk}\).
If one is particularly interested in detecting interactions, a third alternative may be attractive: Bayint*, which sets \(\tau_{int}=1\). This model usually does not compete with Bayint
in terms of prediction accuracy, as the latter has a global shrinkage parameter \(\tau_{int}\) that can adapt to the interactions being weaker (on average) than the main effects for most problems. The downside of including \(\tau_{int}\), though, is that relatively strong interactions may be overshrunk, which is why Bayint* may be better at detecting those. Comparisons with these alternative models are provided further on.
## 3 Results
Here, we assess model (1) in a broad sense by considering parameter estimation and inference, interval coverage, interpretation and prediction. We focus on a realistic data setting for which we may assume (nearly) true values to be known. When appropriate, performance is compared to that of several competitors. These are discussed in the Introduction; more details on their implementations are provided in the Supplementary Material.
### Data
The main data we use throughout the manuscript is obtained from the Helius study (Snijder et al., 2017). We use this data set, because it reflects a fairly standard epidemiological study and contains a mix of binary, continuous and categorical covariates. We consider both systolic blood pressure (log scale; SBP) and cholesterol as response, and age, gender, ethnicity (5 levels; coded with 4 dummies), smoking (yes/no), packyears, coffee consumption (yes/no), BMI and 4 simulated standard normal noise variables as covariates, rendering \(p=14\) covariates. All two-way interactions are considered, except those between the 4 dummy variables representing the categorical covariate, rendering \(q=\binom{14}{2}-\binom{4}{2}=91-6=85\) interaction parameters.
The entire data set, referred to as the 'master set', consists of \(N=21,570\) samples. Therefore, OLS estimates based on the master set are very precise, and hence safely used as benchmarks. As a verification, we confirm that i) the estimated coefficients of the noise variables are indeed very close to zero; and ii) the coefficients estimated by (adaptive) lasso are very close to the OLS estimates (Suppl. Fig. 1 and 2).
Continuous covariates were centered and scaled, that is standardized. On the centering, we follow the advice of Afshartous and Preston (2011), as the centering (largely) removes collinearity between main effects and two-way interactions. Scaling is generally applied in shrinkage settings, and also helps to interpret the coefficients and estimation errors relative to one another. Binary covariates were (contrast) coded as -1, 1, which renders them standardized in the balanced setting. For interpretation, we prefer to use the same coding for all binaries. The categorical variable, ethnicity, was contrast-coded with levels -1,0,1.
### Parameter estimation
We evaluate parameter estimation of any \(\beta=\beta_{j}\) or \(\beta=\beta_{jk}\) by the root Mean Squared Error (rMSE), defined as
\[\text{rMSE}=\sqrt{\frac{1}{B}\sum_{b=1}^{B}(\hat{\beta}^{(b)}-\beta)^{2}},\]
with \(\hat{\beta}^{(b)}\) the estimator of \(\beta\) for the \(b\)th training set. We used \(B=25\) (nearly non-overlapping) training sets of size \(n=1,000\), and set the 'true' \(\beta\) to the OLS estimate from the large master
data set. For Bayesian methods, the posterior mean was used as a point estimate for \(\beta\). Figures 1 and 2 compare the results of Bayint with other methods (see Introduction) for cholesterol and SBP as outcome, respectively. The bold line demarcates the main effects and interactions; the thin lines separate the strong effects from the weaker ones, as defined by significance in the master set (\(p<0.01\)).
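In code, the rMSE criterion above is a one-liner; the sketch below assumes `beta_hat` collects the \(B=25\) point estimates of a single coefficient across training sets and `beta_true` is its master-set OLS value.

```python
# Hedged sketch of the rMSE criterion for one coefficient.
import numpy as np

def rmse(beta_hat, beta_true):
    return np.sqrt(np.mean((np.asarray(beta_hat) - beta_true) ** 2))
```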
Overall, we observe that Bayint shows good performance. The upper displays show that Bayint and Bayloc perform better than OLS across the entire range. Bayint is competitive to Bayloc for the strongest interactions, and superior for most other parameters. The latter situation reverses for the comparison with ridge2, which is competitive to Bayint for most parameters, but not for the important sub-group of strongest interactions, probably due to over-penalization by the global interaction penalty parameter. The latter conclusion is similar for the comparison with lassoint, although here the differences with Bayint are fairly small for SBP (Fig 2). For the latter, the hierarchical lasso, hlasso, is fairly competitive to Bayint on estimating interactions, but lags behind for estimating main effects. Here, we should note that hlasso is more tailored to covariate screening for large \(p\), and less so to estimation. For cholesterol (Fig 1) there is an additional small edge for Bayint for the estimation of strong interactions. Finally, 2step performs inferior to Bayint for a substantial number of main effects and interactions, and not superior for the others.
Supplementary Figure 3 connects the _true_ main effects and interactions (estimated from the master set), to provide insight on why linked shrinkage has a benefit for both outcomes. Indeed, we observe that strong interactions tend to link relatively frequently to strong main effects, and that this tendency is somewhat stronger for cholesterol, explaining the slightly larger benefit of linked shrinkage for this outcome compared to SBP.
Finally, we provide a short comparison of Bayint with two aforementioned alternatives, BayOint and Bayintadd, for the cholesterol model only. From Supp. Fig. 4 we observe that particularly Bayint and Bayintadd are very competitive, with the latter slightly worse for the very non-significant interactions. BayOint may pick up the strong interactions slightly better, but seems somewhat inferior for main effects and less important interactions.
### Inference and interpretation
Here, we discuss several techniques and visualisations to perform inference and interpret results from the Bayint model. We focus on the model that uses cholesterol as outcome. For inference, we limit the comparison to OLS, given its ubiquitous use and well-known inferential properties, which are less well established for most of the other methods.
#### 3.3.1 Parameters
We first briefly discuss inference on the parameters, before extending it to more general variable importance metrics. Our model renders credible intervals that may be used for this purpose. We previously showed the coverage of Bayesian local shrinkage - on which our shrinkage model is based - to be rather good (van de Wiel et al., 2023) in low dimensional settings, although this will depend on the \(p:n\) ratio and the amount of collinearity. Here, we focus on detection, but refrain from a formal comparison with OLS or other frequentist methods (in terms of fixing type I error), given the different perspectives on (multiple) testing.
Supplementary Figure 5 plots the detections by OLS (criteria: \(p\leq 0.05,p\leq 0.01\)) against those by Bayint (criterion: 95% credible interval does not contain 0) and Bayint* (which fixes \(\tau_{int}^{2}=1\)
in (1); same criterion). As expected, OLS produces many detections at \(p<0.05\), also for effects that are (very) small in the master set (top figure). For \(p<0.01\), OLS seems to align better with the results of Bayint (middle figure), in particular for the weak effects and strong interactions, and less so for two strong main effects that are more frequently detected by Bayint. The bottom figure shows that Bayint* indeed detects strong interactions more often than Bayint, at the cost of detecting two main effects (which are involved in those interactions).
Additionally, to be less dependent on the choice of the cut-off, we plot the sensitivities of the methods for specificities ranging from 90% to 98%. For that, we define positives as those significant in the master set at cut-off \(p\leq 0.05/99\) (Bonferroni correction) and negatives as those that either correspond to a noise covariate or are non-significant at cut-off 0.05. Given the sheer size of the master set the latter assures that such effects are either very small or completely absent. This defines 12 positives, and 79 negatives; the remaining 99 - 12 - 79 = 8 effects are indeterminate, which are therefore not used for calculating the sensitivities and specificities. Supplementary Figure 6 shows these, averaged over 500 subsets, where the curves are parameterized by the thresholds used for detection, either on \(p\)-values (OLS) or on coverage percentage of the credible interval (Bayint and Bayint*). It clearly shows that for this data set the latter two are better able to separate the true effects from the false ones, given the higher sensitivities across the specificity range.
#### 3.3.2 Variable importance
The credible intervals are an important tool to assess the relevance of interaction terms. Interpretation and inference for the main effect parameters is hampered though by the presence of those interactions, as the effect of one unit change of a covariate depends on the values of the other covariates. Therefore, technically, \(\beta_{j}=0\) only means that for a (fictive) person with average values for all other covariates (given centering is applied), covariate \(j\) has no effect. That is, it only quantifies a _conditional_ main effect. Afshartous and Preston (2011) propose several useful alternatives, such as determining the 'range of significance'. For this, one plots the confidence/credible intervals for \(\beta_{j}+\beta_{jk}x_{ik}\) - the effect of one unit change of \(x_{ij}\) when interacting with one covariate \(x_{ik}\) - against \(x_{ik}\). Alternatively, one may compute \(E_{ij}=\beta_{j}+\sum_{k\neq j}\beta_{jk}x_{ik}\), i.e. a 'personalized unit change effect' which accounts for all interactions. Our MCMC samples easily provide the posteriors of \(E_{ij}\), allowing to plot its uncertainty as well. A hybrid of the latter two solutions is a plot of \(E_{ij}\) against \(x_{ik}\) (when continuous) or for color-coded levels of \(x_{ik}\) to see whether one unit change of \(x_{ij}\) (say age or BMI) has a different effect on the outcome, e.g. for \(x_{ik}=-1,1\) (say female/male), while accounting for the other interacting covariates as well. Supplementary Figure 7 plots \(E_{ij}\) and its uncertainty for 100 random test individuals (Bayint model fitted on 1,000 training samples), with \(x_{ij}\) and \(x_{ik}\) representing age and gender, respectively. We clearly observe a different effect of age increase between genders, but also within gender due to interactions of age with other covariates.
Alternatively, Shapley values (Aas et al., 2021) may be considered. A Shapley value \(\phi_{ij}\) quantifies the average contribution of the \(j\)th feature to the prediction of the \(i\)th sample, fixing \(x_{ij}=x_{ij}^{*}\). Here, 'average' refers to a weighted average over subsets \(\mathcal{S}\) that contains all other covariates \((x_{ik})_{k\in\mathcal{S}}\) that actively impact the prediction by fixing \(x_{ik}=x_{ik}^{*}\) (called the 'players'). Predictions are marginalized over the complement, \(\mathcal{S}^{\prime}\), which defines the non-players
\((x_{i\ell})_{\ell\in\mathcal{S}^{\prime}}\), which are considered random. Here, the weights are chosen such that different sizes of \(\mathcal{S}\) have an equal impact on the Shapley value. A formal definition is given in the Supplementary Material. Shapley values are popular in machine learning nowadays, because they uniquely possess several nice properties: efficiency, symmetry, dummy player and linearity (Aas et al., 2021). Obtaining its exact value is usually computationally very demanding, let alone computing uncertainties. For our model, however, it is feasible to compute Shapley values and their uncertainties efficiently, if one is willing to use the common convention that the marginalization ignores the dependency between the players and the non-players (Lundberg and Lee, 2017), an approach referred to as 'interventional Shapley value' (Aas et al., 2021). For a linear regression model with two-way interactions and centered covariates it equals
\[\phi_{ij}=\beta_{j}x_{ij}^{*}+\frac{1}{2}\bigg{(}\sum_{k:k\neq j}\beta_{jk}x_{ ij}^{*}x_{ik}^{*}-\sum_{k:k\neq j}\beta_{jk}E[x_{ij}x_{ik}]\bigg{)}, \tag{2}\]
when the \(j\)th covariate is continuous or binary. A proof is provided in the Supplementary Material, which also includes expressions for the non-centered setting and categorical covariates. Note that (the posterior of) \(\phi_{ij}\) is straightforward to compute after estimating \(E[x_{ij}x_{ik}]\) by the sample covariance. Again, we illustrate results for 100 random test samples and the Bayint model trained on 1,000 random training samples. Figure 3 shows Shapley values and their credible intervals for age and Noise.1. The latter is a useful negative control as we observe that, as desired, all credible intervals cover 0. Note that centering of the covariates implies that Shapley values are expected to center around 0. Yet, we observe that age is an important covariate for the majority of samples as most intervals do not cover 0. Supplementary Figures 9 and 10 provide empirical evidence that these intervals, as computed from the output of Bayint, provide appropriate coverage for the majority of covariates and individuals. Clearly, \(\phi_{ij}=\phi_{ij}^{\text{main}}+\phi_{ij}^{\text{int}}\), which denote the contributions of the main effect and that of all interactions with covariate \(j\). Supplementary Figure 11 displays the Shapley values (posterior means), and its two contributors, for all covariates. Alternatively, Fig. 4 shows the conventional variable importance derived from Shapley values, \(I_{j}=1/n\sum_{i=1}^{n}|\phi_{ij}|\), and analogously defined, \(I_{j}^{\text{main}}\) and \(I_{j}^{\text{int}}\). Note that, in general, \(I_{j}\neq I_{j}^{\text{main}}+I_{j}^{\text{int}}\). Still, plotting both \(I_{j}^{\text{main}}\) and \(I_{j}^{\text{int}}\) renders insight on how relevant the main effects and interactions are for each covariate. While the main effects show the strongest importance scores for most covariates (except BMI), we do observe that interactions are also relevant for a fair share of the covariates.
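Formula (2) vectorizes directly. The sketch below is ours, not the authors' code: `beta` holds the main effects, `B` is a symmetric \(p\times p\) matrix with \(B[j,k]=\beta_{jk}\) and zero diagonal, and `S[j,k]` estimates \(E[x_{j}x_{k}]\) by the sample covariance (valid as covariates are centered).

```python
# Hedged sketch of the interventional Shapley values in eq. (2) for one
# sample x_star; returns phi_ij for all covariates j at once.
import numpy as np

def shapley_values(beta, B, S, x_star):
    inter = (B * np.outer(x_star, x_star)).sum(axis=1)  # sum_k beta_jk x*_j x*_k
    marg = (B * S).sum(axis=1)                          # sum_k beta_jk E[x_j x_k]
    return beta * x_star + 0.5 * (inter - marg)
```

Evaluating this over posterior draws of \((\beta_{j},\beta_{jk})\) gives the credible intervals shown in Figure 3, and averaging \(|\phi_{ij}|\) over samples gives the importance score \(I_{j}\).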
### Model assessment by \(R^{2}\)
Although our focus lies on parameter estimation, it is useful to compare the overall fit of Bayint with those of i) a basic regression model without interactions; and ii) with more advanced machine learners, such as the random forest. The first comparison allows to judge the additive value of the interactions for improving test sample fit. The second one is relevant, because the random forest holds the promise to capture interactions well and to provide adequate predictions, so it provides a useful benchmark. As \(p=14\) is small relative to \(n=1,000\), we simply used OLS for the main effects model; random forest was either fit using the defaults of the rfsrc function in the randomForestSRC package (RF) or with hyperparameters (mtry: number of features considered per split and nodesize: minimum node size) tuned for optimal predictive performance using the tune.rfsrc function (RFtune).
For the comparison, we compute for each model and for all \(b=1,\ldots,25\) training sets the out-of-bag coefficient of determination, \(R_{b}^{2}=1-\sum_{i\in\mathcal{T}_{b}}(y_{i}-\hat{y}_{i,b})^{2}/\sum_{i\in\mathcal{T}_{b}}(y_{i}-\bar{y})^{2}\), with \(\mathcal{T}_{b}\) the set of all out-of-bag samples for training \(b\), and \(\hat{y}_{i,b}\) the prediction for test sample \(i\) by the \(b\)th model. Figure 5 shows the results. In-bag predictive performance is shown as well, to illustrate potential overfitting.
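A sketch of this criterion for one split is given below; the text leaves the averaging set of \(\bar{y}\) implicit, so taking it over the same out-of-bag samples is our assumption.

```python
# Hedged sketch of the out-of-bag R^2_b for one training/test split.
import numpy as np

def r2_oob(y_oob, y_pred):
    y_bar = np.mean(y_oob)  # assumption: mean over the out-of-bag samples
    return 1.0 - np.sum((y_oob - y_pred) ** 2) / np.sum((y_oob - y_bar) ** 2)
```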
For cholesterol as outcome, we observe that Bayint provides a substantial gain in terms of \(R^{2}\) as compared to the main effects only model. Out-of-bag predictive performance is somewhat better than that of RF, and marginally better than that of RFtune. The latter two overfit substantially, as observed from comparing the in-bag and out-of-bag performances. For SBP as outcome, differences between Bayint and the main effects model are small, whereas both beat RF and RFtune by a fair margin. Note that the relative performance of the random forest improves for \(n=5,000\) (Supplementary Figure 8), rendering it competitive to Bayint in terms of prediction.
## 4 Implementation and data availability
Our linked shrinkage model was implemented in RStan (v 2.21.8) (Stan Development Team, 2022). We chose to use a general purpose sampler for several reasons. First, it allows the user to adjust the model without much extra effort in terms of fitting or inference. This includes variations on modelling the shrinkage, as illustrated, but also adjusting the likelihood to allow binary or survival outcome, as RStan accommodates these as well. We provide an example for logistic regression in the code. Second, RStan provides several diagnostic tools, such as trace plots, to check the convergence of the MCMC sampler. Our scripts are available at [https://github.com/markvdwiel/ThinkInteractions/](https://github.com/markvdwiel/ThinkInteractions/), which also contains a synthetic version of the data set. The covariates in the synthetic data are generated by imputation as described in van de Wiel et al. (2023). Then, responses (Cholesterol and SBP) are generated by drawing from normal distributions with means equal to the OLS (as fitted on the master set) predictions based on the synthetic covariates, and error variance equal to the residual error variance of the OLS model. We verified that the synthetic set renders qualitatively similar results on the regression models as those presented here (cf. Figure 1 and Supp. Fig. 12; Figure 2 and Supp. Fig. 13). Finally, as an indication: running time for our example data sets of \(n=1,000,p=14,q=85\) is around 3 min for 25,000 MCMC samples using a single core from a PC with a 1.30 GHz processor and 16 GB RAM.
## 5 Discussion
We demonstrated the potential of linked shrinkage for improving parameter estimation in fairly large regression models that include all two-way interactions. Naturally, the benefit depends on the data set and the relevance of those interactions, in particular in connection to the main effects. A limitation of our main model, Bayint, is that it is not good at discovering interactions for which none of the main effects are relevant. Therefore, we offer an alternative, BayOint, which suits such a setting better while still providing a parsimonious model for the shrinkage of interaction effects. Moreover, if one knows about such an interaction, it can be taken apart, in terms of shrinkage. Another limitation might be the linear scale of the covariates in the model. As regression comes with many tools for model diagnostics, such a model mis-specification can be diagnosed. In principle, it is straightforward to extend our
model by adding non-linear transformations (log, quadratic) of a few covariates, each with their own local penalty. Optimizing the model by including such covariate transformations after viewing the data may come at the cost of invalid inference. Possibly, for large \(n\), the cost of this form of data snooping may be limited when only the main effects are considered, but this requires further study. In addition, note also that if the model is very wrong its predictive performance is likely compromised, as compared to more flexible machine learners. This was not the case for our data, at least in comparison to the random forest.
A natural extension is the inclusion of higher-order interactions. In principle, this is easily achieved with our code, and may be a viable option when only few covariates are present. Note, however, that the number of three-way interactions increases cubically with the number of covariates, and one may argue that regression models with higher-order interactions are hardly any easier to interpret than many machine learners.
As mentioned, we coded our method in RStan, because its flexibility allows the user to extend the framework to one's own needs, such as the various illustrated shrinkage links and non-continuous outcomes (survival, binary). A potential disadvantage of using such a general purpose sampler is that it is likely too slow for large \(p\), large \(n\) settings. RStan provides variational Bayes approximations, but we experienced that both the mean-field and the (Gaussian) full-rank approximations do not provide satisfactory results (compared to sampling) for our model. Alternatively, the RStan manual suggests to use a thin QR-decomposition of the design matrix for large scale regression problems, but this did not speed up computations in our setting, probably due to the non-exchangeable prior for the regression coefficients. Dedicated sampling or variational Bayes techniques such as proposed for the horseshoe prior in Makalic and Schmidt (2015) and Busatto and van de Wiel (2023) may provide more scalable alternatives, but this requires more research.
While certainly simpler than many machine learners, the interpretation of a regression model with many two-way interactions is not trivial (Afshartous and Preston, 2011). Inference for parameters and variable importance scores aids such interpretation. Therefore, we showed that our method provides a good basis for inference, which we extend to variable importance scores, in particular the personalized unit change and the Shapley value. For the interventional version of the latter, we derived the computationally efficient formula (2). If one prefers to account for dependency when marginalizing, approaches as in (Aas et al., 2021) may be explored, but likely at a high computational cost. For other regressions, e.g. logistic or Cox, formula (2) is only valid if one evaluates the prediction on the level of the linear predictor. This may be reasonable as the regression coefficients are also on that scale.
While it is straightforward to define global variable importance scores from the personalized ones, including Shapley, it is less obvious how to perform inference for these, both from a technical and philosophical perspective. As for the first: one may average absolute or squared scores, but the result lacks a natural null. As for the second: in such models, the importance of a covariate depends on the values of the other ones, and is hence individual specific. Possibly, an informal argument, such as: the credible intervals should not contain '0' for at least 10% of the individuals, may be reasonable in practice, but this needs further research.
Finally, we illustrated that the addition of two-way interactions, as in Bayint, may improve predictive performance with respect to a main effect only regression model. In fact, Bayint can be very competitive with more advanced machine learners, such as random forest. Of course, such comparative results depend strongly on the data, sample size and true complexity of the associations between covariates and response. If prediction is an important aim of the study, possibly alongside interpretation, we recommend comparing the predictive performance
of Bayint with more flexible machine learners. In the eventual case of inferior predictive performance of the former, one should balance this loss against the improved interpretability.
All-in-all, Bayint is an attractive alternative to existing strategies to handle interactions in epidemiological and/or clinical studies, as its linked local shrinkage can improve parameter accuracy, provides appropriate inference and interpretation, and may compete well with less interpretable machine learners in terms of prediction.
## 6 Acknowledgements
The Amsterdam University Medical Centres and the Public Health Service of Amsterdam (GGD Amsterdam) provided core financial support for HELIUS. The HELIUS study is also funded by research grants of the Dutch Heart Foundation (Hartstichting; grant no. 2010T084), the Netherlands Organization for Health Research and Development (ZonMw; grant no. 200500003), the European Integration Fund (EIF; grant no. 2013EIF013) and the European Union (Seventh Framework Programme, FP-7; grant no. 278901).
|
2309.13740 | Complex Vasquez invariant | In 1970 Vasquez proved that to every finite group $G$ we can assign a natural
number $n(G)$ with the property that every flat manifold with holonomy $G$ is a
total space of a fiber bundle, with the fiber being a flat torus and the base
space -- a flat manifold of dimension less than or equal to $n(G)$. In
particular, this means that the characteristic algebra of any flat manifold
with holonomy $G$ vanishes in dimension greater than $n(G)$. We define a
complex analog of Vasquez invariant, in which finite groups are considered as
holonomy groups of compact flat K\"ahler manifolds. | Anna Gąsior, Rafał Lutowski | 2023-09-24T20:09:25Z | http://arxiv.org/abs/2309.13740v2 | # Complex Vasquez invariant
###### Abstract
In 1970 Vasquez proved that to every finite group \(G\) we can assign a natural number \(n(G)\) with the property that every flat manifold with holonomy \(G\) is a total space of a fiber bundle, with the fiber being a flat torus and the base space - a flat manifold of dimension less than or equal to \(n(G)\). In particular, this means that the characteristic algebra of any flat manifold with holonomy \(G\) vanishes in dimension greater than \(n(G)\). We define a complex analog of Vasquez invariant, in which finite groups are considered as holonomy groups of compact flat Kahler manifolds.
Footnote †: _Keywords and phrases._ Flat manifolds, Kähler manifolds, hyperelliptic manifolds, Chern classes, Vasquez number
## 1 Introduction
Let \(X\) be an \(n\)-dimensional _flat manifold_, i.e. a closed connected Riemannian manifold with vanishing sectional curvature. It is well known (see [14]) that a torsion-free group \(\Gamma:=\pi_{1}(X)\) defines a short exact sequence
\[0\longrightarrow L\longrightarrow\Gamma\stackrel{{ p}}{{ \longrightarrow}}G\longrightarrow 1, \tag{1}\]
where the free abelian group \(L\), of rank \(n\), is the unique maximal abelian normal subgroup of \(\Gamma\) and \(G\) is a finite group. We shall call \(\Gamma\) a _Bieberbach group_ of dimension \(n\) with the _holonomy group_\(G\). By conjugation in \(\Gamma\), the above extension defines a \(G\)-lattice structure on \(L\). The corresponding representation \(h\colon G\to\operatorname{GL}(L)\) is called the _integral holonomy representation_ of \(\Gamma\).
In [15], Vasquez (see also [5] and [13]) assigned to every finite group \(G\) a natural number \(n(G)\). He proved that if \(\Gamma\) is a Bieberbach group with holonomy \(G\), as in (1), then there exists a normal subgroup \(N\) of \(\Gamma\) such that \(N\subset L\) and the quotient group \(\Gamma/N\) is a Bieberbach group of dimension less than or equal to \(n(G)\).
Vasquez showed that \(n(G)=1\) for every cyclic group \(G\) of prime order. In [5] Cliff and Weiss proved that if \(G\) is a \(p\)-group, then \(n(G)=\sum_{H\in\mathcal{X}}[G:H]\), where \(\mathcal{X}\) is a set of representatives of the conjugacy classes of subgroups of \(G\) of prime order. Moreover \(n(A_{5})=16\), see [6]. The articles [13] and [8] give the full classification of finite groups with Vasquez invariant equal to \(1\) and \(2\), respectively. In [4] the authors consider the Vasquez invariant for elementary abelian groups.
The goal of the paper is a description of the complex analog of the Vasquez invariant.
A _compact flat Kahler_ or _generalized hyperelliptic_ manifold \(X\) of dimension \(n\) is defined
as a quotient of a compact complex \(n\)-torus by a free action of a finite group. The fundamental group \(\Gamma=\pi_{1}(X)\) is a Bieberbach group of dimension \(2n\). In particular, \(\Gamma\) may be realized as a subgroup of \(U(n)\ltimes\mathbb{C}^{n}\subset O(2n)\ltimes\mathbb{R}^{2n}\), see [14, Proposition 7.1]. The classes of generalized hyperelliptic and aspherical Kahler manifolds coincide (see [2, Theorem 1]). We will follow [1] and call fundamental groups of compact Kahler flat manifolds _aspherical Kahler groups_. It is well known that any finite group is a holonomy group of a generalized hyperelliptic manifold (see Remark 3.4).
_Remark_.: Unless stated otherwise, whenever we say about the dimension of an aspherical Kahler group, we mean its complex dimension.
**Theorem 1**.: _Let \(G\) be a finite group. Then there is an integer \(n_{\mathbb{C}}(G)\) such that if \(\Gamma\) is an aspherical Kahler group of dimension \(n\) and with holonomy group \(G\), then the maximal abelian subgroup \(L\subset\Gamma\) contains a subgroup \(M\), normal in \(\Gamma\), such that \(\Gamma/M\) is an aspherical Kahler group of dimension less than or equal to \(n_{\mathbb{C}}(G)\)._
Although it is not as direct as in the real case, using the above theorem we can also formulate a result concerning characteristic classes of compact flat Kahler manifolds.
**Theorem 2**.: _Let \(X\) be a generalized hyperelliptic manifold with the holonomy group \(G\). Then for every integer \(i>n_{\mathbb{C}}(G)\) the \(i\)-th Chern class of \(X\) is zero._
The structure of the paper is as follows. In Section 2 we provide a modified proof of the original Vasquez result, which was suggested by Cliff and Weiss in [5]. This gives us a better estimate of the invariant and allows us to understand the idea standing behind its complex analog. The next section deals with essentially complex modules. Although the topic was presented in [11], we give a slightly different approach here, suited for the proof of Theorem 1, presented along with some examples in Section 4. The algebraic approach from this section is necessary, but not sufficient if one would like to consider characteristic classes of holomorphic tangent bundles. Hence in Section 5, we give a criterion for a smooth map that arises from algebraic construction to be a holomorphic one. This condition may require the complex structure to be changed. Section 6 describes how to make this change while keeping the holomorphic tangent bundle unchanged (up to isomorphism of course). These results are then applied to show Theorem 2 in the last section of the article.
## 2 Modified Vasquez construction
Let \(\Gamma\) be a Bieberbach group defined by the short exact sequence (1). A cohomology class \(\alpha\in H^{2}(G,L)\) that corresponds to this extension is called _special_. In fact, a faithful \(G\)-lattice and a special element make all that is needed to define a Bieberbach group. Hence, we can formulate Vasquez's theorem in the module-theoretic setting:
**Theorem 2.1** ([15, Theorem 3.6]).: _For any finite group \(G\) there exists a number \(n(G)\) such that if \(L\) is a faithful \(G\)-lattice with a special element \(\alpha\) then there exists a \(\mathbb{Z}\)-pure submodule \(N\) of \(L\) such that:_
1. \(\operatorname{rk}_{\mathbb{Z}}(L/N)\leq n(G)\)_,_
2. \(\nu_{*}(\alpha)\) _is special,_
_where \(\nu\colon L\to L/N\) is the natural homomorphism._
_Remark 2.2_.: We consider quotient \(G\)-lattices which are free abelian groups. In other words, we demand from sublattices to be \(\mathbb{Z}\)-pure submodules (see [7, (16.15)]). It is easy to check that an intersection of a finite number of \(\mathbb{Z}\)-pure sublattices of a given lattice is again \(\mathbb{Z}\)-pure sublattice.
In his proof, Vasquez focused on giving an upper bound on \(n(G)\). In [5] Cliff and Weiss, by using different methods, achieved a better estimate of it. They remarked that a slight modification of the proof of Vasquez could be used to get their result. Since this is of much importance to our further considerations, we give a proof of Vasquez's theorem using the hint given by Cliff and Weiss.
Before giving the proof, let us note that estimating \(n(G)\) from above is not very precise. It is natural to demand from \(n(G)\) to be as small as possible. Hence we give the following definition of the Vasquez number.
**Definition 2.3**.: For any finite group \(G\) a _Vasquez number_\(n(G)\) is the smallest natural number which satisfies the conclusion of Theorem 2.1.
Proof of Theorem 2.1.: Recall that \(\alpha\in H^{2}(G,L)\) is special if and only if its restriction to every nontrivial subgroup of \(G\) is non-zero. Since the restriction on chains of subgroups of \(G\) is transitive, using in addition the standard action of \(G\) on those restrictions (see [14, page 65], [3, page 168]) one easily gets that \(\alpha\) is special if and only if
\[\forall_{H\in\mathcal{X}}\operatorname{res}_{H}\alpha\neq 0,\]
where \(\mathcal{X}\) is a set of representatives of conjugacy classes of subgroups of \(G\) of prime order.
Now take \(H\in\mathcal{X}\). Since \(\operatorname{res}_{H}\alpha\neq 0\), we get that as an \(H\)-module, \(L\) has a direct summand \(L_{0}\) of rank \(1\), which admits the trivial \(H\)-action. Hence we have
\[\operatorname{res}_{H}L=L_{0}\oplus L^{\prime}_{0}. \tag{2}\]
Furthermore, this decomposition can be taken in such a way that if
\[\operatorname{res}_{H}\alpha=\alpha_{0}+\alpha^{\prime}_{0}\in H^{2}(H,L_{0}) \oplus H^{2}(H,L^{\prime}_{0})\]
then \(\alpha_{0}\neq 0\). Now, \(L^{\prime}_{0}\) may not be a \(G\)-submodule of \(L\), but
\[L^{\prime}_{H}:=\cap_{g\in G}gL^{\prime}_{0}\]
is one. Moreover, since \(gL^{\prime}_{0}\) is a \(\mathbb{Z}\)-pure submodule of the free \(\mathbb{Z}\)-module \(L\) (see [7, Theorem 16.16]), by Remark 2.2, \(L/L^{\prime}_{H}\) is a free abelian group. By the first isomorphism theorem for modules, there exists a unique map \(r\), such that the following diagram commutes
\[\begin{array}{ccc}L&\xrightarrow{\;p^{(H)}\;}&L/L^{\prime}_{H}\\ &\searrow_{\pi}&\downarrow{\scriptstyle r}\\ &&L_{0}\end{array}\tag{3}\]
where \(p^{(H)}\colon L\to L/L^{\prime}_{H}\) is the natural mapping and \(\pi\) is the projection corresponding to decomposition (2). Hence, if
\[\operatorname{res}_{H}p^{(H)}_{*}(\alpha)=(\operatorname{res}_{H}p^{(H)})_{*}( \operatorname{res}_{H}\alpha)=0,\]
then
\[\pi_{*}(\operatorname{res}_{H}\alpha)=\alpha_{0}=0,\]
which contradicts our assumptions.
In the Frobenius reciprocity
\[\operatorname{Hom}_{H}(\operatorname{res}_{H}L,L_{0})\cong\operatorname{Hom}_{G }(L,\operatorname{ind}_{H}^{G}L_{0}),\]
the kernel of the map corresponding to \(\pi\) is equal to \(L^{\prime}_{H}\). Hence we get
\[\operatorname{rk}_{\mathbb{Z}}(L/L^{\prime}_{H})\leq\operatorname{rk}_{ \mathbb{Z}}\operatorname{ind}_{H}^{G}L_{0}=[G:H].\]
Summarizing, for a group \(H\in\mathcal{X}\) we have constructed a \(G\)-sublattice \(L^{\prime}_{H}\) of \(L\) such that \(L/L^{\prime}_{H}\) is again a \(G\)-lattice of \(\mathbb{Z}\)-rank bounded by the index of \(H\) in \(G\) and that the restriction to \(H\) of the class induced by \(\alpha\) is nonzero.
Let \(N:=\cap_{H\in\mathcal{X}}L^{\prime}_{H}\). This is a \(G\)-sublattice of \(L\) and since \(\mathcal{X}\) is finite, by Remark 2.2, \(L/N\) is torsion-free. Let \(\nu\colon L\to L/N\) be the natural homomorphism. Making diagrams similar to (3) we find that \(\nu_{*}(\alpha)\) is special. Moreover, we have
\[\operatorname{rk}_{\mathbb{Z}}(L/N)\leq\sum_{H\in\mathcal{X}}\operatorname{rk }_{\mathbb{Z}}(L/L^{\prime}_{H})\leq\sum_{H\in\mathcal{X}}[G:H].\]
Therefore, we obtain an estimate of \(n(G)\) given in [5, Corollary on page 125]:
\[n(G)\leq\sum_{H\in\mathcal{X}}[G:H].\]
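As a quick illustration of this estimate, let \(G\) be a cyclic group of prime order. Then \(\mathcal{X}=\{G\}\) consists of the group itself and the bound reads
\[n(G)\leq[G:G]=1,\]
in accordance with the equality \(n(G)=1\) obtained by Vasquez for cyclic groups of prime order.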
## 3 Essentially complex modules
Throughout this section \(G\) will denote a finite group, and \(K\) - the ring \(\mathbb{Z}\), or the field \(\mathbb{Q}\) or \(\mathbb{R}\).
_Remark 3.1_.: Since extension of the ring/field of scalars will be used frequently throughout the paper, for any field \(F\) containing \(K\) we introduce the following notation:
\[L^{F}:=F\otimes_{K}L.\]
**Definition 3.2**.: Let \(V\) be a \(KG\)-module. A _complex structure_ on \(V\) is a map \(J\in\operatorname{End}_{\mathbb{R}G}(V^{\mathbb{R}})\) such that \(J^{2}=-\operatorname{id}\). A module admitting a complex structure is called _essentially complex_.
By Johnson, we have the following criterion for a module to be essentially complex in the case \(K=\mathbb{R}\).
**Theorem 3.3** ([11, Proposition 3.1]).: _Let \(V\) be an \(\mathbb{R}G\)-module. The following are equivalent:_
1. \(V\) _is essentially complex._
2. _Every homogeneous component of_ \(V\) _is essentially complex._
3. _Every absolutely irreducible component of_ \(V\) _occurs with even multiplicity._
_Remark 3.4_.: Let \(G\) be a finite group. By the Auslander and Kuranishi theorem, \(G\) is a holonomy group of some Bieberbach group (see [3, Theorem III.5.2]). Hence there exists a faithful \(G\)-lattice \(L\) and a special element \(\alpha\in H^{2}(G,L)\). Then \(L\oplus L\) is also a faithful \(G\)-lattice, which has a special element, for example \(\alpha+0\in H^{2}(G,L)\oplus H^{2}(G,L)\). Hence, by [14, Proposition 7.1] every finite group is a holonomy group of an aspherical Kahler group.
Let \(\operatorname{Irr}(G)\) be the set of complex irreducible characters of \(G\) and \(\chi\in\operatorname{Irr}(G)\). The Frobenius-Schur indicator
\[\nu_{2}(\chi)=\frac{1}{|G|}\sum_{g\in G}\chi(g^{2}),\]
which takes values in \(\{-1,0,1\}\) establishes a well-known connection between \(\mathbb{R}G\) and \(\mathbb{C}G\)-modules (see [12, Section II.13.2]). We put it here in a more general context, first stating the following lemma. For characters \(\chi_{1},\chi_{2}\) of the group \(G\), by \((\chi_{1},\chi_{2})\) we denote the usual inner product of \(\chi_{1}\) and \(\chi_{2}\).
**Lemma 3.5**.: _Let \(V\) be a simple \(KG\)-module with the character \(\chi_{V}\). Let \(\chi_{s}\in\operatorname{Irr}(G)\) be such that \((\chi_{V},\chi_{s})\neq 0\). Then_
\[(\chi_{V},\chi)\neq 0\Rightarrow\nu_{2}(\chi)=\nu_{2}(\chi_{s})\]
_for every \(\chi\in\operatorname{Irr}(G)\)._
Proof.: Let \(\chi\) be as in the statement of the lemma. By [10, Corollary (10.2)] there exists an automorphism \(\sigma\in\operatorname{Gal}(K(\chi)/K)\) such that \(\chi=\sigma\chi_{s}\), where \(K(\chi)\) is the extension of \(K\) by values of \(\chi\). We get
\[\nu_{2}(\chi)=\frac{1}{|G|}\sum_{g\in G}\sigma\chi_{s}(g^{2})=\sigma\left( \frac{1}{|G|}\sum_{g\in G}\chi_{s}(g^{2})\right)=\sigma(\nu_{2}(\chi_{s}))= \nu_{2}(\chi_{s}),\]
since \(\nu_{2}(\chi_{s})\) is an integer.
The above lemma justifies the following definition.
**Definition 3.6**.: Let \(V\) be a \(KG\)-module with character \(\chi_{V}\). Let \(\chi_{s}\in\operatorname{Irr}(G)\) be any character such that \((\chi_{V},\chi_{s})\neq 0\). We say that \(V\) is of
* \(\operatorname{real}/\mathbb{R}\) type if \(\nu_{2}(\chi_{s})=1\),
* \(\operatorname{complex}/\mathbb{C}\) type if \(\nu_{2}(\chi_{s})=0\),
* \(\operatorname{quaternionic}/\mathbb{H}\) type if \(\nu_{2}(\chi_{s})=-1\).
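For instance, for \(G=C_{3}=\langle g\rangle\) and a non-trivial character \(\chi\in\operatorname{Irr}(G)\) with \(\chi(g)=\omega=e^{2\pi i/3}\) we compute

\[\nu_{2}(\chi)=\frac{1}{3}\left(\chi(1)+\chi(g^{2})+\chi(g^{4})\right)=\frac{1}{3}\left(1+\omega^{2}+\omega\right)=0,\]

so any \(KG\)-module whose character involves \(\chi\) (in particular the rank \(2\) lattice on which \(g\) acts as a rotation of order \(3\)) is of complex type.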
_Remark 3.7_.: Note that an irreducible \(\mathbb{R}G\)-module \(V\) is of \(F\)-type if and only if we have an isomorphism of \(\mathbb{R}\)-algebras:
\[\operatorname{End}_{\mathbb{R}G}(V)\cong F.\]
Hence by Theorem 3.3 and Lemma 3.5 irreducible \(KG\)-modules of type \(\mathbb{C}\) or \(\mathbb{H}\) are essentially complex.
Using Lemma 3.5 again, we state the following definition.
**Definition 3.8**.: Let \(V\) be an irreducible \(\mathbb{Q}G\)-module with character \(\chi_{V}\). Let \(\chi_{s}\in\operatorname{Irr}(G)\) with \((\chi_{V},\chi_{s})\neq 0\). Define
\[m(V):=m_{\mathbb{Q}}(\chi_{s}),\]
where \(m_{\mathbb{Q}}(\chi_{s})\) is the Schur index of \(\chi_{s}\) over the rationals. For an irreducible \(G\)-lattice \(L\) we define
\[m(L):=m(L^{\mathbb{Q}}).\]
We immediately get a complex structure criterion on irreducible lattices.
**Proposition 3.9**.: _Let \(L\) be an irreducible \(G\)-lattice. The following are equivalent:_
1. \(L\) _is essentially complex._
2. \(L^{\mathbb{Q}}\) _is essentially complex._
3. \(L\) _is of type_ \(\mathbb{C},\mathbb{H}\) _or_ \(m(L)\) _is even._
Proof.: The equivalence of 1 and 2 holds by definition. By Remark 3.7 it is enough to consider the case when \(V:=L^{\mathbb{Q}}\) is of real type. Let \(m=m(V)\), let \(\chi_{V}\) be the character of \(V\), let \(\chi_{s}\in\operatorname{Irr}(G)\) be a character such that \((\chi_{V},\chi_{s})\neq 0\), and let \(\mathcal{G}\) be the class of Galois conjugates of \(\chi_{s}\) under \(\operatorname{Gal}(\mathbb{Q}(\chi_{s})/\mathbb{Q})\). We get
\[\chi_{V}=\sum_{\chi\in\mathcal{G}}m\chi. \tag{4}\]
Since every \(\chi\in\mathcal{G}\) is the character of an absolutely irreducible real representation, by Theorem 3.3, \(V\) is essentially complex if and only if \(m\) is an even number.
In the spirit of [11, Theorem 3.3] we can state the following result.
**Theorem 3.10**.: _A \(G\)-lattice \(L\) is essentially complex if and only if every simple component \(V\) of \(L^{\mathbb{Q}}\) of type \(\mathbb{R}\) with odd \(m(V)\) occurs with even multiplicity._
Proof.: It is enough to note that if a simple module \(V\) with odd \(m=m(V)\) occurs in \(L^{\mathbb{Q}}\) with multiplicity \(k\), then \(kV\) is isomorphic to a homogeneous component of \(L^{\mathbb{Q}}\) and the formula (4) implies that
\[k\cdot\chi_{V}=\sum_{\chi\in\mathcal{G}}km\chi.\]
Hence, using Theorem 3.3, if we want \(L\), or equivalently \(kV\), to be essentially complex, we need \(k\) to be even.
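For example, if \(L=\mathbb{Z}^{n}\) is a trivial \(G\)-lattice, then every simple component of \(L^{\mathbb{Q}}\) is the trivial module, which is of real type with \(m=1\); by Theorem 3.10, \(L\) is essentially complex if and only if \(n\) is even. In particular, the torus \(\mathbb{R}^{n}/\mathbb{Z}^{n}\) carries a translation-invariant complex structure precisely for even \(n\).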
## 4 Complex Vasquez invariant
In this section, we will prove Theorem 1. As in the real case, we first state the definition which follows from the theorem.
**Definition 4.1**.: For any finite group \(G\) a _complex Vasquez number_\(n_{\mathbb{C}}(G)\) is the smallest natural number which satisfies the conclusion of Theorem 1.
Proof of Theorem 1.: Let \(\Gamma\) be defined by the short exact sequence (1). By Theorem 2.1 there exists a \(G\)-submodule \(N\subset L\), i.e. a normal subgroup \(N\) of \(\Gamma\), such that \(\Gamma/N\) is a Bieberbach group of dimension less than or equal to \(n(G)\).
Let
\[L^{\mathbb{Q}}/N^{\mathbb{Q}}=m_{1}V_{1}\oplus\ldots\oplus m_{k}V_{k}\]
be a decomposition such that \(V_{1},\ldots,V_{k}\) are irreducible, pairwise non-isomorphic \(\mathbb{Q}G\)-modules. Assume that for some \(1\leq i\leq k\), \(m_{i}V_{i}\) is not essentially complex. This means that \(V_{i}\) is of real type and that both \(m_{i}\) and \(m(V_{i})\) are odd. Since \(L\) is essentially complex, by Theorem 3.10 the multiplicity of \(V_{i}\) in \(L^{\mathbb{Q}}\) is even, so \(V_{i}\) must be a composition factor of \(N^{\mathbb{Q}}\). This shows that we can find a maximal submodule \(M^{\prime}\) of \(N^{\mathbb{Q}}\) such that both \(M^{\prime}\) and \(L^{\mathbb{Q}}/M^{\prime}\) are essentially complex. In particular, we get
\[L^{\mathbb{Q}}/M^{\prime}=n_{1}^{\prime}V_{1}\oplus\ldots\oplus n_{k}^{\prime }V_{k}, \tag{5}\]
where
\[n_{i}^{\prime}=\left\{\begin{array}{ll}m_{i}+1&\mbox{if $m_{i}V_{i}$ is not essentially complex},\\ m_{i}&\mbox{otherwise}.\end{array}\right. \tag{6}\]
We conclude that
\[\dim_{\mathbb{Q}}(L^{\mathbb{Q}}/M^{\prime})\leq 2\dim_{\mathbb{Q}}(L^{ \mathbb{Q}}/N^{\mathbb{Q}})\leq 2n(G). \tag{7}\]
Now, let
\[M:=L\cap M^{\prime}.\]
We easily get that \(M\) is a \(\mathbb{Z}\)-pure submodule of \(L\) with \(\mathbb{Z}\)-rank equal to \(\dim_{\mathbb{Q}}(M^{\prime})\) (see [7, (16.19)]). Obviously \(M\subset L\cap N^{\mathbb{Q}}=N\) and we have
\[\Gamma/N\cong(\Gamma/M)\big{/}(N/M).\]
Let \(\gamma\in\Gamma\setminus M\) be such that \(\gamma^{k}\in M\) for some positive integer \(k\). Then \(\gamma\not\in N\) since \(N/M\), as a subgroup of \(L/M\), is torsion-free. We get that \(\gamma\in\Gamma\setminus N\), but \(\gamma^{k}\in N\), so \(\gamma N\) is an element of finite order in \(\Gamma/N\), which contradicts our assumptions on \(\Gamma/N\). We get that \(\Gamma/M\) is torsion-free of real dimension \(\operatorname{rk}_{\mathbb{Z}}(L/M)=\dim_{\mathbb{Q}}(L^{\mathbb{Q}}/M^{ \prime})\), hence using inequality (7) we prove our claim, showing in particular that
\[n_{\mathbb{C}}(G)\leq n(G).\]
With Theorem 1 we have obtained an upper bound for the complex Vasquez number. In fact, we can show more:
**Lemma 4.2**.: _Let \(G\) be a finite group. Then_
\[n(G)/2\leq n_{\mathbb{C}}(G).\]
Proof.: Suppose, to the contrary, that \(G\) is a finite group for which \(n_{\mathbb{C}}(G)<n(G)/2\). Let \(\Gamma\) be a Bieberbach group, defined by the short exact sequence (1). The lattice \(L\) does not have to be essentially complex, but, arguing as in the proof of Theorem 1, there exists a \(G\)-lattice \(L^{\prime}\) of minimal \(\mathbb{Z}\)-rank such that \(L\oplus L^{\prime}\) admits a complex structure. In particular we have \(\operatorname{rk}_{\mathbb{Z}}L^{\prime}\leq\operatorname{rk}_{\mathbb{Z}}L\). Moreover, if \(\alpha\in H^{2}(G,L)\) defines the group \(\Gamma\), then
\[\alpha+0\in H^{2}(G,L)\oplus H^{2}(G,L^{\prime})\]
defines an aspherical Kahler group \(\Gamma^{\prime}\). By our assumption there exists an essentially complex \(G\)-submodule \(M\subset L\oplus L^{\prime}\), such that \(\Gamma^{\prime}/M\) is torsion-free of real dimension
\[\operatorname{rk}_{\mathbb{Z}}(L\oplus L^{\prime})/M\leq 2n_{\mathbb{C}}(G)<n(G).\]
The minimality of \(L^{\prime}\) implies that it is not an essentially complex module, hence \(M^{\mathbb{Q}}\cap L^{\mathbb{Q}}\neq 0\) and in particular \(M\cap L\neq 0\). Now we can argue as in the proof of Theorem 2.1. By Remark 2.2, \((M\cap L)\oplus(M\cap L^{\prime})\) is a \(G\)-sublattice and a pure \(\mathbb{Z}\)-submodule of \(L\oplus L^{\prime}\). Using a composition of maps
\[L\oplus L^{\prime}\to L/(M\cap L)\oplus L^{\prime}/(M\cap L^{\prime}) \rightarrow(L\oplus L^{\prime})/M\]
we get that the image of \(\alpha\) in \(H^{2}(G,L/(M\cap L))\) is special and hence \(\Gamma/(M\cap L)\) is a Bieberbach group of dimension
\[\operatorname{rk}_{\mathbb{Z}}L/(M\cap L)\leq\operatorname{rk}_{\mathbb{Z}}(L\oplus L^{\prime})/M<n(G),\]
which contradicts the minimality of the Vasquez number (see Definition 2.3).
In some cases, the following lemma can give a better estimate of the complex Vasquez invariant than the proof of Theorem 1.
**Lemma 4.3**.: _Let \(G\) be a finite group and \(\operatorname{Irr}_{1}(G):=\{\chi\in\operatorname{Irr}(G):\nu_{2}(\chi)=1\}\). Then_
\[n_{\mathbb{C}}(G)\leq\frac{1}{2}\left(n(G)+\sum_{\chi\in\operatorname{Irr}_{1} (G)}m_{\mathbb{Q}}(\chi)\chi(1)\right).\]
Proof.: Let \(\mathcal{L}\) be the set of representatives of isomorphism classes of irreducible \(\mathbb{Q}G\)-modules of real type. Equations (5) and (6) show that
\[2n_{\mathbb{C}}(G)\leq n(G)+\sum_{L\in\mathcal{L}}\dim_{\mathbb{Q}}L.\]
Since every simple \(\mathbb{C}G\)-module can be a component of only one of the modules \(L^{\mathbb{C}}\), for \(L\in\mathcal{L}\), we get that
\[\sum_{L\in\mathcal{L}}\dim_{\mathbb{Q}}L=\sum_{\chi\in\operatorname{Irr}_{1}( G)}m_{\mathbb{Q}}(\chi)\chi(1).\]
To sum up, we have the following bounds on the complex Vasquez invariant.
**Proposition 4.4**.: _Let \(G\) be a finite group. Then_
1. \(n(G)/2\leq n_{\mathbb{C}}(G)\)_._
2. \(n_{\mathbb{C}}(G)\leq n(G)\)_._
3. \(n_{\mathbb{C}}(G)\leq\frac{1}{2}\left(n(G)+\sum_{\chi\in \operatorname{Irr}_{1}(G)}m_{\mathbb{Q}}(\chi)\chi(1)\right)\)_._
**Corollary 4.5**.: _Let \(G\) be a group of odd order. Then_
\[n_{\mathbb{C}}(G)=\lfloor(n(G)+1)/2\rfloor.\]
Proof.: It is enough to note that for a group \(G\) of odd order the set \(\operatorname{Irr}_{1}(G)\) consists only of the trivial character, and then use the lower and upper bounds of Proposition 4.4.
_Example 4.6_.: Let \(G=C_{3}^{k}\) be an elementary abelian \(3\)-group of rank \(k\). By [5, Corollary on page 126], \(n(G)=3^{k-1}(3^{k}-1)/2\), and by Corollary 4.5 we get that
\[n_{\mathbb{C}}(C_{3}^{k})=\left\{\begin{array}{ll}n(C_{3}^{k})/2&\quad\text{ if $k$ is even,}\\ (n(C_{3}^{k})+1)/2&\quad\text{ if $k$ is odd.}\end{array}\right.\]
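In the smallest cases this gives

\[n_{\mathbb{C}}(C_{3})=\frac{n(C_{3})+1}{2}=1\quad\text{and}\quad n_{\mathbb{C}}(C_{3}^{2})=\frac{n(C_{3}^{2})}{2}=\frac{12}{2}=6,\]

since \(n(C_{3})=1\) and \(n(C_{3}^{2})=3\cdot(3^{2}-1)/2=12\).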
By the above example we get that the lower bound for the complex Vasquez invariant is sharp. The following one shows that one of the upper bounds also has this property.
**Proposition 4.7**.: _Let \(G\) be an elementary abelian \(2\)-group of rank \(k\geq 2\). Then_
\[n_{\mathbb{C}}(G)=\frac{1}{2}\left(n(G)+\sum_{\chi\in\operatorname{lrr}_{1}(G )}m_{\mathbb{Q}}(\chi)\chi(1)\right)=2^{k-1}+2^{k-2}(2^{k}-1).\]
Proof.: Let \(\mathcal{X}\) denote the set of non-trivial elements of \(G\). Let
\[S=\bigoplus_{a\in\mathcal{X}}\operatorname{ind}_{\langle a\rangle}^{G}\mathbb{Z}.\]
By [5, Theorem 2] there exists a special element \(\alpha\in H^{2}(G,S)\) with the property that
\[\nu_{*}(\alpha)\text{ is not special} \tag{8}\]
for any non-trivial submodule \(M\) of \(S\), where \(\nu\colon S\to S/M\) is the natural mapping. In particular
\[n(G)=\operatorname{rk}_{\mathbb{Z}}(S)=2^{k-1}(2^{k}-1).\]
For every \(a\in\mathcal{X}\) let \(\chi_{a}\) denote the trivial character of the group \(\langle a\rangle\). The character \(\chi_{S}\) of the \(G\)-module \(S\) is given by the formula
\[\chi_{S}=\sum_{a\in\mathcal{X}}\operatorname{ind}_{\langle a\rangle}^{G} \chi_{a}.\]
Let \(\chi_{G}\) be the trivial character of \(G\). If by \(\mathcal{K}\) we denote the set of subgroups of \(G\) of index \(2\), then
\[\operatorname{Irr}(G)=\operatorname{Irr}_{1}(G)=\{\chi_{G}\}\cup\{\chi_{K}:K \in\mathcal{K}\},\]
where \(\chi_{K}\) is the irreducible character of \(G\) with kernel \(K\in\mathcal{K}\).
By the Frobenius reciprocity, we get
\[(\chi_{G},\chi_{S})=\sum_{a\in\mathcal{X}}(\chi_{G},\operatorname{ind}_{\langle a\rangle}^{G}\chi_{a})=\sum_{a\in\mathcal{X}}(\operatorname{res}_{\langle a\rangle}\chi_{G},\chi_{a})=|\mathcal{X}|=2^{k}-1\]
and for \(K\in\mathcal{K}\)
\[(\chi_{K},\chi_{S})=\sum_{a\in\mathcal{X}}(\chi_{K},\operatorname{ind}_{ \langle a\rangle}^{G}\chi_{a})=\sum_{a\in\mathcal{X}}(\operatorname{res}_{ \langle a\rangle}\chi_{K},\chi_{a})=|K|-1=2^{k-1}-1.\]
The above calculations show that
\[\chi_{S}=(2^{k}-1)\chi_{G}+\sum_{K\in\mathcal{K}}(2^{k-1}-1)\chi_{K},\]
hence \(S\) does not admit any complex structure, since all the multiplicities above are odd (cf. Theorem 3.10). However, if \(R\) is the regular \(G\)-module, then
\[L:=S\oplus R,\]
with character
\[\chi_{L}=2^{k}\chi_{G}+\sum_{K\in\mathcal{K}}2^{k-1}\chi_{K}\]
is an essentially complex \(G\)-lattice. The cohomology class
\[\alpha=\alpha+0\in H^{2}(G,S)\oplus H^{2}(G,R)\]
is of course special.
Assume that \(M\neq 0\) is a \(G\)-submodule of \(L\) such that \(L/M\) is essentially complex and the image of \(\alpha\) in \(H^{2}(G,L/M)\) is special. Obviously, \(M\) is essentially complex itself. Arguing as in the proof of Lemma 4.2, we get that
\[M\cap S\neq 0\]
is a \(\mathbb{Z}\)-pure submodule of \(S\) and the projection of \(\alpha\) to \(H^{2}(G,S/(M\cap S))\) gives a special element, which contradicts property (8).
To sum up, \(L\) and \(\alpha\in H^{2}(G,L)\) define an aspherical Kahler group \(\Gamma^{\prime}\), of dimension \(2^{k-1}+2^{k-2}(2^{k}-1)\), such that for every non-trivial and normal subgroup \(M\) of \(\Gamma^{\prime}\) with the property that \(M\subset L\), \(\Gamma^{\prime}/M\) is not an aspherical Kahler group. This shows that
\[n_{\mathbb{C}}(G)\geq 2^{k-1}+2^{k-2}(2^{k}-1)=\frac{1}{2}\left(n(G)+\sum_{ \chi\in\operatorname{Irr}_{1}(G)}m_{\mathbb{Q}}(\chi)\chi(1)\right).\]
Applying Proposition 4.4.3 finishes the proof.
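For instance, for \(k=2\) we have \(n(G)=2\cdot(2^{2}-1)=6\) and the four (linear) characters in \(\operatorname{Irr}_{1}(G)\) satisfy \(m_{\mathbb{Q}}(\chi)=\chi(1)=1\), so

\[n_{\mathbb{C}}(G)=\frac{1}{2}(6+4)=5=2^{1}+2^{0}\cdot(2^{2}-1),\]

in accordance with the statement of Proposition 4.7.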
## 5 Holomorphic maps
From the algebraic point of view, what we have obtained so far is an epimorphism from one fundamental group onto another. This gives us a _continuous_ map between complex manifolds. In this section, we will investigate the situation in which the induced map is in fact holomorphic.
Let us start, as usual, with a \(G\)-lattice \(L\) and its \(\mathbb{Z}\)-pure sublattice \(M\). The natural homomorphism induces an \(\mathbb{R}G\)-epimorphism \(f\colon L^{\mathbb{R}}\to(L/M)^{\mathbb{R}}\), given by
\[(1\otimes l)\mapsto 1\otimes(l+M)\]
for \(l\in L\). Note that we skip the subscript \(\mathbb{Z}\) in the notation of the tensor product.
**Lemma 5.1**.: \(\ker f=M^{\mathbb{R}}\)_._
Proof.: Take \(m\in M\). We have
\[f(1\otimes m)=1\otimes(m+M)=0,\]
hence \(M^{\mathbb{R}}\subset\ker f\). The conclusion follows by noting that
\[\dim_{\mathbb{R}}M^{\mathbb{R}}+\dim_{\mathbb{R}}(L/M)^{\mathbb{R}}=\dim_{ \mathbb{R}}L^{\mathbb{R}}.\]
Assume that \(L\) and \(L/M\) are essentially complex with complex structures \(J\) and \(J^{\prime}\) respectively.
**Definition 5.2**.: The map \(f\) is _holomorphic_ if
\[fJ=J^{\prime}f.\]
**Lemma 5.3**.: _Assume that \(f\) is holomorphic. Then:_
1. _The kernel of_ \(f\) _is_ \(J\)_-invariant._
2. \(J^{\prime}\) _is uniquely determined by_ \(J\)_._
Proof.: Let \(v\in\ker f\). Then
\[fJ(v)=J^{\prime}f(v)=J^{\prime}0=0\]
and \(J(v)\in\ker f\). Now take \(w\in(L/M)^{\mathbb{R}}\) and \(v\in L^{\mathbb{R}}\) such that \(f(v)=w\). We have
\[J^{\prime}(w)=J^{\prime}f(v)=fJ(v).\]
**Corollary 5.4**.: _The map \(f\) is holomorphic if and only if \(\ker f\) is \(J\)-invariant._
Proof.: Existence of \(J^{\prime}\) is given by the isomorphism theorem. Moreover, for \(w=f(v)\in(L/M)^{\mathbb{R}}\) we have
\[J^{\prime 2}(w)=J^{\prime 2}f(v)=J^{\prime}fJ(v)=fJ^{2}(v)=f(-v)=-f(v)=-w.\]
The following example shows that the algebraic construction given by Theorem 1 does not give us holomorphic maps in general. We deal with this problem in the next section.
_Example 5.5_.: Let the Bieberbach group \(\Gamma\subset\operatorname{Iso}(\mathbb{R}^{6})\) be generated by
\[(I,e_{1}),\dots,(I,e_{6}),\left(-I_{2}\oplus I_{4},\frac{1}{2}e_{6}\right).\]
\(\Gamma\) fits into the short exact sequence
\[0\to\mathbb{Z}^{6}\to\Gamma\to C_{2}\to 1.\]
Since the holonomy representation of \(\Gamma\) splits into two homogeneous components, every complex structure on \(\Gamma\) will be a direct sum of two complex structures corresponding to the splitting. Let us focus on the four-dimensional part of the representation, since the other one is quotiented out by the construction. Using the notation of the proof of Theorem 2.1, we have \(L=\mathbb{Z}^{4}\) with the trivial \(G\)-action, \(L_{0}=\{(0,0,0,d)^{T}:d\in\mathbb{Z}\}\) and \(L^{\prime}_{0}=\{(a,b,c,0)^{T}:a,b,c\in\mathbb{Z}\}\). Assume that \(L\) has the following complex structure (a direct computation shows that \(J^{2}=-\operatorname{id}\)):
\[J=\begin{bmatrix}-1&-\frac{\sqrt{3}}{3}&-1&0\\ 3+\sqrt{3}&0&0&\sqrt{3}\\ 1-\sqrt{3}&\frac{\sqrt{3}}{3}&1&-1\\ 1+\sqrt{3}&1&1+\sqrt{3}&0\end{bmatrix}.\]
Our goal is to find a rank \(2\) submodule \(M\) of \(L^{\prime}_{0}\) such that \(M^{\mathbb{R}}\) is \(J\)-invariant. Assume that such an \(M\) exists and \(v=(a,b,c,0)^{T}\) is a non-zero element of \(M\). We get
\[Jv=\begin{bmatrix}-\frac{1}{3}(3a+\sqrt{3}b+3c)\\ (3+\sqrt{3})a\\ \frac{1}{3}((3-3\sqrt{3})a+\sqrt{3}b+3c)\\ (1+\sqrt{3})a+b+(1+\sqrt{3})c\end{bmatrix}.\]
Since the last coordinate of \(Jv\) equals \((1+\sqrt{3})(a+c)+b\) and is zero, the irrationality of \(\sqrt{3}\) forces \(b=a+c=0\), so \(v=(a,0,-a,0)^{T}\) and \(M\) is of rank at most \(1\), a contradiction.
## 6 Changing complex structures
As in the previous sections, denote by \(K\) the ring of integers or the field of rational or real numbers. Let \(V\) be a \(KG\)-module with a complex structure \(J\). We have the obvious decomposition of the \(\mathbb{C}G\)-module \(V^{\mathbb{C}}\)
\[V^{\mathbb{C}}=V^{1,0}_{J}\oplus V^{0,1}_{J},\]
where \(V^{1,0}_{J}\) and \(V^{0,1}_{J}\) are the eigenspaces of the action of \(J\) with the eigenvalues \(i\) and \(-i\) respectively. Motivated by geometry, we will call them the _holomorphic_ and _antiholomorphic_ parts of \(V\). In the case when \(J\) is fixed we will drop the subscript in the notation.
Note that if we have two complex structures on \(V\), say \(J\) and \(J^{\prime}\), then \(V^{1,0}_{J}\) and \(V^{1,0}_{J^{\prime}}\) are isomorphic as \(\mathbb{R}G\)-modules. However, they may be non-isomorphic as \(\mathbb{C}G\)-modules. We will deal with this problem now in the case \(K=\mathbb{R}\). Note that it is enough to consider homogeneous modules, since they are preserved by all complex structures.
**Lemma 6.1**.: _Let \(V\) be a homogeneous \(\mathbb{R}G\)-module with complex structures \(J\) and \(J^{\prime}\). Then \(V^{1,0}_{J}\) and \(V^{1,0}_{J^{\prime}}\) are isomorphic in one of the following cases:_
1. _the simple component of_ \(V\) _is of real or quaternionic type;_
2. _the complex structures are conjugated in_ \(\operatorname{Aut}_{\mathbb{R}G}(V)\)_._
Proof.: The case (a) is obvious, since then \(V^{\mathbb{C}}\) is homogeneous. Now assume that \(J^{\prime}=AJA^{-1}\) for some \(A\in\operatorname{Aut}_{\mathbb{R}G}(V)\). It is easy to check that \(V^{1,0}_{J^{\prime}}=AV^{1,0}_{J}\).
_Remark 6.2_.: Note that in case (b) of the above lemma it is fairly easy to determine the conjugacy class of a complex structure \(J\) in \(\operatorname{Aut}_{\mathbb{R}G}(V)\), since if \(S\) is a simple component of \(V\) of multiplicity \(n\), then
\[\operatorname{Aut}_{\mathbb{R}G}(V)\cong\operatorname{GL}_{n}(\operatorname{ End}_{\mathbb{R}G}(S))\cong\operatorname{GL}_{n}(\mathbb{C}).\]
Identifying \(J\) as an element of \(\operatorname{GL}_{n}(\mathbb{C})\) it is enough to count its eigenvalues, which are of course \(\pm i\).
**Proposition 6.3**.: _Let \(V\) be a homogeneous \(\mathbb{R}G\)-module with a complex structure \(J\). Let \(W\) be an essentially complex submodule of \(V\). There exists a complex structure \(J^{\prime}\), such that \(J^{\prime}W=W\) and \(V^{1,0}_{J}\cong V^{1,0}_{J^{\prime}}\)._
Proof.: Let \(V=W\oplus W^{\prime}\) and let \(S\) be a simple component of \(V\). If \(S\) is of real or quaternionic type, by Lemma 6.1 it is enough to take \(J^{\prime}=J_{W}\oplus J_{W^{\prime}}\) where \(J_{W}\) and \(J_{W^{\prime}}\) are any complex structures on \(W\) and \(W^{\prime}\) respectively.
Assume that \(S\) is of complex type. Denote by \(n\) and \(k\) the multiplicity of \(S\) in \(V\) and \(W\) respectively. We get that
\[W=\bigoplus_{i=1}^{k}S\text{ and }W^{\prime}=\bigoplus_{i=k+1}^{n}S. \tag{9}\]
Identifying \(\operatorname{Aut}_{\mathbb{R}G}(V)\) with \(\operatorname{GL}_{n}(\mathbb{C})\) as in Remark 6.2, assume that a Jordan form of \(J\) is \(\operatorname{diag}(a_{1},\ldots,a_{n})\), where \(a_{i}=\pm i\) for \(i=1,\ldots,n\). By the same remark and the form (9) of \(W\) and \(W^{\prime}\) it is enough to take \(J^{\prime}=\operatorname{diag}(a_{1},\ldots,a_{n})\).
**Corollary 6.4**.: _Let \(L\) be a \(G\)-module with a complex structure \(J\). Assume that \(M\) is an essentially complex \(\mathbb{Z}\)-pure sublattice of \(L\). There exists a complex structure \(J^{\prime}\) on \(L\) such that \(M^{\mathbb{R}}\) is \(J^{\prime}\)-invariant and \(L^{1,0}_{J}\cong L^{1,0}_{J^{\prime}}\). In particular we have \(M^{1,0}_{J^{\prime}}\subset L^{1,0}_{J^{\prime}}\)._
Proof.: Let
\[L^{\mathbb{R}}=L_{1}\oplus\ldots\oplus L_{k}\]
be the decomposition into homogeneous components. We get that
\[M^{\mathbb{R}}=M_{1}\oplus\ldots\oplus M_{k}\]
is also a decomposition into homogeneous components and
\[J=J_{1}\oplus\ldots\oplus J_{k},\]
where for every \(1\leq i\leq k\), \(J_{i}\) is a complex structure of \(L_{i}\) and \(M_{i}=M^{\mathbb{R}}\cap L_{i}\). By Proposition 6.3, for every \(1\leq i\leq k\) there exists a complex structure \(J^{\prime}_{i}\) of \(L_{i}\) giving an isomorphic holomorphic part and for which \(J^{\prime}_{i}M_{i}=M_{i}\). Taking \(J^{\prime}=J^{\prime}_{1}\oplus\ldots\oplus J^{\prime}_{k}\) and observing that
\[L^{1,0}_{J}=(L_{1})^{1,0}_{J_{1}}\oplus\ldots\oplus(L_{k})^{1,0}_{J_{k}}\]
(and similarly for \(J^{\prime}\)), we get the desired result.
## 7 Holomorphic tangent bundles
Let an aspherical Kahler group \(\Gamma\) of dimension \(n\) be given by the short exact sequence (1) and let \(X=\mathbb{R}^{2n}/\Gamma\). By the proof of [15, Proposition 1.1], the tangent bundle of \(X\) is given by
\[TX=(\tilde{X}\times L^{\mathbb{R}})/\Gamma,\]
where the action of \(\Gamma\) on \(\tilde{X}\times L^{\mathbb{R}}\) is given by
\[\gamma(x,v)=(\gamma\cdot x,d\gamma\cdot v),\]
for \(\gamma\in\Gamma,x\in\tilde{X},v\in L^{\mathbb{R}}\). Note that the universal cover \(\tilde{X}\) equals \(\mathbb{R}^{2n}\) and that for \(\gamma=(A,a)\in\operatorname{GL}(2n,\mathbb{R})\ltimes\mathbb{R}^{2n}\) we have \(d\gamma=A\), so the action of \(\Gamma\) on \(L^{\mathbb{R}}\) comes exactly from the \(G\)-module \(L\). Let \(J\) be a complex structure on \(X\). Denote by \(X_{J}\) the corresponding
generalized hyperelliptic manifold. By [9, Proposition 2.6.4], up to isomorphism of complex vector bundles, the holomorphic tangent bundle of \(X_{J}\) is given by
\[TX_{J}^{1,0}=(\tilde{X}\times L_{J}^{1,0})/\Gamma.\]
Let \(M\subset L\) be an essentially complex submodule such that \(\Delta=\Gamma/M\) is torsion-free. By [15, Main Theorem 2.3] we get a submersion \(f\colon X\to Y\), given by the natural homomorphism \(\Gamma\to\Delta\), where \(\pi_{1}(Y)=\Delta\). Moreover, by [15, Lemmas 2.6 and 2.7] we have the short exact sequence of real vector bundles
\[0\longrightarrow\ker\rho\longrightarrow TX\stackrel{{\rho}}{{ \longrightarrow}}f^{*}(TY)\longrightarrow 0,\]
where \(\ker\rho=(\tilde{X}\times M^{\mathbb{R}})/\Gamma\) is a pullback of a vector bundle over \(Y\).
_Remark 7.1_.: Note that if \(f\) is holomorphic, then by Lemma 5.3 the complex structure on \(Y\) is fixed. To keep the notation as clear as possible, we will not introduce a new symbol for it.
**Theorem 7.2**.: _Let \(X\) be a flat manifold with a complex structure \(J\). There exists a complex structure \(J^{\prime}\) on X and a compact flat Kahler manifold \(Y\) such that the following hold:_
1. _There exists a holomorphic submersion_ \(f\colon X_{J^{\prime}}\to Y\)_._
2. \(TX_{J}^{1,0}\) _and_ \(TX_{J^{\prime}}^{1,0}\) _are isomorphic complex vector bundles._
3. \(TX_{J^{\prime}}^{1,0}\) _is isomorphic to a pullback of a complex vector bundle over_ \(Y\)_._
Proof.: We will keep the notation of the discussion preceding the statement of the theorem. By Corollary 5.4, the map \(f\) will be holomorphic for any complex structure \(J^{\prime}\) such that \(M^{\mathbb{R}}\) is \(J^{\prime}\)-invariant. Using Corollary 6.4 we get that not only does such a \(J^{\prime}\) exist, but it also gives us an isomorphism of \(TX_{J}^{1,0}\) and \(TX_{J^{\prime}}^{1,0}\). Moreover, since \(M_{J^{\prime}}^{1,0}\subset L_{J^{\prime}}^{1,0}\), we have a short exact sequence of complex vector bundles
\[0\longrightarrow\ker\pi\longrightarrow TX_{J^{\prime}}^{1,0}\stackrel{{ \pi}}{{\longrightarrow}}f^{*}(TY^{1,0})\longrightarrow 0,\]
where \(\ker\pi=(\tilde{X}\times M_{J^{\prime}}^{1,0})/\Gamma=\ker\rho^{1,0}\), hence it is a pullback of some complex vector bundle over \(Y\). By the construction, \(TX_{J^{\prime}}^{1,0}\) is isomorphic to the vector bundle \((\tilde{X}\times(M^{1,0}\oplus L^{1,0}/M^{1,0}))/\Gamma\), which is exactly the Whitney sum of \(\ker\pi\) and \(f^{*}(TY^{1,0})\). This finishes the proof.
The proof of the second main theorem of the paper is now a formality.
Proof of Theorem 2.: Keeping the notation of this section, for \(i\in\mathbb{N}\) we have
\[c_{i}(X)=f^{*}(c_{i}(E)),\]
where \(E\) is a complex vector bundle over \(Y\). For \(i>n_{\mathbb{C}}(G)\geq\dim_{\mathbb{C}}(Y)\) we have
\[c_{i}(E)\in H^{2i}(Y,\mathbb{Z})=0\]
and the result follows.
_Remark 7.3_.: In the language of [2], we say that \(L\) with the complex structures \(J\) and \(J^{\prime}\) has the same Hodge type. Using the description of the space of complex structures on \(X\), given for example in [1], we can show more: \(J^{\prime}\) may be constructed in such a way that there is a continuous path of complex structures on \(X\) between \(J\) and \(J^{\prime}\). Deformation theory then gives another way of showing that \(TX_{J}^{1,0}\) and \(TX_{J^{\prime}}^{1,0}\) are isomorphic.
## Acknowledgments
The authors would like to thank Andrzej Szczepanski for helpful discussions.
|
2309.08406 | Constraint-Free Structure Learning with Smooth Acyclic Orientations | The structure learning problem consists of fitting data generated by a
Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this
context, differentiable approaches constrain or regularize the optimization
problem using a continuous relaxation of the acyclicity property. The
computational cost of evaluating graph acyclicity is cubic on the number of
nodes and significantly affects scalability. In this paper we introduce COSMO,
a constraint-free continuous optimization scheme for acyclic structure
learning. At the core of our method, we define a differentiable approximation
of an orientation matrix parameterized by a single priority vector. Differently
from previous work, our parameterization fits a smooth orientation matrix and
the resulting acyclic adjacency matrix without evaluating acyclicity at any
step. Despite the absence of explicit constraints, we prove that COSMO always
converges to an acyclic solution. In addition to being asymptotically faster,
our empirical analysis highlights how COSMO performance on graph reconstruction
compares favorably with competing structure learning methods. | Riccardo Massidda, Francesco Landolfi, Martina Cinquini, Davide Bacciu | 2023-09-15T14:08:09Z | http://arxiv.org/abs/2309.08406v1 | # Constraint-Free Structure Learning with Smooth Acyclic Orientations
###### Abstract
The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this context, differentiable approaches constrain or regularize the optimization problem using a continuous relaxation of the acyclicity property. The computational cost of evaluating graph acyclicity is cubic on the number of nodes and significantly affects scalability. In this paper we introduce cosmo, a constraint-free continuous optimization scheme for acyclic structure learning. At the core of our method, we define a differentiable approximation of an orientation matrix parameterized by a single priority vector. Differently from previous work, our parameterization fits a smooth orientation matrix and the resulting acyclic adjacency matrix without evaluating acyclicity at any step. Despite the absence of explicit constraints, we prove that cosmo always converges to an acyclic solution. In addition to being asymptotically faster, our empirical analysis highlights how cosmo performance on graph reconstruction compares favorably with competing structure learning methods.
## 1 Introduction
Directed Acyclic Graphs (DAGs) are a fundamental tool in several fields for representing probabilistic or causal information about the world (Koller and Friedman, 2009; Pearl, 2009). A fundamental problem in this context concerns the retrieval of the underlying structure between a set of variables, i.e., the problem of identifying which arcs exist between the nodes associated to the variables of interest (Spirtes et al., 2000). In recent years, applications of structure learning to causal discovery have led to growing interest in tackling the problem using gradient-based methods that optimize a smooth representation of a DAG (Vowels et al., 2022). For instance, while not suitable for causal discovery _per se_ (Reisach et al., 2021), acyclic structure learners are fundamental components of most state-of-the-art continuous causal discovery algorithms (Lachapelle et al., 2020; Brouillard et al., 2020; Lorch et al., 2022). A well-established technique, popularized by notears (Zheng et al., 2018), consists of computing the trace of the matrix exponential of the adjacency matrix, which gives a differentiable measure that vanishes if and only if the corresponding graph is acyclic. However, despite their widespread adoption, notears-like acyclicity constraints impose a cubic number of operations in the number of nodes per optimization step and substantially hinder the scalability and applicability of continuous discovery algorithms.
In this context, we propose a novel formulation and optimization scheme for learning acyclic graphs that avoids evaluating the acyclicity of the solution in any optimization step. Notably, our proposal does not sacrifice theoretical guarantees of asymptotic convergence to acyclic solutions which apply to existing structure learning methods (Ng et al.,
2023). At the core of our scheme lies a novel definition of _smooth_ orientation matrix, i.e., a differentiable approximation of an orientation matrix parameterized by a priority vector on the graph nodes. The priority vector represents a discrete orientation where each node has an outgoing arc to all nodes with higher priority. We define our _smooth_ orientation matrix by applying a tempered sigmoid to the pair-wise priority differences, which equals the discrete orientation in the limit of the sigmoid temperature to zero. By annealing the temperature during training, we prove that we are effectively decreasing an upper bound on the acyclicity of the solution. Further, we show that the parameterization represents the space of Directed Acyclic Graphs (DAGs) as a differentiable function of a directed graph and our smooth orientation matrix. Since our approach only requires a quadratic number of operations per optimization step, its constraint-free scheme can be used as a direct and faster replacement for the notears constrained optimization problem. Overall, we propose a methodology to perform constraint-free structure learning with smooth acyclic orientations, which we name cosmo (Figure 1).
Contributions. We summarize the key contributions of this paper as follows:
* We introduce a differentiable relaxation of an acyclic orientation matrix, which we name _smooth_ orientation matrix (Definition 1). The matrix depends on a temperature value that controls the approximation of the discrete orientation matrix. We prove that we can represent all and only DAGs as the element-wise multiplication of a weighted adjacency matrix and our novel _smooth_ orientation matrix (Theorem 1).
* We propose cosmo, the first unconstrained optimization approach that learns a DAG entirely avoiding acyclicity constraints (Section 4.2). cosmo represents DAGs through a _smooth_ orientation matrix and requires solving a unique optimization problem while annealing the temperature. Since reconstructing the DAG requires a number of operations quadratic on the number of nodes, cosmo is an order of magnitude faster than cubic-expensive constrained methods in literature.
* We connect our proposed scheme to existing constrained approaches and prove that annealing the temperature during training effectively decreases an upper bound on the acyclicity of the _smooth_ orientation matrix (Theorem 2).
* We perform a thorough experimental comparison of cosmo and competing structure learning approaches (Section 5). The empirical results report how cosmo achieves comparable structure recovery performances in significantly less time. Further, we highlight how cosmo consistently outperforms previous partially unconstrained structure learning proposals.
Figure 1: cosmo frames acyclic structure learning as an unconstrained optimization problem. To optimize an acyclic adjacency matrix (_right_), we propose to learn a directed graph (_left_) and a priority vector on the nodes (_top_). To optimize the priority vector, we introduce a _smooth_ acyclic orientation (_center_) where each lower-priority node feeds into each higher-priority node. Gray dashed arrows denote arcs with approximately zero weight. We prove that by annealing the temperature, the smooth acyclic orientation \(S_{t,\varepsilon}\) converges to a discrete orientation and, consequently, to a DAG.
In the following, we discuss related works in Section 2 and report the necessary background on graph theory and structure learning in Section 3. Then, we introduce cosmo and our original contributions in Section 4. Finally, we report and discuss our empirical analysis in Section 5.
## 2 Related Works
In this section, we report related works aiming to improve, approximate, or avoid altogether the constrained optimization scheme introduced by notears(Zheng et al., 2018) for acyclic structure learning. We summarize the comparison of our proposal with the existing literature in Table 1.
Constraint Reformulation. nobears (Lee et al., 2019) proposes to estimate the acyclicity constraint by approximating the spectral radius of the adjacency matrix. Given a maximum number \(k\) of iterations, the constraint can then be evaluated on a graph with \(d\) nodes in \(O(kd^{2})\) time. Similarly, tmpi (Zhang et al., 2022) proposes an iterative approximation of the constraint that also results in \(O(kd^{2})\) computational complexity. Finally, dagma (Bello et al., 2022) reformulates the acyclicity constraint as the log-determinant of a linear transformation of the adjacency matrix. While still asymptotically cubic in complexity, the use of the log-determinant is significantly faster in practice because of widespread optimizations in common linear algebra libraries (Bello et al., 2022, p. 19).
Low-Rank Approximation. Several works extended notears by assuming that the adjacency matrix of the underlying graph does not have full rank, either to reduce the number of trainable parameters (Fang et al., 2020) or to improve computational complexity (Lopez et al., 2022). In this work, we deal with possibly full-rank matrices and do not directly compare with low-rank solutions.
Unconstrained DAG Learning. Few works learn DAGs without explicitly constraining the graph. Charpentier et al. (2022) propose vp-dag, an unconstrained and differentiable strategy to sample DAGs that can be easily integrated in probabilistic causal discovery approaches. Instead, we propose a more general optimization scheme for learning acyclic graphs that is not immediately comparable. In the context of causal discovery, enco (Lippe et al., 2022) decouples a DAG in an adjacency matrix and an edge orientation matrix. However, the authors explicitly parameterize the orientation matrix and prove that it converges to an acyclic orientation whenever the training dataset contains a sufficient number of interventions. Our structure learning proposal tackles instead non-intervened datasets and ensures acyclicity by construction. Similarly to us, nocurl (Yu et al., 2021) proposes a model that decouples the topological ordering from the adjacencies of an acyclic graph. However, the proposed optimization schemes are significantly different. Firstly, their approach extracts the node ordering from a preliminary solution obtained by partially solving the notears constrained optimization problem. Then, they fix the ordering and unconstrainedly optimize only the direct adjacency matrix. On the other hand, cosmo jointly learns priorities and adjacencies while entirely avoiding acyclicity evaluations. We report further discussion on the theoretical comparison with enco and nocurl in Appendix D, and we carefully compare with nocurl empirically in Section 5.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Complexity & Constraint \\ \hline notears(Zheng et al., 2018) & \(O(d^{3})\) & Exact \\ dagma(Bello et al., 2022) & \(O(d^{3})\) & Exact \\ nobears(Lee et al., 2019) & \(O(kd^{2})\) & Approximated \\ tmpi(Zhang et al., 2022) & \(O(kd^{2})\) & Approximated \\ nocurl(Yu et al., 2021) & \(O(d^{2})\dagger\) & Partial \\
**cosmo** & \(\mathbf{O(d^{2})}\) & **None** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary comparison of our proposal, cosmo, with competing approaches. To the best of our knowledge, we are the first to propose a parameterization that enables unconstrained learning of an acyclic graph without trading off on the adjacency matrix rank, the exactness of the acyclicity constraint, or the assumption of observational data. To express computational complexity, we define \(d\) as the number of nodes and \(k\) as the maximum number of iterations of the iterative approaches. [\(\dagger\)]: nocurl requires a preliminary solution obtained by partially solving a cubic-expensive constrained optimization problem.
## 3 Background
Graph Theory. A _directed graph_ is a pair \(D=(V,A)\) of vertices \(V=\{1,\ldots,d\}\) and arcs between them \(A\subseteq V\times V\). A directed _acyclic_ graph (DAG) is a directed graph whose arcs follow a strict partial order on the vertices. In a DAG, the _parents_ of a vertex \(v\in V\) are the set of incoming nodes such that \(\mathrm{pa}(v)=\{u\in V\mid(u,v)\in A\}\) (Bondy and Murty, 2008). We represent a directed graph as a binary adjacency matrix \(\mathbf{A}\in\{0,1\}^{d\times d}\), where \(\mathbf{A}_{uv}\neq 0\iff(u,v)\in A\). Similarly, we define a weighted adjacency matrix as the real matrix \(\mathbf{W}\in\mathds{R}^{d\times d}\), where \(\mathbf{W}_{uv}\neq 0\iff(u,v)\in A\).
Structure Learning. A Structural Equation Model (SEM) models a data-generating process as a set of functions \(f=\{f_{1},\ldots,f_{d}\}\), where \(f_{v}:\mathds{R}^{|\mathrm{pa}(v)|}\to\mathds{R}\) for each variable \(v\in V\) in the DAG (Pearl, 2009). Given a class of functions \(\mathbf{F}\) and a loss \(\mathcal{L}\), notears (Zheng et al., 2020) formalizes non-linear acyclic structure learning through the following constrained optimization problem
\[\min_{f\in\mathbf{F}}\mathcal{L}(f)\;\;\text{s.t.}\;\;\;\mathrm{tr}(e^{W(f) \circ W(f)})-d=0, \tag{1}\]
where \(W(f)\in\mathds{R}^{d\times d}\) is the adjacency matrix representing parent relations between variables in \(f\). In particular, the constraint equals zero if and only if the adjacency matrix \(W(f)\) is acyclic. The authors propose to solve the problem using the Augmented Lagrangian method (Nocedal and Wright, 1999), which in turn requires to solve multiple unconstrained problems and to compute the constraint value at each optimization step. Notably, any causal interpretation of the identified arcs depends on several assumptions on the function class \(\mathbf{F}\) and the loss function \(\mathcal{L}\)(van de Geer and Buhlmann, 2013; Loh and Buhlmann, 2014), which we do not explore in this work.
## 4 Learning Acyclic Orientations with Cosmo
In Subsection 4.1, we propose to parameterize a weighted adjacency matrix as a function of a direct matrix and a smooth orientation matrix. In this way, we effectively express the discrete space of DAGs in a continuous and differentiable manner. Then, in Subsection 4.2, we introduce cosmo, our unconstrained optimization approach to learn acyclic DAGs. Furthermore, we prove an upper bound on the acyclicity of the smooth orientation matrix that connects our formulation to constrained approaches. To ease the presentation, we initially assume linear relations between variables. By doing so, the weighted adjacency matrix is the unique parameter of the problem. However, as with previous structure learning approaches, our proposal easily extends to non-linear models by jointly optimizing a non-linear model and an adjacency matrix either weighting or masking variable dependencies. We report one possible extension of cosmo to non-linear relations in Appendix C.3.
### Smooth Acyclic Orientations
To continuously represent the space of DAGs with \(d=|V|\) nodes, we introduce a priority vector \(\mathbf{p}\in\mathds{R}^{d}\) on its vertices. Consequently, given the priority vector \(\mathbf{p}\) and a strictly positive threshold \(\varepsilon>0\), we define the following strict partial order \(\prec_{\mathbf{p},\varepsilon}\) on the vertex set \(V\)
\[\forall(u,v)\in V\times V\colon\quad u\prec_{\mathbf{p},\varepsilon}v\iff \mathbf{p}_{v}-\mathbf{p}_{u}\geq\varepsilon. \tag{2}\]
In other terms, a vertex \(u\) precedes another vertex \(v\) if and only if the priority of \(v\) is sufficiently larger than the priority of the vertex \(u\). Notably, with a zero threshold \(\varepsilon=0\), the relation would be reflexive and thus not a strict order. On the other hand, whenever \(\varepsilon\) is strictly positive, we can represent a subset of all strict partial orders sufficient to express all possible DAGs.
**Lemma 1**.: _Let \(\mathbf{W}\in\mathds{R}^{d\times d}\) be a real matrix. Then, for any \(\varepsilon>0\), \(\mathbf{W}\) is the weighted adjacency matrix of a DAG if and only if there exist a priority vector \(\mathbf{p}\in\mathds{R}^{d}\) and a real matrix \(\mathbf{H}\in\mathds{R}^{d\times d}\) such that_
\[\mathbf{W}=\mathbf{H}\circ\mathbf{T}_{\prec_{\mathbf{p},\varepsilon}}, \tag{3}\]
_where \(\mathbf{T}_{\prec_{\mathbf{p},\varepsilon}}\in\{0,1\}^{d\times d}\) is a binary orientation matrix such that_
\[\mathbf{T}_{\prec_{\mathbf{p},\varepsilon}}[uv]=\begin{cases}1&\text{if }u\prec_{\mathbf{p},\varepsilon}v\\ 0&\text{otherwise,}\end{cases} \tag{4}\]
_for any \(u,v\in V\)._
Proof.: We report the proof in Appendix A.1.
While priority vectors enable the representation of strict partial orders in a continuous space, the construction of the orientation matrix still requires the non-differentiable evaluation of the inequality between priority differences from Equation 2. To this end, we approximate the comparison of the difference against the threshold \(\varepsilon\), using a _tempered_ sigmoidal function. We refer to such approximation of the orientation matrix as the _smooth_ orientation matrix.
**Definition 1** (Smooth Orientation Matrix).: _Let \(\mathbf{p}\in\mathds{R}^{d}\) be a priority vector, \(\varepsilon>0\) be a strictly positive threshold, and \(t>0\) be a strictly positive temperature. Then, the smooth orientation matrix of the strict partial order \(\prec_{\mathbf{p},\varepsilon}\) is the real matrix \(S_{t,\varepsilon}(\mathbf{p})\in\mathds{R}^{d\times d}\) such that, for any \(u,v\in V\), it holds_
\[S_{t,\varepsilon}(\mathbf{p})_{uv}=\sigma_{t,\varepsilon}(\mathbf{p}_{v}- \mathbf{p}_{u}), \tag{5}\]
_where \(\sigma_{t,\varepsilon}\) is the \(\varepsilon\)-centered tempered sigmoid, defined as_
\[\sigma_{t,\varepsilon}(x)=\frac{1}{1+e^{-(x-\varepsilon)/t}}. \tag{6}\]
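For illustration, the matrix \(S_{t,\varepsilon}(\mathbf{p})\) amounts to a single broadcasted sigmoid over all pairwise priority differences. The following minimal NumPy sketch (ours; function and variable names are assumptions, not taken from any released cosmo code) makes the construction explicit:

```python
import numpy as np

def smooth_orientation(p: np.ndarray, t: float, eps: float) -> np.ndarray:
    """Smooth orientation matrix of Eq. (5): S[u, v] = sigmoid((p[v] - p[u] - eps) / t)."""
    diff = p[None, :] - p[:, None]  # diff[u, v] = p[v] - p[u], shape (d, d)
    return 1.0 / (1.0 + np.exp(-(diff - eps) / t))
```

As \(t\) decreases, the entries approach the binary orientation matrix of Equation 4; for very small temperatures a numerically stable sigmoid (e.g., `scipy.special.expit`) is preferable.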
Intuitively, the threshold \(\varepsilon\) shifts the center of the sigmoid and breaks the symmetry whenever two variables _approximately_ have the same priority. The temperature parameter \(t>0\) regulates instead the steepness of the sigmoid. Because of the asymmetry introduced by the threshold, in the limit of the temperature to zero, the zero-entries of a smooth orientation matrix coincide with the zero-entries of the corresponding orientation matrix (Figure 2). Therefore, we prove that any directed acyclic graph can be represented as the element-wise product of a directed adjacency matrix and a smooth orientation. Further, any directed graph resulting from this decomposition is acyclic.
**Theorem 1**.: _Let \(\mathbf{W}\in\mathds{R}^{d\times d}\) be a real matrix. Then, for any \(\varepsilon>0\), \(\mathbf{W}\) is the weighted adjacency matrix of a DAG if and only if there exist a priority vector \(\mathbf{p}\in\mathds{R}^{d}\) and a real matrix \(\mathbf{H}\in\mathds{R}^{d\times d}\) such that_
\[\mathbf{W}=\mathbf{H}\circ\lim_{t\to 0}S_{t,\varepsilon}(\mathbf{p}), \tag{7}\]
_where \(S_{t,\varepsilon}(\mathbf{p})\) is the smooth orientation matrix of \(\prec_{\mathbf{p},\varepsilon}\)._
Proof.: We report the proof in Appendix A.2.
### Learning Adjacencies and Orientations
Given our definition of smooth acyclic orientation, we can effectively parameterize the space of DAGs as a continuous function of a weighted adjacency matrix \(\mathbf{H}\in\mathds{R}^{d\times d}\) and a priority vector \(\mathbf{p}\in\mathds{R}^{d}\). Therefore, the computational complexity of our solution reduces to the construction of the adjacency matrix \(\mathbf{W}=\mathbf{H}\circ S_{t,\varepsilon}(\mathbf{p})\), which can be achieved in \(O(d^{2})\) time and space per optimization step by computing each arc as \(\mathbf{W}_{uv}=\mathbf{H}_{uv}\cdot\sigma((\mathbf{p}_{v}-\mathbf{p}_{u}-\varepsilon)/t)\). In the literature, nocurl proposed a similar model, motivated by Hodge theory (Hodge, 1989), where each arc is modeled as \(\mathbf{W}_{uv}=\mathbf{H}_{uv}\cdot\mathrm{ReLU}(\mathbf{p}_{v}-\mathbf{p}_{u})\). To avoid a significant performance drop, their formulation requires a preliminary solution from a constrained optimization problem and does not jointly learn the parameters corresponding to our adjacencies and priorities. In the following, we describe how cosmo effectively reduces to an unconstrained problem and avoids evaluating acyclicity altogether.
Figure 2: (_left_) With infinite temperature, the sigmoid function is constant and connects all vertices. (_center_) Given two nodes, for positive temperatures the smooth orientation matrix has larger values on the arcs respecting the priority ordering. (_right_) In the limit of the temperature to zero, the smooth orientation matrix contains non-zero entries if and only if the arc respects the order, i.e., it directs a node to another with sufficiently higher priority.
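As a quick sanity check of the parameterization (a sketch of ours, assuming NumPy and SciPy), one can build \(\mathbf{W}=\mathbf{H}\circ S_{t,\varepsilon}(\mathbf{p})\) at a low temperature and verify that the notears quantity \(\operatorname{tr}(e^{\mathbf{W}\circ\mathbf{W}})-d\) of Equation 1 is numerically negligible:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)
d, t, eps = 10, 1e-3, 0.1

H = rng.normal(size=(d, d))                     # unconstrained directed weights
p = rng.normal(scale=eps / np.sqrt(2), size=d)  # priority vector
S = expit((p[None, :] - p[:, None] - eps) / t)  # smooth orientation, O(d^2)
W = H * S                                       # weighted adjacency, Eq. (7) in the limit

print(np.trace(expm(W * W)) - d)  # ~0: the reconstructed graph is acyclic
```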
Temperature Annealing. The smooth orientation matrix \(S_{t,\varepsilon}(\mathbf{p})\) represents an acyclic orientation only in the limit of the temperature \(t\to 0\). Nonetheless, the gradient of the loss vanishes as the temperature tends to zero. In fact, for an arbitrary loss function \(\mathcal{L}\), we can decompose the gradient of each component \(\mathbf{p}_{u}\) of the priority vector as follows
\[\frac{\partial\mathcal{L}(\mathbf{W})}{\partial\mathbf{p}_{u}} =\sum_{v\in V}\frac{\partial\mathcal{L}(\mathbf{W})}{\partial\mathbf{W}_{uv}}\cdot\frac{\partial\mathbf{W}_{uv}}{\partial\mathbf{p}_{u}}+\frac{\partial\mathcal{L}(\mathbf{W})}{\partial\mathbf{W}_{vu}}\cdot\frac{\partial\mathbf{W}_{vu}}{\partial\mathbf{p}_{u}}, \tag{8}\] \[\frac{\partial\mathbf{W}_{uv}}{\partial\mathbf{p}_{u}} =-\frac{\mathbf{H}_{uv}}{t}\sigma_{t,\varepsilon}(\mathbf{p}_{v}-\mathbf{p}_{u})(1-\sigma_{t,\varepsilon}(\mathbf{p}_{v}-\mathbf{p}_{u})), \tag{9}\] \[\frac{\partial\mathbf{W}_{vu}}{\partial\mathbf{p}_{u}} =\frac{\mathbf{H}_{vu}}{t}\sigma_{t,\varepsilon}(\mathbf{p}_{u}-\mathbf{p}_{v})(1-\sigma_{t,\varepsilon}(\mathbf{p}_{u}-\mathbf{p}_{v})). \tag{10}\]
Therefore, by the properties of the sigmoidal function \(\sigma_{t,\varepsilon}\), both \(\partial\mathbf{W}_{uv}/\partial\mathbf{p}_{u}\) and \(\partial\mathbf{W}_{vu}/\partial\mathbf{p}_{u}\) tend to zero for \(t\to 0\). To handle this issue, we tackle the optimization problem by progressively reducing the temperature during training. In practice, we perform cosine annealing from an initial positive value \(t_{\text{start}}\) to a significantly lower target value \(t_{\text{end}}\approx 0\). We further motivate our choice by showing the existence of an upper bound on the acyclicity of the orientation matrix that is a monotone increasing function of the temperature. Therefore, temperature annealing effectively decreases the acyclicity upper bound during training of the _smooth_ orientation and, consequently, of the adjacencies.
**Theorem 2**.: _Let \(\mathbf{p}\in\mathds{R}^{d}\) be a priority vector, \(\varepsilon>0\) be a strictly positive threshold, and \(t>0\) be a strictly positive temperature. Then, given the smooth orientation matrix \(S_{t,\varepsilon}(\mathbf{p})\in\mathds{R}^{d\times d}\), it holds_
\[h(S_{t,\varepsilon}(\mathbf{p}))\leq e^{d\alpha}-1, \tag{11}\]
_where \(h(S_{t,\varepsilon}(\mathbf{p}))=\operatorname{tr}(e^{S_{t,\varepsilon}( \mathbf{p})})-d\) is the notears acyclicity constraint and \(\alpha=\sigma(-\varepsilon/t)\)._
Proof.: We report the proof in Appendix B.
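The statement is easy to probe numerically. The sketch below (ours, assuming SciPy) evaluates \(h(S_{t,\varepsilon}(\mathbf{p}))\) and the bound \(e^{d\alpha}-1\) for a random priority vector at decreasing temperatures, illustrating how annealing tightens the acyclicity upper bound:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import expit

rng = np.random.default_rng(0)
d, eps = 20, 0.1
p = rng.normal(scale=eps / np.sqrt(2), size=d)

for t in (0.5, 0.1, 0.01):
    S = expit((p[None, :] - p[:, None] - eps) / t)  # smooth orientation
    h = np.trace(expm(S)) - d                       # h(S), as in Theorem 2
    alpha = expit(-eps / t)                         # alpha = sigma(-eps / t)
    print(f"t={t}: h(S)={h:.3e}  bound={np.exp(d * alpha) - 1:.3e}")
```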
Direct Matrix Regularization. To counteract the discovery of spurious arcs, we perform feature selection by applying L1 regularization on the adjacency matrix \(\mathbf{H}\). Further, during the annealing procedure, even if a vertex \(u\) precedes \(v\) in the partial order \(\prec_{\mathbf{p},\varepsilon}\), the weight of the opposite arc \(v\to u\) in the smooth orientation matrix will only be approximately zero. Therefore, sufficiently large values of the weighted adjacency matrix \(\mathbf{H}\) might still lead to undesirable cyclic paths during training. To avoid this issue, we regularize the L2-norm of the non-oriented adjacency matrix.
Priority Vector Regularization. Besides vanishing for small temperature values, the partial derivatives in Equations 9 and 10 also tend to zero whenever the priority distances \(|\mathbf{p}_{v}-\mathbf{p}_{u}|\) tend to infinity. Therefore, we regularize the L2-norm of the priority vector. For the same reason, we initialize each component from the normal distribution \(\mathbf{p}_{u}\sim\mathcal{N}(0,\varepsilon^{2}/2)\), so that each difference follows the normal distribution \(\mathbf{p}_{v}-\mathbf{p}_{u}\sim\mathcal{N}(0,\varepsilon^{2})\). We provide further details on initialization in Appendix A.3.
Optimization Problem. We formalize cosmo as the differentiable and unconstrained problem
\[\min_{\mathbf{H}\in\mathds{R}^{d\times d},\mathbf{p}\in\mathds{R}^{d}} \mathcal{L}(\mathbf{H}\circ S_{t,\varepsilon}(\mathbf{p}))+\lambda_{1}\| \mathbf{H}\|_{1}+\lambda_{2}\|\mathbf{H}\|_{2}+\lambda_{p}\|\mathbf{p}\|_{2}, \tag{12}\]
where \(\lambda_{1},\lambda_{2},\lambda_{p}\) are the regularization coefficients for the adjacencies and the priorities. As the regularization coefficients \(\boldsymbol{\lambda}=\{\lambda_{1},\lambda_{2},\lambda_{p}\}\), the initial temperature \(t_{\text{start}}\), the target temperature \(t_{\text{end}}\), and the shift \(\varepsilon\) are hyperparameters of our proposal. Nonetheless, Theorem 2 can guide the choice of the final temperature value and the shift to guarantee a maximum tolerance on the acyclicity of the smooth orientation matrix.
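For completeness, a minimal PyTorch sketch of one way to optimize Equation 12 on a linear SEM follows; the reconstruction loss, initialization details, learning rate, number of epochs, and the squared form of the L2 penalties are illustrative assumptions of ours rather than the exact training setup:

```python
import math
import torch

def cosmo_linear(X, epochs=2000, t_start=1.0, t_end=1e-4, eps=0.1,
                 l1=0.01, l2=0.01, lp=0.01, lr=1e-2):
    """X: (n, d) data tensor. Returns the learned weighted adjacency matrix."""
    n, d = X.shape
    H = torch.zeros(d, d, requires_grad=True)      # direct weighted adjacency
    p = (eps / math.sqrt(2)) * torch.randn(d)      # p_u ~ N(0, eps^2 / 2)
    p.requires_grad_(True)
    opt = torch.optim.Adam([H, p], lr=lr)
    for epoch in range(epochs):
        # Cosine annealing of the temperature from t_start to t_end.
        t = t_end + 0.5 * (t_start - t_end) * (1 + math.cos(math.pi * epoch / epochs))
        S = torch.sigmoid((p[None, :] - p[:, None] - eps) / t)
        W = H * S                                  # acyclic in the limit t -> 0
        mse = ((X - X @ W) ** 2).mean()            # linear-SEM reconstruction loss
        loss = mse + l1 * H.abs().sum() + l2 * (H ** 2).sum() + lp * (p ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    S = torch.sigmoid((p[None, :] - p[:, None] - eps) / t_end)
    return (H * S).detach()
```

The returned matrix can then be thresholded as described in Section 5.1 to obtain a binary DAG.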
## 5 Experiments
We present an experimental comparison of cosmo against related acyclic structure learning approaches. Our method operates on possibly full-rank graphs and ensures the exact acyclicity of the solution. Therefore, we focus on algorithms providing the same guarantees under the same conditions. Namely, we compare against the structure learning performance and execution time of notears (Zheng et al., 2018), nocurl (Yu et al., 2021), and dagma (Bello et al., 2022). As previously discussed, nocurl proposes a similar model with a substantially different optimization scheme. To highlight the importance of both our parameterization and optimization scheme, we also compare with an entirely unconstrained variant of the algorithm where we directly train the variable ordering without any preliminary solution. In the results, we refer to this variant as nocurl-u.
We base our empirical analysis on the testbed originally introduced by Zheng et al. (2018) and then adopted as a benchmark by all follow-up methods. In particular, we test continuous approaches on randomly generated Erdos-Renyi (ER) and scale-free (SF) graphs of increasing size and for different exogenous noise types. For each method, we perform structure learning by minimizing the Mean Squared Error (MSE) of a model on a synthetic dataset using the Adam optimizer (Kingma and Ba, 2015). In Appendix C, we report further details on the implementation of cosmo, the baselines, and the datasets.
### Evaluation Overview
In line with previous work, we retrieve the binary adjacency matrix by thresholding the learned weights against a small threshold \(\omega=0.3\) (Zheng et al., 2018). While cosmo guarantees the solution to be acyclic, we maintain the thresholding step to prune correctly oriented but spurious arcs. Then, we measure the Normalized Hamming Distance (NHD) between the solution and the ground truth as the sum of missing, extra, or incorrect edges divided by the number of nodes. In general, testing weights against a fixed threshold might limit the retrieval of significant arcs with small coefficients in the true model (Xu et al., 2022). For this reason, we also compute the Area under the ROC curve (AUC), which describes the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) for increasing values of the weight threshold (Heinze-Deml et al., 2018). Due to space limitations, we report in the main body only the AUC results, the most comprehensive score. Detailed results for other metrics, including NHD, are provided in the Appendices.
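For reference, a simple implementation of the thresholding step and of one plausible reading of the NHD (here a reversed arc counts as one missing plus one extra arc; this reading, like the function name, is our assumption) is:

```python
import numpy as np

def nhd(W_true: np.ndarray, W_est: np.ndarray, omega: float = 0.3) -> float:
    """Normalized Hamming Distance between a ground-truth DAG and an estimate."""
    d = W_true.shape[0]
    A_true = (np.abs(W_true) > 0).astype(int)    # ground-truth arcs
    A_est = (np.abs(W_est) > omega).astype(int)  # thresholded estimate
    return float(np.abs(A_true - A_est).sum()) / d
```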
### Results Discussion
By looking at the AUC of the learned graphs, we observe that cosmo consistently achieves results that are comparable and competitive with those from constrained-optimization solutions such as dagma or notears across different graph sizes and noise types (Table 2). This empirically confirms the approximation properties of cosmo, which can reliably discover DAGs without resorting to explicit acyclicity constraints.
Furthermore, cosmo performs better than nocurl on most datasets. We recall that the latter is the only existing structure learning approach combining constrained and unconstrained optimization. As pointed out in Yu et al. (2021), we also observe that the discovery performance of nocurl drops when optimizing the variable ordering instead of inferring it from a preliminary solution. The fact that cosmo outperforms nocurl-u on all datasets highlights the substantial role and effect of our _smooth_ orientation formulation and our optimization scheme for learning the topological
| \(d\) | Algorithm | Gauss AUC | Gauss Time | Exp AUC | Exp Time | Gumbel AUC | Gumbel Time |
|---|---|---|---|---|---|---|---|
| 30 | cosmo | _0.984 ± 0.02_ | **88 ± 2** | **0.989 ± 0.01** | **89 ± 3** | 0.914 ± 0.10 | **87 ± 2** |
| 30 | dagma | **0.985 ± 0.01** | 781 ± 192 | _0.986 ± 0.02_ | 744 ± 75 | _0.973 ± 0.02_ | 787 ± 86 |
| 30 | nocurl | 0.967 ± 0.01 | 822 ± 15 | 0.956 ± 0.02 | 826 ± 24 | 0.915 ± 0.04 | 826 ± 17 |
| 30 | nocurl-u | 0.694 ± 0.06 | _226 ± 5_ | 0.694 ± 0.05 | _212 ± 5_ | 0.678 ± 0.05 | _212 ± 5_ |
| 30 | notears | 0.973 ± 0.02 | 5193 ± 170 | 0.966 ± 0.03 | 5579 ± 284 | **0.981 ± 0.01** | 5229 ± 338 |
| 100 | cosmo | 0.961 ± 0.03 | **99 ± 2** | _0.985 ± 0.01_ | **99 ± 2** | _0.973 ± 0.01_ | **98 ± 1** |
| 100 | dagma | **0.982 ± 0.01** | 660 ± 141 | **0.986 ± 0.01** | 733 ± 109 | **0.986 ± 0.01** | 858 ± 101 |
| 100 | nocurl | 0.962 ± 0.01 | 1664 ± 14 | 0.950 ± 0.02 | 1655 ± 28 | 0.962 ± 0.01 | 1675 ± 34 |
| 100 | nocurl-u | 0.682 ± 0.05 | _267 ± 10_ | 0.693 ± 0.05 | _242 ± 47_ | 0.663 ± 0.04 | _247 ± 9_ |
| 100 | notears | _0.963 ± 0.01_ | 11000 ± 339 | 0.972 ± 0.01 | 10880 ± 366 | 0.969 ± 0.00 | 11889 ± 343 |
| 500 | cosmo | _0.933 ± 0.01_ | **436 ± 81** | **0.986 ± 0.00** | **390 ± 102** | **0.982 ± 0.01** | **410 ± 106** |
| 500 | dagma | **0.980 ± 0.00** | 2485 ± 365 | _0.984 ± 0.01_ | 2575 ± 469 | _0.980 ± 0.00_ | 2853 ± 218 |
| 500 | nocurl-u | 0.683 ± 0.05 | _1546 ± 304_ | 0.715 ± 0.03 | _1488 ± 249_ | 0.728 ± 0.05 | _1342 ± 209_ |

Table 2: Experimental results on linear ER-4 acyclic graphs with different noise terms and sizes. For each algorithm, we report the mean and standard deviation over five independent runs of the AUC metric and the time in seconds. We highlight in bold the **best** result and in italics the _second best_ result. The reported duration of nocurl includes the time to retrieve the necessary preliminary solution using an acyclicity constraint. We denote as nocurl-u the quadratic version of nocurl. Complete results on additional metrics and graph types are in Appendix E.
ordering of variables from data in an unconstrained way. Overall, our proposal achieves, on average, the best or the second-best result for the AUC metric across all the analyzed datasets and correctly classifies arcs also for large graphs (Figure 4). Further, as we extensively report in Appendix E for different graph sizes and noise terms, our non-linear extension obtains comparable performance to dagma-mlp in significantly less time.
Unsurprisingly, due to its quadratic computational complexity, cosmo is significantly faster than constrained methods on all datasets, especially for increasing graph sizes. Notably, despite employing early stopping conditions for convergence, all competing methods incur the cost of solving multiple optimization problems with higher computational cost per step (Figure 3). In particular, while the unconstrained variant nocurl-u has a comparable per-epoch average time cost, for a substantially worse graph recovery performance, nocurl pays the cost of a preliminary solution computed with an acyclicity constraint. Therefore, already for graphs with 500 nodes, only cosmo, dagma, and nocurl-u return a solution before hitting our wall time limit. Finally, we observe that the cubic computational complexity of dagma significantly emerges when dealing with large graphs. Therefore, despite the effective underlying optimizations of the log-determinant computation, dagma's acyclicity constraint still affects scalability.
Given the proposed parameterization, cosmo requires particular care in the choice of the regularization hyperparameters. In particular, we carefully considered the importance of regularizing the priority vector, which constitutes one of our main differences with previous structure learning approaches. We found that our hyperparameter validation procedure consistently returned relatively low priority regularization values (\(\lambda_{p}\approx\) 1e-3). However, while it might benefit structure learning for larger graphs, ablating priority regularization results in a non-negligible performance drop for smaller graphs (Table 3).
## 6 Conclusion
We introduced cosmo, an unconstrained and continuous approach for learning an acyclic graph from data. Our novel definition of _smooth_ orientation matrix ensures acyclicity of the solution without requiring the evaluation of computationally expensive constraints. Furthermore, we prove that annealing the temperature of our smooth acyclic orientation corresponds to decreasing an upper bound on the widely adopted acyclicity relaxation from notears. Overall, our empirical analysis showed that cosmo performs comparably to constrained methods in significantly less time. Notably, our proposal significantly outperforms the only existing work _partially_ optimizing in the space of
Figure 4: Visualization of the weighted adjacency matrix learned by cosmo (ER4, Gaussian noise, 100 nodes) against the ground truth. We also report the difference between the ground-truth and the learned weights. By thresholding the learned weighted adjacency matrix, cosmo correctly classifies most true (TPR = 0.96) and non-existing arcs (FPR = 0.01), resulting in a limited number of errors (NHD = 0.93) due to the narrow difference in the retrieved weights.
Figure 3: Average duration of a training epoch for an increasing number of nodes on five independent runs on random ER-4 DAGs.
DAGs, nocurl, and its completely unconstrained variant nocurl-u. Therefore, the analysis highlights the role of our parameterization, which does not require a preliminary solution and provably returns a DAG without ever evaluating acyclicity.
In recent years, several authors debated using continuous acyclic learners as full-fledged causal discovery algorithms (Reisach et al., 2021; Kaiser and Sipos, 2022; Ng et al., 2023). In this context, our empirical analysis of cosmo shares the same limitations as existing baselines and, exactly like them, might not be significant in the causal discovery scenario. However, acyclic optimization techniques are a fundamental component for continuous discovery approaches (Brouillard et al., 2020; Lorch et al., 2022). By reducing the time needed to optimize an acyclic causal graph by an order of magnitude, cosmo opens up more scalable continuous causal discovery strategies, without sacrificing -- as demonstrated in this work -- the theoretical guarantees on DAG approximation capabilities.
|
2301.03371 | Learning Optimal Phase-Shifts of Holographic Metasurface Transceivers | Holographic metasurface transceivers (HMT) is an emerging technology for
enhancing the coverage and rate of wireless communication systems. However,
acquiring accurate channel state information in HMT-assisted wireless
communication systems is critical for achieving these goals. In this paper, we
propose an algorithm for learning the optimal phase-shifts at a HMT for the
far-field channel model. Our proposed algorithm exploits the structure of the
channel gains in the far-field regions and learns the optimal phase-shifts in
presence of noise in the received signals. We prove that the probability that
the optimal phase-shifts estimated by our proposed algorithm deviate from the
true values decays exponentially in the number of pilot signals. Extensive
numerical simulations validate the theoretical guarantees and also demonstrate
significant gains as compared to the state-of-the-art policies. | Debamita Ghosh, Manjesh K. Hanawal, Nikola Zlatanov | 2022-12-12T12:43:45Z | http://arxiv.org/abs/2301.03371v1 | # Learning Optimal Phase-Shifts of Holographic Metasurface Transceivers
###### Abstract
Holographic metasurface transceivers (HMT) is an emerging technology for enhancing the coverage and rate of wireless communication systems. However, acquiring accurate channel state information in HMT-assisted wireless communication systems is critical for achieving these goals. In this paper, we propose an algorithm for learning the optimal phase-shifts at a HMT for the far-field channel model. Our proposed algorithm exploits the structure of the channel gains in the far-field regions and learns the optimal phase-shifts in presence of noise in the received signals. We prove that the probability that the optimal phase-shifts estimated by our proposed algorithm deviate from the true values decays exponentially in the number of pilot signals. Extensive numerical simulations validate the theoretical guarantees and also demonstrate significant gains as compared to the state-of-the-art policies.
Holographic Metasurface Transceivers, Channel State Information, Uniform Exploration
## I Introduction
Future wireless network technologies, namely beyond-5G and 6G, have focused on millimeter wave (mmWave) and TeraHertz (THz) communications technologies as possible solutions to the ever-growing demands for higher data rates and lower latency. However, mmWave and THz communications have challenges that need to be addressed before this technology is adopted [1, 2]. One such major challenge is signal deterioration due to reflections and absorption.
A possible solution for the signal deterioration are base stations (BSs) with massive antenna arrays that can provide large beamforming gains and thereby compensate for the signal deterioration [3]. However, implementing a BS with a massive antenna array is itself challenging due to the high hardware costs. Holographic Metasurface Transceivers (HMTs) are introduced as a promising solution for building a massive antenna array [4, 5]. A HMT is comprised of a large number of metamaterial elements densely deployed into a limited surface area in order to form a spatially continuous transceiver aperture. These metamaterial elements at the HMT act as phase-shifting antennas, where each phase-shifting element of the HMT can change the phase of the transmitted/received signal and thereby beamform towards the desired directions where the users are located [6]. Due to these continuous apertures, HMTs can be seen as an extension of the traditional massive antenna arrays with discrete antennas to continuous reflecting surfaces [6].
In this paper, we consider the HMT-assisted wireless system illustrated in Fig. 1, where a HMT acts as a BS that serves multiple users. The performance of this system is dependent on channel state information (CSI) estimates at the HMT, which are used for accurate beamforming towards the users. The authors in [7] and [8] have studied the effect of HMT-assisted systems on enhancing the communication performance under the assumption of perfect CSI. However, perfect CSI is not available in practice: the CSI has to be estimated via pilot signals, which results in inaccurate CSI estimates at the HMT.
The aim of this paper is to obtain accurate CSI estimates at the HMT, which are in turn used to set the optimal phase-shifts at the HMT that maximize the data rate to the users when the users are located in the far-field. To this end, we exploit the structure of the far-field channel model between the HMT and the users to show that the optimal phase-shifts at the HMT can be obtained from five samples of the received pilot signals at the HMT in a noiseless environment. We then use this approach to develop a learning algorithm that learns the optimal phase-shifts from the received pilot signals at the HMT in a noisy environment. Finally, we provide theoretical guarantees for our learning algorithm. Specifically, we prove that the probability that the phase-shifts generated by our algorithm deviate by more than \(\epsilon\) from the optimal phase-shifts is small and decays as the number of pilot symbols increases. The error analysis is based on tail probabilities of the non-central Chi-squared distribution.
In summary, our main contributions are as follows:
* We propose an efficient learning algorithm for estimating the optimal phase-shifts at an HMT in the presence of noise for the case when the users that the HMT is serving are located at the far-field region.
* We prove that the probability that the phase-shifts generated by our algorithm deviate by more than \(\epsilon\) from the optimal phase-shifts is small and decays exponentially as the number of pilots used for estimation increases.
* We show numerically that the performance of the proposed algorithm significantly outperforms existing CSI estimation algorithms.
### _Related Works_
Several channel estimation schemes, which are proposed for the massive antenna arrays, are also applicable to the considered HMT including exhaustive search [9], hierarchical search [10, 11], and compressed sensing (CS) [11]. As the exhaustive search in [9] significantly increases the training
overhead, the authors in [10] and [11] proposed the hierarchical search based on a predefined codebook as an improvement over the exhaustive search. The hierarchical schemes, in general, may incur high training overhead and system latency since they require non-trivial coordination among the transmitter and the user [11]. On the other hand, the proposed CS-based channel estimation scheme in [11] provides trade-offs between accuracy of estimation and training overhead at different computational costs.
On the other hand, CSI estimation schemes developed specifically for HMTs can be found in [12] and [13]. The authors in [12] proposed a least-square estimation based approach to study the channel estimation problem for the uplink between a single user and the BS equipped with a holographic surface with a large number of antennas. However, the authors require additional knowledge of the antenna array geometry to reduce the pilot overhead required by the channel estimation, and hence the computational complexity scales up with the number of antennas at the BS. In [13], the authors proposed a scheme for the estimation of the far-field channel between a HMT and a user that requires only five pilots for perfect estimation in the noise-free environment. In the noisy case, the authors of [13] proposed an iterative algorithm that efficiently estimates the far-field channel. Unlike the existing works, the training overhead and the computational cost of the proposed scheme in [13] do not scale with the number of phase-shifting elements at the HMT. The iterative algorithm in [13] significantly outperforms the hierarchical and CS based schemes. However, the authors in [13] did not provide any theoretical guarantees for their proposed algorithm. Motivated by [13], in this work, we propose an algorithm which outperforms the one in [13], and, in addition, we also provide theoretical guarantees for our proposed algorithm.
This paper is organized as follows. The system and channel models for the HMT communication system are given in Sec. II. The proposed algorithm for learning the optimal phase-shifts is given in Sec. III and its theoretical guarantee is provided in Sec. IV. Numerical evaluation of the proposed algorithm is provided in Sec. V. Finally, Sec. VI concludes the paper.
## II System and Channel Models
We consider a HMT-assisted wireless communication system, shown in Fig. 1, where an HMT communicates with multiple users in the mmWave band. We assume that there is a Line of Sight (LoS) between the HMT and each user. As a result, when modeling the far-field channel, we only take into account the LoS path, since its power is orders of magnitude higher than that of the non-line-of-sight (NLoS) paths [14]. The NLoS components are incorporated in the noise. We assume that the users send orthogonal pilots to the HMT for channel estimation. Based on the estimated CSI at the HMT for each user, the HMT sends data to the users. Hence, the data rate from the HMT to the users is directly dependent on the accuracy of the CSI estimates at the HMT. Since our main goal in this paper is accurate CSI estimation at the HMT, and the users send orthogonal pilots, in the rest of the paper we focus on the CSI estimation between the HMT and a typical user.
### _HMT Model_
The HMT has a rectangular surface of size \(L_{x}\times L_{y}\), where \(L_{x}\) and \(L_{y}\) are the width and the length of the surface, respectively. The HMT's surface is comprised of a large number of sub-wavelength phase-shifting elements, where each element is assumed to be a square of size \(L_{e}\times L_{e}\) and can change the phase of the transmit/receive signal independently of the rest of the elements. Let \(d_{r}\) be the distance between two neighboring phase-shifting elements. The total number of phase-shifting elements of the HMT is given by \(M=M_{x}\times M_{y}\), where \(M_{x}=L_{x}/d_{r}\) and \(M_{y}=L_{y}/d_{r}\). Without loss of generality, we assume that the HMT lies in the \(x-y\) plane of a Cartesian coordinate system, where the center of the surface is at the origin of the coordinate system. Assuming \(M_{x}\) and \(M_{y}\) are odd numbers, the position of the \((m_{x},m_{y})^{th}\) phase-shifting element in the Cartesian coordinate system is given as \((x,y)=(m_{x}d_{r},m_{y}d_{r})\), where \(m_{x}\in\left\{-\frac{M_{x}-1}{2},\ldots,\frac{M_{x}-1}{2}\right\}\) and \(m_{y}\in\left\{-\frac{M_{y}-1}{2},\ldots,\frac{M_{y}-1}{2}\right\}\). When \(M_{x}\) or \(M_{y}\) is even, the position of the \((m_{x},m_{y})^{th}\) element can be appropriately defined.
### _Channel Model_
Consider the channel between the \((m_{x},m_{y})^{th}\) phase-shifting element at the HMT and the typical user. Let the beamforming weight imposed by the \((m_{x},m_{y})^{th}\) phase-shifting element at the HMT be \(\Gamma_{m_{x}m_{y}}=e^{i\beta_{m_{x}m_{y}}}\), where \(\beta_{m_{x}m_{y}}\) is the phase shift at the \((m_{x},m_{y})^{th}\) element. Let \(\lambda\) denote the wavelength of the carrier frequency, \(k_{0}=\frac{2\pi}{\lambda}\) be the wave number, \(d_{0}\) be the distance between the user and the center of the HMT, and let \(F_{m_{x}m_{y}}\) denote the effect of the size and power radiation pattern of the \((m_{x},m_{y})^{th}\) phase-shifting element on the channel coefficient [15]. Due to the far-field assumptions, the radiation patterns of all the phase-shifting elements of the HMT are identical, i.e., \(F_{m_{x}m_{y}}=F\), \(\forall m_{x},m_{y}\). Finally, let \(\theta\) and \(\phi\) denote the elevation and azimuth angles of the impinging wave from the user to the center of the HMT, see Fig. 2.
Now, if the phase-shift imposed by the \((m_{x},m_{y})^{th}\) element, \(\beta_{m_{x},m_{y}}\), is set to
\[\beta_{m_{x}m_{y}}=-\mod(k_{0}d_{r}(m_{x}\beta_{1}+m_{y}\beta_{2}),2\pi), \forall m_{x},m_{y},\]
Fig. 1: The HMT-assisted wireless communication system [13].
where \(\beta_{1}\) and \(\beta_{2}\) are the phase-shift parameters [13, 16, 17], which are the only degrees of freedom within the phase-shift \(\beta_{m_{x}m_{y}}\), then the HMT-user channel in the far-field is approximated accurately by [13, 16, 17]

\[H(\beta_{1},\beta_{2})=\left(\frac{\sqrt{F}\lambda e^{-jk_{0}d_{0}}}{4\pi d_{0}}\right)L_{x}L_{y}\,\text{sinc}\left(K_{x}\pi(\alpha_{1}-\beta_{1})\right)\text{sinc}\left(K_{y}\pi(\alpha_{2}-\beta_{2})\right), \tag{1}\]
where \(K_{x}=\frac{L_{x}}{\lambda}\), \(K_{y}=\frac{L_{y}}{\lambda}\), \(\alpha_{1}=\sin(\theta)\cos(\phi),\alpha_{2}=\sin(\theta)\sin(\phi)\), and \(\text{sinc}(x)=\frac{\sin(x)}{x}\). Please note that \(\alpha_{1}\in[-1,1]\) and \(\alpha_{2}\in[-1,1]\), and their values depend on the location of the user, i.e., on \(\theta\) and \(\phi\).
From (1), it is clear that the absolute value of the HMT-user channel is maximized when the two sinc functions attain their maximum values, which occurs when the phase-shifting parameters, \(\beta_{1}\) and \(\beta_{2}\), are set to \(\beta_{1}=\alpha_{1}\) and \(\beta_{2}=\alpha_{2}\), where \((\alpha_{1},\alpha_{2})\) are unknown to the HMT since they depend on the location of the user. Therefore, in the far-field case, the problem of finding the optimal phase-shifts of the elements at the HMT reduces to estimating the two parameters, \(\alpha_{1}\) and \(\alpha_{2}\) at the HMT.
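To make the model concrete, the following Python sketch evaluates the channel in (1). The helper names are ours, \(F=1\) is a simplifying assumption, and the default geometry follows the simulation parameters used later in Table I:

```python
import numpy as np

def sinc(x):
    # The paper's unnormalized sinc(x) = sin(x)/x; np.sinc(t) = sin(pi t)/(pi t).
    return np.sinc(x / np.pi)

def channel(beta1, beta2, alpha1, alpha2, Lx=1.0, Ly=1.0, lam=0.01,
            F=1.0, d0=200.0):
    """Far-field HMT-user channel of Eq. (1); a sketch with our own naming."""
    k0 = 2 * np.pi / lam
    Kx, Ky = Lx / lam, Ly / lam
    gain = np.sqrt(F) * lam * np.exp(-1j * k0 * d0) / (4 * np.pi * d0) * Lx * Ly
    return gain * sinc(Kx * np.pi * (alpha1 - beta1)) * sinc(Ky * np.pi * (alpha2 - beta2))

# |H(beta1, beta2)| peaks at (beta1, beta2) = (alpha1, alpha2), cf. Remark 1.
print(abs(channel(0.68, -0.45, 0.68, -0.45)))
```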
**Remark 1**.: Fig. 3 shows an example of \(|H(\beta_{1},\beta_{2})|\) as a function of \((\beta_{1},\beta_{2})\). As can be seen from Fig. 3, the graph of \(|H(\beta_{1},\beta_{2})|\) hits zero periodically and has several lobes. The optimal value \((\alpha_{1},\alpha_{2})=(0.68,-0.45)\) is attained at the central lobe which has the highest peak and is attained for \((\beta_{1}^{*},\beta_{2}^{*})=(\alpha_{1},\alpha_{2})=(0.68,-0.45)\).
## III Proposed Channel Estimation Strategy
In this section, we propose an algorithm that estimates the optimal phase-shifting parameters \(\beta_{1}\) and \(\beta_{2}\) that maximize \(|H(\beta_{1},\beta_{2})|\) in (1) in the presence of noise.
### _Problem Formulation_
In the channel estimation procedure, the user sends a pilot symbol \(x_{p}=\sqrt{P}\) to the HMT, where \(P\) is the pilot transmit power. Then, the received signal at the HMT for fixed phase-shifting parameters \((\beta_{1},\beta_{2})\), denoted by \(y(\beta_{1},\beta_{2})\), is given by
\[y(\beta_{1},\beta_{2})=\sqrt{P}\times H(\beta_{1},\beta_{2})+\zeta, \tag{2}\]
where \(\zeta\) is the complex-valued additive white Gaussian noise (AWGN) with zero mean and variance \(\sigma^{2}\) at the HMT. The received signal in (2) is then squared in order to obtain the received signal squared, denoted by \(r(\beta_{1},\beta_{2})\), and given by
\[r(\beta_{1},\beta_{2})=|y(\beta_{1},\beta_{2})|^{2}=\left|\sqrt{P}\times H( \beta_{1},\beta_{2})+\zeta\right|^{2}. \tag{3}\]
**Objective:** Our goal is to identify the optimal phase-shifting parameters, denoted by \((\beta_{1}^{*},\beta_{2}^{*})\), at the HMT that maximizes \(r(\beta_{1},\beta_{2})\) given by (3). Specifically, we aim to solve the following optimisation problem
\[(\beta_{1}^{*},\beta_{2}^{*})=\underset{\begin{subarray}{c}\beta_{1}\in[-1,1]\\ \beta_{2}\in[-1,1]\end{subarray}}{\arg\max}\;r(\beta_{1},\beta_{2}). \tag{4}\]
The expected value of \(r(\beta_{1},\beta_{2})\), denoted by \(\mu(\beta_{1},\beta_{2})\), is given by
\[\mu(\beta_{1},\beta_{2}) =\mathbb{E}\left[r(\beta_{1},\beta_{2})\right]\] \[=\left|\sqrt{P}\times H(\beta_{1},\beta_{2})\right|^{2}+\sigma^{ 2}. \tag{5}\]
Using (5), the optimization problem in (4) can be written equivalently as
\[(\beta_{1}^{*},\beta_{2}^{*})=\underset{\begin{subarray}{c}\beta_{1}\in[-1,1]\\ \beta_{2}\in[-1,1]\end{subarray}}{\arg\max}\;\mu(\beta_{1},\beta_{2}). \tag{6}\]
In order to obtain an intuition on how to solve (6), we first assume that \(\mu(\beta_{1},\beta_{2})\) in (5) is known perfectly at the HMT for five specific values of the pair \((\beta_{1},\beta_{2})\). Later, we use the same intuition to solve (6) when \(\mu(\beta_{1},\beta_{2})\) are not known perfectly but can be estimated.
### _The Optimal Phase-Shifting Parameters When \(\mu(\beta_{1},\beta_{2})\) Are Known In Advance_
For notational convenience, let us define the set \(\mathcal{B}\) as
\[\mathcal{B}=\left\{(\beta_{1}^{0},\beta_{2}^{0}),(\beta_{1}^{0}+v,\beta_{2}^{ 0}),(\beta_{1}^{0}-v,\beta_{2}^{0}),\right.\]
\[\left.(\beta_{1}^{0},\beta_{2}^{0}+w),(\beta_{1}^{0},\beta_{2}^{0}-w)\right\}. \tag{7}\]
The set \(\mathcal{B}\) is comprised of five pairs of the phase-shifting parameters \((\beta_{1},\beta_{2})\), where \(\beta_{1}^{0}\) and \(\beta_{2}^{0}\) are some initial arbitrarily selected phase-shifting parameters, \(v\) and \(w\) are numbers chosen such that \(K_{x}v\in\mathbb{N}\) and \(K_{y}w\in\mathbb{N}\) hold, where \(\mathbb{N}\) is the set of natural numbers. Please note that for a selected \((\beta_{1}^{0},\beta_{2}^{0})\) and a
Fig. 2: Distance between the \((m_{x},m_{y})\)-th phase-shifting element at the HMT and the user [13].
chosen \(v\) and \(w\), if \(\left|\beta_{1}^{0}\pm v\right|\geq 1\), then \(\beta_{1}^{0}\pm v\) is clipped so that \(\left|\beta_{1}^{0}\pm v\right|=1\). In the same way, if \(\left|\beta_{2}^{0}\pm w\right|\geq 1\), then \(\beta_{2}^{0}\pm w\) is clipped so that \(\left|\beta_{2}^{0}\pm w\right|=1\).
**Theorem 1**.: _If the HMT can obtain \(\mu(\beta_{1}^{0},\beta_{2}^{0})\), \(\mu(\beta_{1}^{0}+v,\beta_{2}^{0})\), \(\mu(\beta_{1}^{0}-v,\beta_{2}^{0})\), \(\mu(\beta_{1}^{0},\beta_{2}^{0}+w)\) and \(\mu(\beta_{1}^{0},\beta_{2}^{0}-w)\), i.e., obtain \(\mu(\beta_{1},\beta_{2})\) for the five phase-shifting parameters in \((\beta_{1},\beta_{2})\in\mathcal{B}\) given in (7), then the optimal phase-shifting parameters \(\beta_{1}^{*}\) and \(\beta_{2}^{*}\), which are the solutions of (6), are given by_
\[\beta_{1}^{*}=\left\{\frac{\alpha_{1}^{(i)}+\alpha_{1}^{(j)}}{2}: \min_{i\in\{1,2\},j\in\{3,4\}}\left|\alpha_{1}^{(i)}-\alpha_{1}^{(j)}\right| \right\}, \tag{8}\]
_where_
\[\alpha_{1}^{(1)/(2)}=\beta_{1}^{0}+\frac{v}{1\pm\sqrt{\frac{\mu \left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu\left(\beta_{1}^{0}+v, \beta_{2}^{0}\right)-\sigma^{2}}}}\] \[\alpha_{1}^{(3)/(4)}=\beta_{1}^{0}-\frac{v}{1\pm\sqrt{\frac{\mu \left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu\left(\beta_{1}^{0}-v, \beta_{2}^{0}\right)-\sigma^{2}}}}\]
_and_
\[\beta_{2}^{*}=\left\{\frac{\alpha_{2}^{(i)}+\alpha_{2}^{(j)}}{2}: \min_{i\in\{1,2\},j\in\{3,4\}}\left|\alpha_{2}^{(i)}-\alpha_{2}^{(j)}\right| \right\}, \tag{9}\]
_where_
\[\alpha_{2}^{(1)/(2)}=\beta_{2}^{0}+\frac{w}{1\pm\sqrt{\frac{\mu\left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu\left(\beta_{1}^{0},\beta_{2}^{0}+w\right)-\sigma^{2}}}}\]
\[\alpha_{2}^{(3)/(4)}=\beta_{2}^{0}-\frac{w}{1\pm\sqrt{\frac{\mu\left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu\left(\beta_{1}^{0},\beta_{2}^{0}-w\right)-\sigma^{2}}}}\]
Proof.: By using (5) and (1) for any \((\beta_{1},\beta_{2})=(\beta_{1}^{0},\beta_{2}^{0})\), we have the following

\[\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}=\left|\sqrt{P}\left(\frac{\sqrt{F}\lambda e^{-jk_{0}d_{0}}}{4\pi d_{0}}\right)L_{x}L_{y}\,\text{sinc}\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0})\right)\text{sinc}\left(K_{y}\pi(\alpha_{2}-\beta_{2}^{0})\right)\right|^{2}. \tag{10}\]

For \((\beta_{1},\beta_{2})=(\beta_{1}^{0}+v,\beta_{2}^{0})\), where \(v\) is any arbitrary parameter such that \(K_{x}v\in\mathbb{N}\) and \(\left|\beta_{1}^{0}\pm v\right|\leq 1\) hold, we have

\[\mu(\beta_{1}^{0}+v,\beta_{2}^{0})-\sigma^{2}=\left|\sqrt{P}\left(\frac{\sqrt{F}\lambda e^{-jk_{0}d_{0}}}{4\pi d_{0}}\right)L_{x}L_{y}\,\text{sinc}\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0}-v)\right)\text{sinc}\left(K_{y}\pi(\alpha_{2}-\beta_{2}^{0})\right)\right|^{2}. \tag{11}\]
Dividing (10) by (11), we obtain
\[\frac{\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0}+v,\beta_{2}^{0})-\sigma^{2}}=\frac{\left|\text{sinc}\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0})\right)\right|^{2}}{\left|\text{sinc}\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0}-v)\right)\right|^{2}}=\frac{\left|\frac{\sin\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0})\right)}{K_{x}\pi(\alpha_{1}-\beta_{1}^{0})}\right|^{2}}{\left|\frac{\sin\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0}-v)\right)}{K_{x}\pi(\alpha_{1}-\beta_{1}^{0}-v)}\right|^{2}}. \tag{12}\]

If \(v\) is selected such that \(K_{x}v\in\mathbb{N}\), then we have \(\left|\sin\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0}\pm v)\right)\right|=\left|\sin\left(K_{x}\pi(\alpha_{1}-\beta_{1}^{0})\right)\right|\). As a result, (12) is simplified to
\[\frac{\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0}+v,\beta _{2}^{0})-\sigma^{2}}=\left|\frac{\alpha_{1}-\beta_{1}^{0}-v}{\alpha_{1}- \beta_{1}^{0}}\right|^{2}. \tag{13}\]
Since \(\mu(\beta_{1},\beta_{2})\geq\sigma^{2}\), it follows that \(\mu(\beta_{1},\beta_{2})-\sigma^{2}=\left|\mu(\beta_{1},\beta_{2})-\sigma^{2}\right|\) always holds, for all \((\beta_{1},\beta_{2})\in\mathcal{B}\). Using this fact, (13) can be written equivalently as
\[\sqrt{\left|\frac{\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0} +v,\beta_{2}^{0})-\sigma^{2}}\right|}=\left|\frac{\alpha_{1}-\beta_{1}^{0}-v}{ \alpha_{1}-\beta_{1}^{0}}\right|. \tag{14}\]
By solving the nonlinear equation in (14) w.r.t. the unknown \(\alpha_{1}\), we obtain two solutions for \(\alpha_{1}\), denoted by \(\alpha_{1}^{(1)}\) and \(\alpha_{1}^{(2)}\), given by
\[\alpha_{1}^{(1)/(2)}=\beta_{1}^{0}+\frac{v}{1\pm\sqrt{\frac{\mu \left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu\left(\beta_{1}^{0}+v, \beta_{2}^{0}\right)-\sigma^{2}}}}. \tag{15}\]
It is not known which of the two values \(\alpha_{1}^{(1)}\) and \(\alpha_{1}^{(2)}\) is equal to \(\alpha_{1}\). To identify the correct solution for \(\alpha_{1}\) of the two solutions given by (15), we need the value of \(\mu(\beta_{1},\beta_{2})\) for \((\beta_{1},\beta_{2})=(\beta_{1}^{0}-v,\beta_{2}^{0})\). Following the same procedure as for (10)-(15), but now by using the values of \(\mu(\beta_{1},\beta_{2})\) for \((\beta_{1},\beta_{2})=(\beta_{1}^{0},\beta_{2}^{0})\) and \((\beta_{1},\beta_{2})=(\beta_{1}^{0}-v,\beta_{2}^{0})\), we obtain
\[\sqrt{\left|\frac{\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0}-v, \beta_{2}^{0})-\sigma^{2}}\right|}=\left|\frac{\alpha_{1}-\beta_{1}^{0}+v}{ \alpha_{1}-\beta_{1}^{0}}\right|. \tag{16}\]
By solving (16), we obtain
\[\alpha_{1}^{(3)/(4)}=\beta_{1}^{0}-\frac{v}{1\pm\sqrt{\left|\frac{\mu\left(\beta_{1}^{0},\beta_{2}^{0}\right)-\sigma^{2}}{\mu(\beta_{1}^{0}-v,\beta_{2}^{0})-\sigma^{2}}\right|}}. \tag{17}\]

One of the solutions in (15) coincides with one of the solutions in (17). Therefore, using (15) and (17), the correct solution for \(\alpha_{1}\) is obtained as

\[\alpha_{1}=\left\{\frac{\alpha_{1}^{(i)}+\alpha_{1}^{(j)}}{2}:\min_{i\in\{1,2\},j\in\{3,4\}}\left|\alpha_{1}^{(i)}-\alpha_{1}^{(j)}\right|\right\}. \tag{18}\]

The parameter \(\alpha_{2}\) is obtained analogously. By using the values of \(\mu(\beta_{1},\beta_{2})\) for \((\beta_{1},\beta_{2})=(\beta_{1}^{0},\beta_{2}^{0})\) and \((\beta_{1},\beta_{2})=(\beta_{1}^{0},\beta_{2}^{0}+w)\), we obtain

\[\sqrt{\left|\frac{\mu(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0},\beta_{2}^{0}+w)-\sigma^{2}}\right|}=\left|\frac{\alpha_{2}-\beta_{2}^{0}-w}{\alpha_{2}-\beta_{2}^{0}}\right|. \tag{19}\]
By solving the nonlinear equation (19), we obtain two solutions for \(\alpha_{2}\), denoted by \(\alpha_{2}^{(1)}\) and \(\alpha_{2}^{(2)}\), given by
\[\alpha_{2}^{(1)/(2)}=\beta_{2}^{0}+\frac{w}{1\pm\sqrt{\left|\frac{\mu(\beta_{ 1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0},\beta_{2}^{0}+w)-\sigma^{2 }}\right|}}. \tag{20}\]
To identify the correct solution for \(\alpha_{2}\) of the two given in (20), we need the value of \(\mu(\beta_{1},\beta_{2})\) for \((\beta_{1},\beta_{2})=(\beta_{1}^{0},\beta_{2}^{0}-w)\). Again, following the procedure from (10)-(15), by using the values of \(\mu(\beta_{1},\beta_{2})\) for \((\beta_{1}^{0},\beta_{2}^{0})\) and \((\beta_{1}^{0},\beta_{2}^{0}-w)\), we obtain
\[\alpha_{2}^{(3)/(4)}=\beta_{2}^{0}-\frac{w}{1\pm\sqrt{\left|\frac{\mu(\beta_{ 1}^{0},\beta_{2}^{0})-\sigma^{2}}{\mu(\beta_{1}^{0},\beta_{2}^{0}-w)-\sigma^{ 2}}\right|}}. \tag{21}\]
One of the solutions in (20) is exactly the same as one of the solutions of (21). Therefore, using (20) and (21), the correct solution of \(\alpha_{2}\) can be obtained as
\[\alpha_{2}=\left\{\frac{\alpha_{2}^{(i)}+\alpha_{2}^{(j)}}{2}:\min_{i\in\{1,2 \},j\in\{3,4\}}\left|\alpha_{2}^{(i)}-\alpha_{2}^{(j)}\right|\right\}. \tag{22}\]
Finally, by setting \(\beta_{1}^{*}=\alpha_{1}\) and \(\beta_{2}^{*}=\alpha_{2}\), where \(\alpha_{1}\) and \(\alpha_{2}\) are given by (18) and (22), respectively, we obtain (8) and (9).
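The identification steps (15), (17), and (18) are straightforward to express in code. The following is a minimal Python sketch of the recovery of \(\alpha_{1}\) from the three noiseless mean values; the function name and interface are ours, and \(\alpha_{2}\) is recovered analogously with \(w\) and the corresponding means:

```python
import numpy as np

def recover_alpha1(mu, mu_plus, mu_minus, beta1_0, v, sigma2):
    """Recover alpha_1 from mu(.,.) at (beta1_0, beta2_0) and at the
    +/- v shifts, via Eqs. (15), (17), and (18)."""
    r_plus = np.sqrt(np.abs((mu - sigma2) / (mu_plus - sigma2)))
    r_minus = np.sqrt(np.abs((mu - sigma2) / (mu_minus - sigma2)))
    cand_plus = [beta1_0 + v / (1 + r_plus), beta1_0 + v / (1 - r_plus)]     # (15)
    cand_minus = [beta1_0 - v / (1 + r_minus), beta1_0 - v / (1 - r_minus)]  # (17)
    # Eq. (18): the two candidate lists share (approximately) one common
    # value; average the closest pair.
    i, j = min(((i, j) for i in range(2) for j in range(2)),
               key=lambda ij: abs(cand_plus[ij[0]] - cand_minus[ij[1]]))
    return (cand_plus[i] + cand_minus[j]) / 2
```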
**Remark 2**.: In [13, Sec. IV.A], the authors proposed the channel estimation strategy under the assumption that there is no noise in the system. However, in the noisy case, we proposed an estimation scheme based on the assumption that \(\mu(\beta_{1},\beta_{2})\) for any of the phase-shifting parameters \((\beta_{1},\beta_{2})\in\mathcal{B}\) are perfectly known at the HMT.
However, in practice the exact values of \(\mu(\beta_{1},\beta_{2})\) for any of the phase-shifting parameters \((\beta_{1},\beta_{2})\in\mathcal{B}\) cannot be known in advance at the HMT, and therefore they need to be estimated using pilot symbols. In the following, we propose an algorithm that estimates \(\mu(\beta_{1},\beta_{2})\) for the phase-shifting parameters in \(\mathcal{B}\) and then uses the estimated values of \(\mu(\beta_{1},\beta_{2})\) to find the optimal phase-shifting parameters \((\beta_{1}^{*},\beta_{2}^{*})\) in the presence of noise.
### _Estimation Of The Optimal Phase-Shifting Parameters In The Noisy Case_
The user sends a total of \(N\) pilot signals to the HMT for the estimation of the five values of \(\mu(\beta_{1},\beta_{2})\) for the five pairs of \((\beta_{1},\beta_{2})\in\mathcal{B}\). As a result, the proposed algorithm works in five epochs. In the \(k^{th}\) epoch, for \(k=1,\ldots,5\), the user transmits \(\left\lfloor\frac{N}{5}\right\rfloor\) pilots to the HMT. The HMT sets \((\beta_{1},\beta_{2})\) to the \(k^{th}\) element in \(\mathcal{B}\), and collects \(\left\lfloor\frac{N}{5}\right\rfloor\) samples of the received signal squared, given by (3). Then \(\mu(\beta_{1},\beta_{2})\), for \((\beta_{1},\beta_{2})\) being the \(k^{th}\) element in \(\mathcal{B}\), is estimated as
\[\hat{\mu}(\beta_{1},\beta_{2})=\frac{1}{\left\lfloor N/5\right\rfloor}\sum_{i= 1}^{\left\lfloor N/5\right\rfloor}r_{i}(\beta_{1},\beta_{2}), \tag{23}\]
where \(r_{i}(\beta_{1},\beta_{2})\) is the \(i^{th}\) sample of \(r(\beta_{1},\beta_{2})\) in (3).
Next, we replace \(\mu(\beta_{1},\beta_{2})\) in (15), (17), (20), and (21) by \(\hat{\mu}(\beta_{1},\beta_{2})\), \(\forall(\beta_{1},\beta_{2})\in\mathcal{B}\), and thereby obtain our estimates for \(\beta_{1}^{*}\) and \(\beta_{2}^{*}\), denoted by \(\hat{\beta}_{1}^{*}\) and \(\hat{\beta}_{2}^{*}\). The pseudo-code of the proposed algorithm is given in the Two-Stage Phase-Shifts Estimation Algorithm below. We note that the choice of the initial \((\beta_{1}^{0},\beta_{2}^{0})\) in the set \(\mathcal{B}\) is arbitrary. The values of \((\beta_{1}^{0},\beta_{2}^{0})\) can affect the estimation error: in general, the closer \((\beta_{1}^{0},\beta_{2}^{0})\) are to \((\alpha_{1},\alpha_{2})\), the better the estimation will be. A good choice for \((\beta_{1}^{0},\beta_{2}^{0})\) is given in [13, Sec. V.C], which leads to faster learning of \((\alpha_{1},\alpha_{2})\).
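A minimal sketch of the resulting five-epoch procedure is given below; this is our own rendering rather than the authors' exact pseudo-code, and it assumes a sampler `receive_pilot(b1, b2)` returning one draw of the received signal squared \(r(\beta_{1},\beta_{2})\) in (3), together with the `recover_alpha1` helper sketched earlier:

```python
import numpy as np

def estimate_phase_shifts(receive_pilot, beta1_0, beta2_0, v, w, N, sigma2):
    """Noisy two-stage estimation: average N/5 squared pilots per pair in B
    (Eq. (23)), then plug the estimates into the closed form of Theorem 1."""
    n = N // 5
    pairs = [(beta1_0, beta2_0), (beta1_0 + v, beta2_0), (beta1_0 - v, beta2_0),
             (beta1_0, beta2_0 + w), (beta1_0, beta2_0 - w)]   # the set B in (7)
    # One epoch of n pilots per pair, averaged as in Eq. (23).
    mu = [np.mean([receive_pilot(b1, b2) for _ in range(n)]) for b1, b2 in pairs]
    beta1_hat = recover_alpha1(mu[0], mu[1], mu[2], beta1_0, v, sigma2)
    beta2_hat = recover_alpha1(mu[0], mu[3], mu[4], beta2_0, w, sigma2)
    return beta1_hat, beta2_hat
```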
## IV Theoretical Guarantees For The Proposed Algorithm
In this section, we bound the probability that the estimates obtained from the proposed Two-Stage Phase-Shifts Estimation Algorithm deviate from the true values of \((\alpha_{1},\alpha_{2})\) by an amount \(0\leq\epsilon\leq 1\). In particular, we upper bound the following error probability
\[\mathbb{P}\left\{\left(\hat{\beta}_{1}^{*}-\alpha_{1}\right)^{2}+\left(\hat{ \beta}_{2}^{*}-\alpha_{2}\right)^{2}\geq\epsilon\right\}. \tag{26}\]
We use the following results to upper bound the error probability in (26).
**Lemma 1**.: _Let \(\{X_{n}\}\) be a sequence of random variables (RVs) on a probability space. Let \(X\) be a RV defined on the same probability space. Then, the following holds_
\[\mathbb{P}\left\{\left|X_{n}-X_{m}\right|\geq\epsilon\right\}\leq \mathbb{P}\left\{\left|X_{n}-X\right|\geq\frac{\epsilon}{2}\right\}+\mathbb{P} \left\{\left|X_{m}-X\right|\geq\frac{\epsilon}{2}\right\}.\]
Proof.: The proof is given in the Appendix A.
Let \(\chi_{p}^{2}(\lambda)\) denote a non-central Chi-squared distribution with \(p\) degrees of freedom and non-centrality parameter \(\lambda\).
**Lemma 2**.: _Let \(X=\frac{2}{\sigma^{2}}r(\beta_{1},\beta_{2})\), where \(r(\beta_{1},\beta_{2})\) is given by (3), and let \(\lambda_{1}=\frac{2}{\sigma^{2}}\left|\sqrt{P}H(\beta_{1},\beta_{2})\right|^{2}\). Then, \(X\) is distributed as \(\chi_{2}^{2}(\lambda_{1})\), i.e., \(X\sim\chi_{2}^{2}(\lambda_{1})\). Furthermore, if \(X_{i}\) for \(i=1,2,\ldots,n\) are \(n\) independently and identically distributed (i.i.d.) RVs of \(\chi_{2}^{2}(\lambda_{1})\), then_
\[\sum_{i=1}^{n}X_{i}\sim\chi_{2n}^{2}(n\lambda_{1}).\]
Proof.: The proof is given in the Appendix B.
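Lemma 2 is easy to verify by simulation; the following sketch (with arbitrary constants of our choosing) compares the empirical distribution of \(X\) against the \(\chi_{2}^{2}(\lambda_{1})\) CDF:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma2, s = 2.0, 1.5 + 0.5j              # noise power and sqrt(P)*H(beta1, beta2)
lam = 2 * abs(s) ** 2 / sigma2           # non-centrality parameter lambda_1
# Complex AWGN with variance sigma2: each component has variance sigma2/2.
zeta = np.sqrt(sigma2 / 2) * (rng.standard_normal(100_000)
                              + 1j * rng.standard_normal(100_000))
X = 2 * np.abs(s + zeta) ** 2 / sigma2   # X = (2/sigma^2) r(beta1, beta2)
for t in (1.0, 5.0, 10.0):               # empirical vs chi^2_2(lambda_1) CDF
    print(round(np.mean(X <= t), 3), round(stats.ncx2.cdf(t, df=2, nc=lam), 3))
```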
The following theorem provides an upper bound on the error probability in (26).
**Theorem 2**.: _Let us perform uniform exploration on the set \(\mathcal{B}\) given in (7). For any \(0\leq\epsilon\leq 1\), the error probability in (26) is upper bounded as_
\[\mathbb{P}\left\{\left(\hat{\beta}_{1}^{*}-\alpha_{1}\right)^{2}+\left(\hat{\beta}_{2}^{*}-\alpha_{2}\right)^{2}\geq\epsilon\right\}\leq 4\left\{e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{2}}{1+\lambda_{2}}\right)^{2}}+e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{3}}{1+\lambda_{3}}\right)^{2}}+e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{4}}{1+\lambda_{4}}\right)^{2}}+e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{5}}{1+\lambda_{5}}\right)^{2}}\right\}, \tag{27}\]
_where_
\[\lambda_{1}=\frac{2\left|\sqrt{P}H(\beta_{1}^{0},\beta_{2}^{0})\right|^{2}}{\sigma^{2}},\quad\lambda_{2}=\frac{2\left|\sqrt{P}H(\beta_{1}^{0}+v,\beta_{2}^{0})\right|^{2}}{\sigma^{2}},\]
\[\lambda_{3}=\frac{2\left|\sqrt{P}H(\beta_{1}^{0}-v,\beta_{2}^{0})\right|^{2}}{\sigma^{2}},\quad\lambda_{4}=\frac{2\left|\sqrt{P}H(\beta_{1}^{0},\beta_{2}^{0}+w)\right|^{2}}{\sigma^{2}},\]
\[\lambda_{5}=\frac{2\left|\sqrt{P}H(\beta_{1}^{0},\beta_{2}^{0}-w)\right|^{2}}{\sigma^{2}},\]
_and \(n=\lfloor N/5\rfloor\) is the number of pilots per epoch._
Proof.: Let us denote the estimate of \(\mu(\beta_{1}^{0},\beta_{2}^{0})\) by \(\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})\) which is given by
\[\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})=\frac{1}{n}\sum_{i=1}^{n}r_{i}(\beta_{1 }^{0},\beta_{2}^{0})=\frac{\sigma^{2}}{2n}\sum_{i=1}^{n}X_{i}.\]
Using Lemma 2, we have
\[\hat{\mu}_{1}:=\frac{2n}{\sigma^{2}}\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})\sim\chi_{2n}^{2}(n\lambda_{1}) \tag{28}\]
\[\hat{\mu}_{2}:=\frac{2n}{\sigma^{2}}\hat{\mu}(\beta_{1}^{0}+v,\beta_{2}^{0})\sim\chi_{2n}^{2}(n\lambda_{2}) \tag{29}\]
\[\hat{\mu}_{3}:=\frac{2n}{\sigma^{2}}\hat{\mu}(\beta_{1}^{0}-v,\beta_{2}^{0})\sim\chi_{2n}^{2}(n\lambda_{3}) \tag{30}\]
\[\hat{\mu}_{4}:=\frac{2n}{\sigma^{2}}\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0}+w)\sim\chi_{2n}^{2}(n\lambda_{4}) \tag{31}\]
\[\hat{\mu}_{5}:=\frac{2n}{\sigma^{2}}\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0}-w)\sim\chi_{2n}^{2}(n\lambda_{5}) \tag{32}\]
where \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\), and \(\lambda_{5}\) are given in Theorem 2.
The random variables \(\hat{\mu}_{1},\hat{\mu}_{2},\hat{\mu}_{3},\hat{\mu}_{4}\), and \(\hat{\mu}_{5}\) are mutually independent, since they are sampled at different epochs. The estimated optimal phase-shifting parameters \((\hat{\beta}_{1}^{*},\hat{\beta}_{2}^{*})\) are given by (24) and (25), where the values of \(\hat{\alpha}_{1}^{(1)}\), \(\hat{\alpha}_{1}^{(2)}\), \(\hat{\alpha}_{1}^{(3)}\), \(\hat{\alpha}_{1}^{(4)}\), and \(\hat{\alpha}_{2}^{(1)}\), \(\hat{\alpha}_{2}^{(2)}\), \(\hat{\alpha}_{2}^{(3)}\), \(\hat{\alpha}_{2}^{(4)}\) are given by
\[\hat{\alpha}_{1}^{(1)/(2)}=\beta_{1}^{0}+\frac{v}{1\pm\sqrt{\left|\frac{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\hat{\mu}(\beta_{1}^{0}+v,\beta_{2}^{0})-\sigma^{2}}\right|}} \tag{33}\]
\[\hat{\alpha}_{1}^{(3)/(4)}=\beta_{1}^{0}-\frac{v}{1\pm\sqrt{\left|\frac{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\hat{\mu}(\beta_{1}^{0}-v,\beta_{2}^{0})-\sigma^{2}}\right|}} \tag{34}\]
\[\hat{\alpha}_{2}^{(1)/(2)}=\beta_{2}^{0}+\frac{w}{1\pm\sqrt{\left|\frac{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0}+w)-\sigma^{2}}\right|}} \tag{35}\]
\[\hat{\alpha}_{2}^{(3)/(4)}=\beta_{2}^{0}-\frac{w}{1\pm\sqrt{\left|\frac{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0})-\sigma^{2}}{\hat{\mu}(\beta_{1}^{0},\beta_{2}^{0}-w)-\sigma^{2}}\right|}} \tag{36}\]
By inserting (28), (29), (30), (31), and (32) into (33), (34), (35) and (36), we obtain
\[\hat{\alpha}_{1}^{(1)/(2)}=\beta_{1}^{0}+\frac{v}{1\pm\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}} \tag{37}\]
\[\hat{\alpha}_{1}^{(3)/(4)}=\beta_{1}^{0}-\frac{v}{1\pm\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{3}-2n}\right|}} \tag{38}\]
\[\hat{\alpha}_{2}^{(1)/(2)}=\beta_{2}^{0}+\frac{w}{1\pm\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{4}-2n}\right|}} \tag{39}\]
\[\hat{\alpha}_{2}^{(3)/(4)}=\beta_{2}^{0}-\frac{w}{1\pm\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{5}-2n}\right|}} \tag{40}\]
Let us denote
\[I:=\mathbb{P}\left\{\left|\hat{\beta}_{1}^{*}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\},\qquad II:=\mathbb{P}\left\{\left|\hat{\beta}_{2}^{*}-\alpha_{2}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}.\]
Now, applying Lemma 1 in (26), we obtain
\[\mathbb{P}\left\{\left(\hat{\beta}_{1}^{*}-\alpha_{1}\right)^{2}+\left(\hat{\beta}_{2}^{*}-\alpha_{2}\right)^{2}\geq\epsilon\right\}\leq I+II. \tag{41}\]
We next upper bound the terms \(I\) and \(II\) separately.
**Step 1: Upper Bound on \(I\)**
\[\mathbb{P}\left\{\left|\hat{\beta}_{1}^{*}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq\mathbb{P}\left\{\bigcup_{\begin{subarray}{c}i=1,2\\ j=3,4\end{subarray}}\left\{\left|\frac{\hat{\alpha}_{1}^{(i)}+\hat{\alpha}_{1}^{(j)}}{2}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\right\}\]
\[\leq\sum_{\begin{subarray}{c}i=1,2\\ j=3,4\end{subarray}}\mathbb{P}\left\{\left|\left(\hat{\alpha}_{1}^{(i)}-\alpha_{1}\right)+\left(\hat{\alpha}_{1}^{(j)}-\alpha_{1}\right)\right|\geq 2\sqrt{\frac{\epsilon}{2}}\right\}\]
\[\leq\sum_{\begin{subarray}{c}i=1,2\\ j=3,4\end{subarray}}\left[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(i)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}+\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(j)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\right]\]
\[=2\left(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}+\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(2)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}+\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(3)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}+\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(4)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\right), \tag{42}\]
where the first inequality holds because \(\hat{\beta}_{1}^{*}\) equals one of the four pairwise averages, the second follows from the union bound, and the third from Lemma 1. We now bound each term in (42) separately.
**Upper bound of \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\):** Substituting the value of \(\hat{\alpha}_{1}^{(1)}\), as given by (37), in \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\), we obtain
\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right| \geq\sqrt{\frac{\epsilon}{2}}\right\}\] \[=\mathbb{P}\left\{\left|\frac{v}{1+\sqrt{\left|\frac{\hat{\mu}_{1} -2n}{\hat{\mu}_{2}-2n}\right|}}-(\alpha_{1}-\beta_{1}^{0})\right|\geq\sqrt{ \frac{\epsilon}{2}}\right\}. \tag{43}\]
Note that the following holds.
\[\left|\frac{v}{1+\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2} -2n}\right|}}-(\alpha_{1}-\beta_{1}^{0})\right|\leq\left|\frac{v}{1+\sqrt{ \left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}}\right|+\left|\alpha_{1 }-\beta_{1}^{0}\right| \tag{44}\]
By applying (44) in (43), we obtain
\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right| \geq\sqrt{\frac{\epsilon}{2}}\right\}\] \[\leq\mathbb{P}\left(\left|\frac{1}{1+\sqrt{\left|\frac{\hat{\mu}_ {1}-2n}{\hat{\mu}_{2}-2n}\right|}}\right|\geq\frac{1}{v}\left(\sqrt{\frac{ \epsilon}{2}}-\left|\alpha_{1}-\beta_{1}^{0}\right|\right)\right). \tag{45}\]
For the RVs \(\hat{\mu}_{1}\) and \(\hat{\mu}_{2}\), \(\frac{1}{1+\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}}\) is always positive. Using this fact in (45), we obtain
\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right| \geq\sqrt{\frac{\epsilon}{2}}\right\}\] \[\leq\mathbb{P}\left\{\frac{1}{1+\sqrt{\left|\frac{\hat{\mu}_{1} -2n}{\hat{\mu}_{2}-2n}\right|}}\geq\frac{1}{v}\left(\sqrt{\frac{\epsilon}{2}}- \left|\alpha_{1}-\beta_{1}^{0}\right|\right)\right\}.\]
Let \(a=\frac{1}{v}\left(\sqrt{\frac{\epsilon}{2}}-\left|\alpha_{1}-\beta_{1}^{0} \right|\right).\) We have
\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(1)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq\mathbb{P}\left\{1+\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}\leq\frac{1}{a}\right\}=\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|\leq\left(1-\frac{1}{a}\right)^{2}\right\}. \tag{46}\]
**Upper bound of \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(2)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\):** Substituting the value of \(\hat{\alpha}_{1}^{(2)}\), as given by (37), in \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(2)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\), we obtain

\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(2)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}=\mathbb{P}\left\{\left|\frac{v}{1-\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}}-(\alpha_{1}-\beta_{1}^{0})\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}. \tag{47}\]
Note that the following holds.
\[\left|\frac{v}{1-\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2} -2n}\right|}}-(\alpha_{1}-\beta_{1}^{0})\right|\leq\left|\frac{v}{1-\sqrt{ \left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}}\right|+\left|\alpha_{1 }-\beta_{1}^{0}\right|. \tag{48}\]
By applying (48) in the right-hand side of (47), we
obtain
\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(2)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq\mathbb{P}\left\{\left|1-\sqrt{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|}\right|\leq\frac{1}{a}\right\}=\mathbb{P}\left\{\left(1-\frac{1}{a}\right)^{2}\leq\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|\leq\left(1+\frac{1}{a}\right)^{2}\right\}\]
\[\leq\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|\leq\left(1+\frac{1}{a}\right)^{2}\right\}-\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{2}-2n}\right|\leq\left(1-\frac{1}{a}\right)^{2}\right\}. \tag{49}\]
**Upper bounds of \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(3)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\) and \(\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(4)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\):** Substituting the values of \(\hat{\alpha}_{1}^{(3)}\) and \(\hat{\alpha}_{1}^{(4)}\), as given by (38), and following the same steps as for \(\hat{\alpha}_{1}^{(1)}\) and \(\hat{\alpha}_{1}^{(2)}\), we obtain

\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(3)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{3}-2n}\right|\leq\left(1-\frac{1}{a}\right)^{2}\right\} \tag{50}\]

and

\[\mathbb{P}\left\{\left|\hat{\alpha}_{1}^{(4)}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{3}-2n}\right|\leq\left(1+\frac{1}{a}\right)^{2}\right\}-\mathbb{P}\left\{\left|\frac{\hat{\mu}_{1}-2n}{\hat{\mu}_{3}-2n}\right|\leq\left(1-\frac{1}{a}\right)^{2}\right\}. \tag{51}\]
By inserting the bounds (46), (49), (50), and (51) into (42), we obtain
\[\mathbb{P}\left\{\left|\hat{\beta}_{1}^{*}-\alpha_{1}\right| \geq\sqrt{\frac{\epsilon}{2}}\right\}\] \[\leq 2\left(\mathbb{P}\left\{\left|\frac{\hat{\mu}_{2}-2n}{\hat{ \mu}_{1}-2n}\right|\geq\gamma_{1}\right\}+\mathbb{P}\left\{\left|\frac{\hat{ \mu}_{3}-2n}{\hat{\mu}_{1}-2n}\right|\geq\gamma_{1}\right\}\right), \tag{52}\]
where we set \(\gamma_{1}=\left(\frac{1}{1+(1/a)}\right)^{2}\).
We next upper bound each term on the right-hand side of (52). The bounds are derived using the properties of the sub-exponential distributions which we introduce below.
**Step 2: Sub-exponential Distributions and Its Tail Bound**
**Definition IV.1** (sub-exponential distribution).: A RV \(X\) with mean \(\mu\) is said to be sub-exponential with parameters \((\nu,\alpha)\), for \(\alpha>0\), if
\[\mathbb{E}\left[\exp\left(t(X-\mu)\right)\right]\leq\exp\left(\frac{t^{2}\nu^ {2}}{2}\right),\ \text{for}\ \left|t\right|<\frac{1}{\alpha}.\]
**Theorem 3** ([18]).: _Let \(X_{k}\) for \(k=1,2,\ldots,n\) be independent RVs where \(X_{k}\) is sub-exponential with parameters \((\nu_{k},b_{k})\), and mean \(\mu_{k}=\mathbb{E}\left[X_{k}\right]\). Then \(\sum\limits_{k=1}^{n}(X_{k}-\mu_{k})\) is a sub-exponential RV with parameters \((\nu_{*},b_{*})\) where_
\[b_{*}=\max_{k=1,2,\ldots,n}b_{k},\]
_and_
\[\nu_{*}=\sqrt{\sum\limits_{k=1}^{n}\nu_{k}^{2}}.\]
_Furthermore, its tail probability can be bounded as_
\[\mathbb{P}\left\{\left|\frac{1}{n}\sum_{k=1}^{n}(X_{k}-\mu_{k})\right|\geq t\right\}\leq\begin{cases}2e^{-\frac{n^{2}t^{2}}{2\nu_{*}^{2}}},&\text{for }0\leq t\leq\frac{\nu_{*}^{2}}{nb_{*}},\\ 2e^{-\frac{nt}{2b_{*}}},&\text{for }t\geq\frac{\nu_{*}^{2}}{nb_{*}}.\end{cases}\]
Proof.: The proof is given in Appendix C.
**Corollary 1**.: _Let \(X_{k}\) for \(k=1,2\ldots,n\) be i.i.d. sub-exponential RVs with parameters \((2(2+2a),4)\) each with mean \(2+a\). Then,_
\[\mathbb{P}\left\{\left|\frac{1}{n}\sum_{k=1}^{n}(X_{k}-\mu_{k})\right|\geq t\right\}\leq 2e^{-\frac{nt^{2}}{8(2+2a)^{2}}},\qquad\text{for }t>0.\]
Proof.: The proof is given in Appendix D.
We use Corollary 1 to upper bound the right-hand-side terms in (52). The following lemma establishes the connection between the non-central chi-squared distribution and the sub-exponential distributions.
**Lemma 3**.: _Let \(X\sim\chi_{p}^{2}(a)\). Then, \(X\) is sub-exponential with parameters \(\left(2(p+2a),4\right)\)._
Proof.: The proof is given in Appendix E.
**Step 3: Upper Bounding Eq. (52)**
* Recall that \(\hat{\mu}_{1}\sim\chi_{2n}^{2}(n\lambda_{1})\) and \(\hat{\mu}_{2}\sim\chi_{2n}^{2}(n\lambda_{2})\). Let \(f_{\hat{\mu}_{1}}\) denote the pdf of \(\hat{\mu}_{1}\). We upper bound the term \(\mathbb{P}\left\{\left|\frac{\hat{\mu}_{2}-2n}{\hat{\mu}_{1}-2n}\right|\geq\gamma_{1}\right\}\) as follows
\[\mathbb{P}\left\{\left|\frac{\hat{\mu}_{2}-2n}{\hat{\mu}_{1}-2n}\right|\geq\gamma_{1}\right\}=\int_{0}^{\infty}\mathbb{P}\left\{\left|\hat{\mu}_{2}-2n\right|\geq\gamma_{1}\left|u-2n\right|\right\}f_{\hat{\mu}_{1}}(u)du\]
\[=\int_{0}^{\infty}\mathbb{P}\left\{\left|\hat{\mu}_{2}-n(2+\lambda_{2})+n\lambda_{2}\right|\geq\gamma_{1}\left|u-2n\right|\right\}f_{\hat{\mu}_{1}}(u)du\]
\[\leq\int_{0}^{\infty}\mathbb{P}\left\{\frac{1}{n}\left|\hat{\mu}_{2}-n(2+\lambda_{2})\right|\geq\frac{\gamma_{1}\left|u-2n\right|-n\lambda_{2}}{n}\right\}f_{\hat{\mu}_{1}}(u)du. \tag{53}\]
Note that, if \(\frac{\gamma_{1}\left|u-2n\right|-n\lambda_{2}}{n}<0\), then \(\mathbb{P}\left\{\frac{1}{n}\left|\hat{\mu}_{2}-n(2+\lambda_{2})\right|\geq\frac{\gamma_{1}\left|u-2n\right|-n\lambda_{2}}{n}\right\}=1\), so the corresponding bound is trivial.
For \(\frac{\gamma_{1}|u-2n|-n\lambda_{2}}{n}\geq 0\), using the assumption \(0\leq\epsilon\leq 1\) in (53), we have
\[\mathbb{P}\left\{\left|\frac{\hat{\mu}_{2}-2n}{\hat{\mu}_{1}-2n}\right|\geq\gamma_{1}\right\}\leq\int_{0}^{\infty}\mathbb{P}\left\{\frac{1}{n}\left|\hat{\mu}_{2}-n(2+\lambda_{2})\right|\geq\epsilon\left(\frac{\gamma_{1}\left|u-2n\right|-n\lambda_{2}}{n}\right)\right\}f_{\hat{\mu}_{1}}(u)du. \tag{54}\]
The last inequality holds because \(0\leq\epsilon\leq 1\) only decreases the threshold. Let \(t_{1}:=t_{1}(u)=\epsilon\left(\frac{\gamma_{1}|u-2n|-n\lambda_{2}}{n}\right)\). As \(\mathbb{E}\left[\hat{\mu}_{2}\right]=2n+n\lambda_{2}\), by applying Corollary 1, we obtain
\[\mathbb{P}\left\{\frac{1}{n}\left|\hat{\mu}_{2}-n(2+\lambda_{2})\right|\geq t_{1}\right\}\leq 2e^{-\frac{nt_{1}^{2}}{8(2+2\lambda_{2})^{2}}},\qquad t_{1}\geq 0. \tag{55}\]
By applying (55) to (54), we obtain
\[\mathbb{P}\left\{\left|\hat{\mu}_{2}-2n\right|\geq\gamma_{1}\left|\hat{\mu}_{1}-2n\right|\right\}\leq\int_{0}^{\infty}2e^{-\frac{nt_{1}(u)^{2}}{8(2+2\lambda_{2})^{2}}}f_{\hat{\mu}_{1}}(u)du\]
\[=\int_{0}^{2n}2e^{-\frac{n\left(\frac{\epsilon\left(\gamma_{1}(2n-u)-n\lambda_{2}\right)}{n}\right)^{2}}{8(2+2\lambda_{2})^{2}}}f_{\hat{\mu}_{1}}(u)du+\int_{2n}^{\infty}2e^{-\frac{n\left(\frac{\epsilon\left(\gamma_{1}(u-2n)-n\lambda_{2}\right)}{n}\right)^{2}}{8(2+2\lambda_{2})^{2}}}f_{\hat{\mu}_{1}}(u)du. \tag{56}\]
For \(0\leq u\leq 2n\), we have
\[2e^{-\frac{n\left(\frac{\epsilon\left(\gamma_{1}(2n-u)-n\lambda_{2}\right)}{n}\right)^{2}}{8(2+2\lambda_{2})^{2}}}\leq 2e^{-\frac{n(\epsilon\lambda_{2})^{2}}{8(2+2\lambda_{2})^{2}}}. \tag{57}\]
For \(2n\leq u<\infty\), we have
\[2e^{-\frac{n\left(\frac{\epsilon\left(\gamma_{1}(u-2n)-n\lambda_{2}\right)}{n}\right)^{2}}{8(2+2\lambda_{2})^{2}}}\leq 2e^{-\frac{n(\epsilon\lambda_{2})^{2}}{8(2+2\lambda_{2})^{2}}}. \tag{58}\]
Using (57) and (58) in (56), we obtain
\[\mathbb{P}\left\{\left|\hat{\mu}_{2}-2n\right|\geq\gamma_{1}\left|\hat{\mu}_{1}-2n\right|\right\}\leq 2e^{-\frac{n(\epsilon\lambda_{2})^{2}}{8(2+2\lambda_{2})^{2}}}\,\mathbb{P}\left\{0<\hat{\mu}_{1}<2n\right\}+2e^{-\frac{n(\epsilon\lambda_{2})^{2}}{8(2+2\lambda_{2})^{2}}}\,\mathbb{P}\left\{\hat{\mu}_{1}>2n\right\}\]
\[\mathbb{P}\left\{\left|\hat{\mu}_{2}-2n\right|\geq\gamma_{1}\left|\hat{\mu}_{1}-2n\right|\right\}\leq 2e^{-\frac{n(\epsilon\lambda_{2})^{2}}{8(2+2\lambda_{2})^{2}}}. \tag{59}\]
* We next upper bound \(\mathbb{P}\left\{\left|\frac{\hat{\mu}_{3}-2n}{\hat{\mu}_{1}-2n}\right|\geq\gamma_{1}\right\}\). Set \(t_{2}=\epsilon\left(\frac{\gamma_{1}|u-2n|-n\lambda_{3}}{n}\right)\). Recall that \(\hat{\mu}_{3}\sim\chi_{2n}^{2}(n\lambda_{3})\). Following steps similar to the derivation of the bound in (59), we obtain
\[\mathbb{P}\left\{\left|\hat{\mu}_{3}-2n\right|\geq\gamma_{1}\left|\hat{\mu}_{1}-2n\right|\right\}\leq 2e^{-\frac{n(\epsilon\lambda_{3})^{2}}{8(2+2\lambda_{3})^{2}}}. \tag{60}\]
Combining (59) and (60), we obtain the following upper bound on (52)
\[\mathbb{P}\left\{\left|\hat{\beta}_{1}^{*}-\alpha_{1}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq 4\left(e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{2}}{1+\lambda_{2}}\right)^{2}}+e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{3}}{1+\lambda_{3}}\right)^{2}}\right). \tag{61}\]
\(\bullet\)**Step 4: Upper bound on II**
By following the same steps for deriving the upper bound of \(\mathbb{P}\left\{\left|\hat{\beta}_{1}^{*}-\alpha_{1}\right|\geq\sqrt{\frac{ \epsilon}{2}}\right\}\), we can obtain the following bound
\[\mathbb{P}\left\{\left|\hat{\beta}_{2}^{*}-\alpha_{2}\right|\geq\sqrt{\frac{\epsilon}{2}}\right\}\leq 4\left(e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{4}}{1+\lambda_{4}}\right)^{2}}+e^{-\frac{n}{32}\left(\frac{\epsilon\lambda_{5}}{1+\lambda_{5}}\right)^{2}}\right). \tag{62}\]
Combining (61) and (62), we obtain the required upper bound in (27). \(\blacksquare\)
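For reference, the right-hand side of (27) can be evaluated numerically as in the following sketch (function name is ours):

```python
import numpy as np

def error_bound(n, eps, lambdas):
    """Evaluate the right-hand side of (27); `lambdas` collects
    (lambda_2, lambda_3, lambda_4, lambda_5)."""
    return 4 * sum(np.exp(-(n / 32) * (eps * lam / (1 + lam)) ** 2)
                   for lam in lambdas)

# Example: bound for n = 100 pilots per epoch and eps = 0.05 (arbitrary lambdas).
print(error_bound(100, 0.05, (3.0, 2.5, 3.0, 2.5)))
```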
## V Numerical Simulations
We estimate the initial value of \((\beta_{1}^{0},\beta_{2}^{0})\) as given in [13, Sec. V.C]. Based on the initial value of \((\beta_{1}^{0},\beta_{2}^{0})\) we set \(\mathcal{B}\) as given in (7), where \(v\) and \(w\) are selected such that \(K_{x}v\in\mathbb{N}\) and \(K_{y}w\in\mathbb{N}\), respectively. In addition to the LoS path, we assume that there are 4 NLoS path components due to scatterers between the user and the HMT. The elevation and azimuth angles of each NLoS path from these scatterers to the center of the HMT follow the uniform distribution, i.e., \(U(0,2\pi)\). Moreover, we consider the path coefficient of each NLoS path to follow a complex Gaussian distribution, i.e., \(CN(0,\sigma_{s}^{2})\), where \(\sigma_{s}^{2}\) is 20 dB weaker than the power of the LoS component [19]. The system parameters for numerical simulations are listed in Table I.
### _Comparison Between the Proposed Algorithm and Benchmark Scheme_
According to the approximated channel model, where the phase-shift parameters at the HMT are given by \(\beta_{1}\) and \(\beta_{2}\), the achieved data rate at the user of the HMT-assisted wireless communication system is given by
\[R(\beta_{1},\beta_{2})=\log_{2}\left(1+\frac{P|H(\beta_{1},\beta_{2})|^{2}}{\sigma^{2}}\right), \tag{63}\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline
\begin{tabular}{|l|l|l|} \hline
**Parameters** & **Values** & **Description** \\ \hline \hline \(f_{c}\) & 30 GHz & Carrier frequency \\ \hline \(\lambda\) & 1 cm & Wavelength \\ \hline \(L_{x}\) & 1 m & Width of the HMT \\ \hline \(L_{y}\) & 1 m & Length of the HMT \\ \hline \(d_{r}\) & \(\lambda/4\) & Unit element spacing \\ \hline \(L_{e}\) & \(d_{r}\) & Width and length of each phase-shifting element \\ \hline \(P\) & 20 dBm & Transmission power of the HMT during data transmission \\ \hline \(\sigma^{2}\) & -115 dBm & Noise power over a 200 kHz bandwidth \\ \hline \end{tabular}
\end{table} TABLE I: A list of system parameters for numerical simulations
where \(P\) is the transmission power at the HMT. The HMT uses the CSI acquired during the channel estimation period to maximize the data rate received by the user. Hence, we use the data rate achieved by the user with the acquired CSI as a performance metric. We applied the proposed algorithm in two different cases: when the distance between the user and the center of the HMT is \(d_{0}=200\) m and when it is \(d_{0}=10\) m. We compared our proposed algorithm with two benchmarks: the algorithm proposed in [13], and the oracle scheme in which \(\alpha_{1}\) and \(\alpha_{2}\) are estimated perfectly and thereby the maximum rate is achieved.
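As a quick numerical illustration of (63) with the Table I values, the following sketch converts the dBm quantities to linear units and evaluates the rate. The channel gain \(|H(\beta_{1},\beta_{2})|^{2}\) used below is an assumed placeholder, since in the simulations it is determined by the estimated phase shifts.

```python
import numpy as np

P_dBm, noise_dBm = 20.0, -115.0          # Table I: transmit power, noise power
P = 10 ** (P_dBm / 10) * 1e-3            # 20 dBm -> 0.1 W
sigma2 = 10 ** (noise_dBm / 10) * 1e-3   # -115 dBm -> ~3.2e-15 W
H2 = 1e-9                                # assumed effective channel gain |H(b1, b2)|^2

# Achievable rate of Eq. (63) in bits/s/Hz.
rate = np.log2(1 + P * H2 / sigma2)
print(f"achievable rate: {rate:.2f} bit/s/Hz")  # ~15 bit/s/Hz for these values
```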
In Fig. 4, we compare the achievable rates, given by (63), of the proposed scheme and the benchmark schemes. We consider both the \(d_{0}=200\) m and \(d_{0}=10\) m regions of the HMT with respect to the transmit power of the pilot signals, with the number of pilot signals fixed to 23. All the algorithms use the same number of pilots, i.e., 23, in both cases. Our proposed algorithm uses four pilots in each epoch and there are five epochs, which makes the total number of pilots equal to 20; an additional three pilots are required to estimate \((\beta_{1}^{0},\beta_{2}^{0})\). We run the simulation 1000 times. We see that in both cases, the proposed Two-Stage Phase-Shifts Estimation Algorithm gives higher rates than the other two benchmark schemes.
### _Convergence of The Proposed Algorithm_
We now numerically evaluate the convergence of the upper bound of the proposed algorithm, given by (27). We also compare it with the actual error probability, given by (26), obtained via simulations.
In Fig. 5, we show the convergence of the error probability and its upper bound for the proposed algorithm for \(\epsilon=\{0.01,0.05,0.1\}\) when the power of the pilot signal is \(P=10\) dBm and \(d_{0}=200\) m. We run the simulation 1000 times. We see that for each value of \(\epsilon\), the error probability of the proposed algorithm converges towards zero as we increase the number of pilots. Moreover, the upper bound converges to the error probability as the number of pilot signals increases.
In Fig. 6, we compare the convergence of the error probability of the proposed algorithm for \(\epsilon=0.05\) and \(d_{0}=200\) m under different pilot signal powers, \(P=\{5,10,20\}\) dBm. As we increase the power of the pilot signals, the received signals become less noisy, so \(\alpha_{1}\) and \(\alpha_{2}\) are estimated more accurately and the error probability decreases.
## VI Conclusion
We investigated the problem of estimating the optimal phase shifts in an HMT-assisted wireless communication system in a noisy environment. We proposed a learning algorithm to estimate the optimal phase-shifting parameters and showed that the probability that the phase-shifting parameters generated by the proposed algorithm deviate by more than \(\epsilon\) from the optimal values decays exponentially fast as the number of pilots grows. Our proposed algorithm exploits structural properties of the channel gains in the far-field regions.
Fig. 4: Achievable rate vs. the transmit power of the pilot signals (in dBm).
Fig. 5: Error Probability Bound vs. Number of Pilots for \(\epsilon=\{0.01,0.05,0.1\}\) for \(P=10\) dBm.
Fig. 6: Error Probability Bound vs. Number of Pilots for \(\epsilon=0.05\) for \(P=\{5,10,20\}\) dBm.
### _Proof of Proposition 1_
Proof.: Let us define the following events.
\[A_{n,m}=\left\{|X_{n}-X_{m}|>\epsilon\right\},\quad A_{n}=\left\{|X_{n}-X|>\frac{\epsilon}{2}\right\},\quad\text{and}\quad A_{m}=\left\{|X_{m}-X|>\frac{\epsilon}{2}\right\}.\]
By the triangle inequality, we have
\[|X_{n}-X_{m}|\leq|X_{n}-X|+|X_{m}-X|. \tag{64}\]
Using (64), on the event \(A_{n,m}\) we have
\[|X_{n}-X_{m}|>\epsilon\implies|X_{n}-X|+|X_{m}-X|>\epsilon.\]
Therefore, we have
\[A_{n,m} \subset\{|X_{n}-X|+|X-X_{m}|>\epsilon\}\] \[\subset\left\{|X_{n}-X|>\frac{\epsilon}{2}\bigcup|X-X_{m}|>\frac{ \epsilon}{2}\right\} \tag{65}\]
Note that for any two events \(A\) and \(B\) with \(A\subset B\), we have \(\mathbb{P}\left\{A\right\}\leq\mathbb{P}\left\{B\right\}\). Using this fact in (65), we get
\[\mathbb{P}\left\{|X_{n}-X_{m}|>\epsilon\right\}\leq\mathbb{P}\left\{|X_{n}-X| >\frac{\epsilon}{2}\right\}+\mathbb{P}\left\{|X_{m}-X|>\frac{\epsilon}{2}\right\}\]
### _Proof of Lemma 2_
Proof.: We consider \(r(\beta_{1},\beta_{2})\) as given in (3), which comprises two complex-valued terms: \(\sqrt{P}\times H(\beta_{1},\beta_{2})\) (see (1)) and \(\zeta\).
Write \(\zeta=n_{1}+jn_{2}\), where \(n_{1}\) and \(n_{2}\) are independent and follow \(N\left(0,\frac{\sigma^{2}}{2}\right)\), and write \(\sqrt{P}\times H(\beta_{1},\beta_{2})=a+jb\), where \(a\) and \(b\) are real values. Therefore,
\[r(\beta_{1},\beta_{2})=|y(\beta_{1},\beta_{2})|^{2}=(a+n_{1})^{2}+(b+n_{2})^{2}. \tag{66}\]
Note that \(\frac{a+n_{1}}{\sigma/\sqrt{2}}\sim N\left(\frac{a}{\sigma/\sqrt{2}},1\right)\) and \(\frac{b+n_{2}}{\sigma/\sqrt{2}}\sim N\left(\frac{b}{\sigma/\sqrt{2}},1\right)\) and they are independent. Therefore,
\[\frac{2}{\sigma^{2}}\left\{(a+n_{1})^{2}+(b+n_{2})^{2}\right\}\sim\chi_{2}^{2} \left(\frac{2}{\sigma^{2}}\left(a^{2}+b^{2}\right)\right). \tag{67}\]
Applying (67) in (66), we get \(X=\frac{2}{\sigma^{2}}r(\beta_{1},\beta_{2})\sim\chi_{2}^{2}\left(\lambda_{1}\right),\) where \(\lambda_{1}=\frac{2}{\sigma^{2}}\left|\sqrt{P}\times H(\beta_{1},\beta_{2})\right|^{2}.\) The second part of the lemma follows from the additivity of the non-central chi-squared distribution: the sum of \(n\) i.i.d. \(\chi_{2}^{2}\left(\lambda_{1}\right)\) RVs is \(\chi_{2n}^{2}\left(n\lambda_{1}\right)\).
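The distributional claim of Lemma 2 is straightforward to check by Monte Carlo. The sketch below (with arbitrary assumed values for \(\sigma^{2}\) and for the real and imaginary parts \(a,b\) of \(\sqrt{P}\times H(\beta_{1},\beta_{2})\)) compares samples of \(X=\frac{2}{\sigma^{2}}r(\beta_{1},\beta_{2})\) against the \(\chi_{2}^{2}(\lambda_{1})\) distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma2, a, b = 2.0, 1.3, -0.7                     # assumed noise power and Re/Im parts
n1 = rng.normal(0, np.sqrt(sigma2 / 2), 100_000)  # Re part of zeta
n2 = rng.normal(0, np.sqrt(sigma2 / 2), 100_000)  # Im part of zeta

# X = (2/sigma^2) * r(beta_1, beta_2), with r = (a+n1)^2 + (b+n2)^2 as in (66).
X = (2 / sigma2) * ((a + n1) ** 2 + (b + n2) ** 2)
lam1 = (2 / sigma2) * (a ** 2 + b ** 2)           # non-centrality parameter

# Kolmogorov-Smirnov test against chi^2_2(lam1); a large p-value is expected.
print(stats.kstest(X, stats.ncx2(2, lam1).cdf))
```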
### _Proof of Theorem 3_
As the \(X_{k}\) are independent, applying Definition IV.1, the moment generating function of \(\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\) satisfies
\[\mathbb{E}\left[e^{t\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)}\right]\leq e^{\frac{t^{2}}{2}\sum\limits_{k=1}^{n}\nu_{k}^{2}},\quad\forall|t|<\left(\frac{1}{\max\limits_{k=1,2,\ldots,n}b_{k}}\right).\]
Comparing this bound with Definition IV.1, it follows that \(\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\) is a sub-exponential \(\left(\nu_{*},b_{*}\right)\) random variable, where
\[b_{*}=\max\limits_{k=1,2,\ldots,n}b_{k}\quad\text{and}\quad\nu_{*}=\sqrt{ \sum\limits_{k=1}^{n}\nu_{k}^{2}}.\]
To prove the second part of the Theorem we use the following tail bound on a sub-exponential distribution proved in [18].
**Proposition 1** ([18] Proposition 2.9).: _Let \(X\) be a sub-exponential random variable with parameters \(\left(\nu,b\right)\) and \(\mathbb{E}\left[X\right]=\mu\). Then_
\[\mathbb{P}\left\{|X-\mu|\geq t\right\}\leq\left\{\begin{array}{rl}&2e^{-\frac{t^{2}}{2\nu^{2}}},\qquad\text{if }0\leq t\leq\frac{\nu^{2}}{b},\\ &2e^{-\frac{t}{2b}},\qquad\text{if }t\geq\frac{\nu^{2}}{b}.\end{array}\right.\]
The claim immediately follows by applying the above result on \(Z_{n}:=\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\), which is sub-exponential \(\left(\nu_{*},b_{*}\right)\), where \(b_{*}=\max\limits_{k=1,2,\ldots,n}b_{k}\) and \(\nu_{*}=\sqrt{\sum\limits_{k=1}^{n}\nu_{k}^{2}}\).
### _Proof of Corollary 1_
From Theorem 3, \(\sum_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\) is sub-exponential \(\left(\nu_{*},b_{*}\right)\), where \(b_{*}=4\) and \(\nu_{*}=2\sqrt{n}(2+2a)\). Using the parameters \(\left(\nu_{*},b_{*}\right)=(2\sqrt{n}(2+2a),4)\) in Proposition 1, we get the required upper bound as
\[\mathbb{P}\left\{\left|\frac{1}{n}\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\right|\geq t\right\}\leq\left\{\begin{array}{rl}2e^{-\frac{nt^{2}}{8(2+2a)^{2}}},&0\leq t\leq(2+2a)^{2},\\ 2e^{-\frac{nt}{8}},&t\geq(2+2a)^{2},\end{array}\right.\]
so that
\[\mathbb{P}\left\{\left|\frac{1}{n}\sum\limits_{k=1}^{n}\left(X_{k}-\mu_{k}\right)\right|\geq t\right\}\leq 2e^{-\frac{nt^{2}}{8(2+2a)^{2}}},\qquad 0<t\leq(2+2a)^{2}.\]
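The bound of Corollary 1 can be compared numerically with the empirical tail of the sample mean of i.i.d. \(\chi_{2}^{2}(a)\) variables, as in the sketch below; the values of \(n\), \(a\), and \(t\) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, t, trials = 50, 1.0, 1.5, 200_000
mu = 2 + a                                       # mean of a chi^2_2(a) variable

# Empirical tail of |(1/n) sum_k (X_k - mu)| over many trials.
X = rng.noncentral_chisquare(df=2, nonc=a, size=(trials, n))
empirical = np.mean(np.abs(X.mean(axis=1) - mu) >= t)

# Corollary 1 bound, valid here since t <= (2 + 2a)^2.
bound = 2 * np.exp(-n * t ** 2 / (8 * (2 + 2 * a) ** 2))
print(empirical, bound)                          # empirical tail lies below the bound
```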
### _Proof of Lemma 3_
If \(X\sim\chi_{p}^{2}(a)\), then according to [20], the moment-generating function (MGF) of \(X\) is given by
\[\mathbb{E}\left[\exp\left\{t\left(X-(p+a)\right)\right\}\right] =e^{-t\left(p+a\right)}\mathbb{E}\left[e^{tX}\right] =e^{-t\left(p+a\right)}\frac{e^{\frac{at}{1-2t}}}{\left(1-2t\right)^{p/2}} =e^{\frac{2at^{2}}{1-2t}}\frac{e^{-pt}}{\left(1-2t\right)^{p/2}},\qquad\text{for }t<\frac{1}{2}. \tag{68}\]
Following some calculus (see [21] and [18, Example 2.8]), we obtain
\[\frac{e^{-pt}}{\left(1-2t\right)^{p/2}}\leq e^{2pt^{2}},\qquad\text{for }|t|\leq \frac{1}{4}. \tag{69}\]
For \(|t|\leq\frac{1}{4}\), we have
\[e^{\frac{2at^{2}}{1-2t}}\leq e^{4at^{2}}. \tag{70}\]
Applying (69) and (70) to (68), we obtain
\[\mathbb{E}\left[\exp\{t\left(X-(p+a)\right)\}\right]\leq e^{2\left(p+2a\right)t^{2}},\qquad\forall|t|\leq\frac{1}{4}. \tag{71}\]
Therefore, by (71), \(X\) is a sub-exponential random variable with parameters \(\left(2(p+2a),4\right)\). \(\blacksquare\)
# PhonMatchNet: Phoneme-Guided Zero-Shot Keyword Spotting for User-Defined Keywords
###### Abstract
This study presents a novel zero-shot user-defined keyword spotting model that utilizes the audio-phoneme relationship of the keyword to improve performance. Unlike the previous approach that estimates at utterance level, we use both utterance and phoneme level information. Our proposed method comprises a two-stream speech encoder architecture, self-attention-based pattern extractor, and phoneme-level detection loss for high performance in various pronunciation environments. Based on experimental results, our proposed model outperforms the baseline model and achieves competitive performance compared with full-shot keyword spotting models. Our proposed model significantly improves the EER and AUC across all datasets, including familiar words, proper nouns, and indistinguishable pronunciations, with an average relative improvement of 67% and 80%, respectively. The implementation code of our proposed model is available at [https://github.com/ncsoft/PhonMatchNet](https://github.com/ncsoft/PhonMatchNet).
Yong-Hyeok Lee, Namhyun Cho Speech AI Lab., NCSOFT Corporation, South Korea
{eug92, cnh2769}@ncsoft.com
**Index Terms**: keyword spotting, user-defined, zero-shot, open-vocabulary
## 1 Introduction
Keyword spotting (KWS) in speech processing enables the identification of specific keywords in audio signals. Generally, conventional KWS methods require a significant amount of labeled data for training, which is a limitation in scenarios in which target keywords are rare, specialized, or user-defined [1]. Recently, as user-friendly services, such as smart speakers, AI assistants, and personalized digital humans have become widespread, the demand for personalized technology that allows consumers to define their keywords instead of using pre-determined keywords set by manufacturers has increased.
Zero-shot KWS is key to addressing this challenge, enabling a model to detect user-defined keywords with no prior training on those keywords. Most user-defined KWS (UDKWS) models are implemented using the query-by-example (QbyE) approach, which compares an input speech and a pre-registered speech in latent space [2, 3, 4]. Despite the success of the QbyE approach, it is limited in terms of performance and implementation because it compares the input speech with the pre-enrolled speech.
Recently, a cross-modal correspondence detector (CMCD) that compares speech and text and can omit speech registration when adding new keywords has been proposed [5]. However, this method does not properly distinguish pairs of similar pronunciations, as the similarity between speech and text is calculated at the utterance level.
In this paper, we propose a novel zero-shot UDKWS model to address these issues. Our proposed model includes a two-stream speech encoder with a pre-trained speech embedder [6], self-attention-based pattern extractor [7], and phoneme-level detection loss to achieve high performance in various pronunciation environments. We conducted experiments on datasets containing familiar words [8], proper nouns [9], and indistinguishable pronunciations [5] to evaluate the effectiveness of our proposed model. Our proposed model outperforms the baseline model and achieves competitive performance compared with full-shot keyword spotting models. Particularly, our proposed model achieved a significant improvement in the equal error rate (EER) and area under the curve (AUC) across all datasets, with an average relative improvement of 67% and 80%, respectively.
## 2 Related works
This section examines recent UDKWS methods and related studies for our proposed method. The most recent model [5] showed degraded performance in similar pronunciations that are difficult to distinguish owing to the comparison of speech-text pairs at the utterance level. Furthermore, the audio encoder cannot properly handle uncommon pronunciations in the training dataset because it comprises only fully trainable modules. Thus, we employed a pre-trained speech embedder and phoneme-level detection loss to overcome these limitations.
Recently, various studies have presented non-semantic representations that allow a single embedder model to be used across various target tasks, not only in speech recognition [10] but also in other areas such as keyword spotting [6], speaker identification, and language identification [11]. These studies improve models using techniques such as self-supervised or unsupervised learning, multi-task learning [12], and knowledge distillation [13]. Our objective is to obtain embeddings that show high performance in general speech situations; therefore, we selected and froze the embedder model [6] while considering the KWS performance and model size.
In various detection tasks using speech-text [14], visual-text [15], speech-visual [16], and visual-only [17], temporal location information of targets significantly assists in achieving high performance. However, providing the correct labels for every timestamp in sequence data is challenging. Therefore, connectionist temporal classification (CTC) loss [18], which does not require explicit alignment information, is widely used. However, unlike speech recognition in which speech and text labels always match, CTC loss is not appropriate for UDKWS, where speech and text labels may not match. Therefore, we propose a method that uses the sameness of speech and text pronunciation as a label rather than the exact time information.
## 3 Proposed method
Here, we describe our proposed model architecture and training criterion, as shown in Figure 1. Our proposed model comprises three sub-modules: an audio and text encoder with a pre-trained embedder, pattern extractor, and pattern discriminator. We use two loss functions based on the cross-entropy loss in the training criterion, which are computed at the utterance (\(\mathcal{L}_{utt}\)) and phoneme levels (\(\mathcal{L}_{phon}\)).
### Model architecture
**Audio Encoder.** The audio encoder comprises two feature extractors: a pre-trained speech embedder [6] with high performance in representing general pronunciations and a fully trainable feature extractor, which learns representations of special pronunciations, such as proper nouns. The pre-trained speech embedder has \(775\,\mathrm{ms}\) windows and computes 96-dimensional feature vectors every \(80\,\mathrm{ms}\). We upsample the feature and time dimensions using a 1-D transposed convolution with a kernel size of 5 and stride of 4, along with fully connected layers. The fully trainable feature extractor comprises two 1-D convolutions with a kernel size of 3, batch normalization, and ReLU layers. The first convolution layer has a stride of 2, whereas the others have a stride of 1. The 40-dimensional mel-filterbank coefficients used as input are extracted every \(10\,\mathrm{ms}\) with a window of \(25\,\mathrm{ms}\). Finally, we compute the 128-dimensional feature vectors extracted every \(20\,\mathrm{ms}\) by adding the two feature vectors. We denote audio embeddings as \(\mathbf{E}^{a}\in\mathbb{R}^{T_{a}\times 128}\), where \(T_{a}\) is the number of audio frames and 128 is the embedding dimension.
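A minimal sketch of the two audio-encoder streams is given below in TensorFlow (the framework used in our implementation); the dimensions follow the text, while the padding choices and the placeholder inputs are assumptions for illustration rather than details of the released code.

```python
import tensorflow as tf

def upsampler():
    # Maps the pre-trained embedder's 96-dim, 80 ms-rate output to
    # 128-dim, 20 ms-rate features: 1-D transposed convolution
    # (kernel 5, stride 4) followed by a fully connected layer.
    return tf.keras.Sequential([
        tf.keras.layers.Conv1DTranspose(96, kernel_size=5, strides=4, padding="same"),
        tf.keras.layers.Dense(128),
    ])

def trainable_branch():
    # Fully trainable extractor: two 1-D convolutions (kernel 3; strides
    # 2 and 1), each followed by batch normalization and ReLU.
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(128, 3, strides=2, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
        tf.keras.layers.Conv1D(128, 3, strides=1, padding="same"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
    ])

mel = tf.random.normal([1, 100, 40])  # 1 s of 40-dim mel frames every 10 ms
emb = tf.random.normal([1, 13, 96])   # placeholder pre-trained embeddings
e_a = trainable_branch()(mel) + upsampler()(emb)[:, :50, :]  # E^a: (1, 50, 128)
```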
**Text Encoder.** The text encoder, similar to [5], includes a pre-trained grapheme-to-phoneme (G2P) model1 followed by a fully connected layer and a ReLU activation function. We extract the G2P embedding from the last hidden states of the encoder. We denote text embeddings by \(\mathbf{E}^{t}\in\mathbb{R}^{T_{t}\times 128}\), whose embedding dimension matches that of the audio embeddings.
Footnote 1: [https://github.com/Kyubyong/g2p](https://github.com/Kyubyong/g2p)
**Pattern extractor.** Considering the characteristics of the KWS task, which requires maintaining fewer model parameters, our pattern extractor is based on a self-attention [7, 15] rather than a cross-attention mechanism [16, 19]. As in [20], during the fusion of multiple modalities, the self-attention method does not require other modules, unlike other attention mechanisms. The matrix of attention outputs for a set of queries \(Q\) with keys and values packed into matrices \(K\) and \(V\) is computed as
\[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V. \tag{1}\]
Unimodal embeddings \(\mathbf{E}^{a}\) and \(\mathbf{E}^{t}\) are concatenated along the time dimension to form the concatenated embeddings \(\mathbf{E}^{c}\), from which the joint embeddings \(\mathbf{E}^{j}\) are computed:
\[\mathbf{E}^{c}=(\mathbf{E}^{a};\mathbf{E}^{t})\in\mathbb{R}^{(T_{a}+T_{t}) \times 128}. \tag{2}\]
The concatenated embedding \(\mathbf{E}^{c}\) is calculated as joint embeddings \(\mathbf{E}^{j}\) using equation 1:
\[\mathbf{E}^{j}=\text{Attention}(\mathbf{E}^{c},\mathbf{E}^{c},\mathbf{E}^{c}) \in\mathbb{R}^{(T_{a}+T_{t})\times 128}. \tag{3}\]
We used a lower-triangular matrix as an attention mask so that each position attends only to causal (preceding) information within the concatenated sequence.
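The following sketch illustrates Equations 2 and 3 together with the causal mask; the use of Keras MultiHeadAttention with a single head is an assumption made for brevity, not a statement of our exact configuration.

```python
def pattern_extractor(e_a, e_t):
    # Eq. (2): concatenate audio and text embeddings along time.
    e_c = tf.concat([e_a, e_t], axis=1)                    # (B, Ta+Tt, 128)
    t = tf.shape(e_c)[1]
    # Lower-triangular (causal) attention mask.
    mask = tf.cast(tf.linalg.band_part(tf.ones((t, t)), -1, 0), tf.bool)
    # Eq. (3): self-attention with Q = K = V = E^c. The layer is created
    # inline only to keep the sketch short.
    mha = tf.keras.layers.MultiHeadAttention(num_heads=1, key_dim=128)
    return mha(e_c, e_c, attention_mask=mask)              # E^j
```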
**Pattern discriminator.** Our pattern discriminator outputs two probabilities: the match probability between the audio and the keyword, and the match probability between the audio and each phoneme. To detect utterance-level matching, we use a single GRU layer with 128 dimensions, taking the joint embeddings \(\mathbf{E}^{j}\) along the time dimension as input. The output of the last hidden state is fed into a fully connected layer with a sigmoid function:
\[P_{utt}=\sigma(\mathbf{W}^{u}\cdot\text{GRU}(\mathbf{E}^{j})+\mathbf{b}^{u}) \in\mathbb{R}^{1\times 1}. \tag{4}\]
Similarly, we extract only the phoneme sequences from the sequence of joint embeddings \(\mathbf{E}^{j}\) and feed this into the fully connected layer with a sigmoid function to detect phoneme-level matching:
\[P_{phon}=\sigma(\mathbf{W}^{p}\cdot\mathbf{E}^{j}_{\mathbf{s}}+\mathbf{b}^{p })\in\mathbb{R}^{T_{t}\times 1}, \tag{5}\]
where \(\mathbf{W}\), \(\mathbf{b}\), \(\sigma\), and \(\mathbf{s}\) denote the trainable weights and biases of each fully connected layer, the sigmoid function, and the frame index of \((T_{a},T_{a}+T_{t}]\), respectively.
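A sketch of the two discriminator heads of Equations 4 and 5 is shown below; tensor shapes follow the text, and the slice starting at index \(T_{a}\) selects the phoneme-position frames of \(\mathbf{E}^{j}\).

```python
def pattern_discriminator(e_j, t_a):
    # Eq. (4): GRU over the joint sequence; last hidden state -> sigmoid unit.
    h_last = tf.keras.layers.GRU(128)(e_j)                       # (B, 128)
    p_utt = tf.keras.layers.Dense(1, activation="sigmoid")(h_last)
    # Eq. (5): keep only frames T_a .. T_a+T_t-1 and score each phoneme.
    p_phon = tf.keras.layers.Dense(1, activation="sigmoid")(e_j[:, t_a:, :])
    return p_utt, p_phon                                         # (B, 1), (B, Tt, 1)
```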
Figure 1: Architecture of the proposed model. Boldface and red arrows denote the unmatched phonemes, labels, and BCE losses. “_TCom” is the abbreviation of “Transposed Convolution.”_
### Training criterion
Our training criterion (\(\mathcal{L}_{total}\)) comprises two binary cross-entropy (BCE) losses: an utterance-level (\(\mathcal{L}_{utt}\)) and a phoneme-level (\(\mathcal{L}_{phon}\)) detection loss,
\[\mathcal{L}_{total}=\mathcal{L}_{utt}+\mathcal{L}_{phon}. \tag{6}\]
**Utterance-level detection loss.** Similar to general KWS methods [21], the utterance-level detection loss is used to train the similarity between speech and keywords within a sample. The sample-level ground truth is used with \(P_{utt}\) from Equation 4 to calculate the BCE loss.
**Phoneme-level detection loss.** We propose the phoneme-level detection loss to improve the performance of distinguishing similar pronunciations (e.g., "friend" and "trend") without using speech and pronunciation alignment information. The phoneme-level ground truth \(y_{p}\) is defined as 1 if the phoneme sequence of the speech label is the same as that of the keyword label and 0 otherwise.
\[y_{p}=\begin{cases}1&\text{if }y_{p}^{t}=y_{p}^{s}\\ 0&\text{otherwise}\end{cases}, \tag{7}\]
where \(y_{p}^{s}\), \(y_{p}^{t}\), and \(p\) denote the speech phoneme label, the keyword phoneme label, and the phoneme index from 1 to \(T_{t}\), respectively. As with \(\mathcal{L}_{utt}\), the phoneme-level ground truth \(y_{p}\) and prediction \(P_{phon}\) are used to calculate the BCE loss.
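The phoneme-level loss can be sketched as follows, assuming integer phoneme-ID sequences; masking of padded positions is omitted for brevity.

```python
def phoneme_detection_loss(y_speech, y_keyword, p_phon):
    # Eq. (7): label is 1 where the speech and keyword phoneme IDs agree.
    y = tf.cast(tf.equal(y_speech, y_keyword), tf.float32)  # (B, Tt)
    bce = tf.keras.losses.BinaryCrossentropy()
    return bce(y[..., tf.newaxis], p_phon)                  # scalar L_phon
```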
## 4 Experiments
Here, we describe the experimental setup, including the datasets, evaluation metrics, and implementation details for training and testing. Although most of our experimental settings are similar to those in [5], we employ different methods to create anchor-negative pairs in certain test sets. An example of this is listed in Table 2 for clarity.
### Datasets and metrics
We used three datasets: LibriPhrase [5], Google Speech Commands V1 (**G**) [8], and Qualcomm Keyword Speech dataset (**Q**) [9], for training and evaluation. In the training phase, we used the training set of LibriPhrase and babble noise from the MS-SNSD dataset [22] for robust detection. Detailed training conditions were similar to that of [5]. During evaluation, we used the datasets **G**, **Q**, LibriPhrase-easy (**LP\({}_{\textbf{E}}\)**), and LibriPhrase-hard (**LP\({}_{\textbf{H}}\)**). **LP\({}_{\textbf{E}}\)** and **LP\({}_{\textbf{H}}\)** are datasets reclassified as easy and difficult to differentiate between anchor and negative pairs in the test sets of LibriPhrase [5]. We evaluated our proposed models by measuring the EER and AUC at the sample level on these datasets. Because the datasets **G** and **Q** do not provide negative pairs, we calculated the EER and AUC by considering all keywords except the positive pairs as negative pairs among the candidate keywords for each dataset. Table 2 provides examples of anchor and negatives in **G** and **Q**.
### Implementation details
Our implementation was based on the TensorFlow library. The training criterion was optimized using the Adam optimizer with the default parameters. The models were trained for 100 epochs with a fixed learning rate of \(10^{-3}\), and the best model was selected based on performance on the test sets. For training, we used a single V100 with a batch size of 2048 for a day.
## 5 Results
### Comparison with baseline
We re-implemented the baseline model, CMCD, to compare its performance with that of our proposed model. According to Table 1, our reconstructed CMCD performed similarly in **LP\({}_{\textbf{E}}\)** and **LP\({}_{\textbf{H}}\)** but showed decreased performance in **G** and **Q** datasets compared with the results of the original study [5]. This can be attributed to the difference in the method of generating anchor-negative pairs. Although the method differs from [5], our method can measure the performance of these models with increased accuracy because it compares one anchor against all other candidate keywords. Our proposed model outperformed the baseline by a significant margin in all datasets, including familiar words (_e.g._, "yes" and "no"), proper nouns (_e.g._, "snapdragon" and "lumina"), and indistinguishable pronunciations (_e.g._, "friend" and "trend"). A relative improvement of 67% in EER and 80% in AUC was observed on average across all the datasets.
| Method | Params | EER (%) ↓ **G** | **Q** | **LP\({}_{\textbf{E}}\)** | **LP\({}_{\textbf{H}}\)** | AUC (%) ↑ **G** | **Q** | **LP\({}_{\textbf{E}}\)** | **LP\({}_{\textbf{H}}\)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \({}^{\dagger}\) CMCD [5] | 653K | 31.94 | 26.09 | 10.48 | 29.34 | 73.86 | 82.42 | 95.63 | 77.60 |
| **Proposed** | 655K | **6.77** | **4.75** | **2.80** | **18.82** | **98.11** | **98.90** | 99.29 | **88.52** |
| w/o Phoneme-level Detection Loss | 655K | 7.52 | 6.05 | 3.39 | 19.65 | 97.45 | 98.45 | **99.30** | 88.04 |
| w/o Self-attention-based Pattern Extractor | 655K | 11.32 | 18.24 | 3.47 | 20.51 | 95.73 | 90.14 | 99.26 | 87.49 |
| w/o Fully trainable Speech Encoder | 655K | 14.34 | 22.90 | 5.13 | 25.77 | 93.91 | 84.19 | 98.71 | 82.53 |
Table 1: Performance of the baseline, proposed method, and ablations on various datasets. \(\dagger\) refers to our implementation of H.-K. Shin et al. **G**, **Q**, **LP\({}_{\textbf{E}}\)**, and **LP\({}_{\textbf{H}}\)** refer to our test sets from Section 4.1. **w/o Phoneme-level Detection Loss** represents the case of training using only utterance-level detection loss. **w/o Self-attention-based Pattern Extractor** means the pattern extractor is composed of cross-modality attention rather than self-attention. **w/o Fully trainable Speech Encoder** means the audio encoder comprises only a pre-trained speech embedder without convolution layers. We adjust the model parameters of the ablation studies to be the same as the proposed model.
| **Dataset** | **Anchor** | **Negatives** |
| --- | --- | --- |
| **G** | go | yes, no, up, down, left, right, on, off, stop |
| **Q** | hi galaxy | hey android, hey snapdragon, hi lumina |
Table 2: Examples of anchor and negatives of each dataset
### Ablation Study
We evaluated the effectiveness of our proposed methods as shown in Table 1: phoneme-level detection loss, self-attention-based modality fusion, and two-stream speech encoder.
**Effectiveness of Speech Encoder Components.** Our speech encoder comprised fully trainable and pre-trained modules. Comparing the first and last rows of Table 1, we observed a considerable improvement in EER and AUC when using only the pre-trained speech embedder, except for the \(\mathbf{Q}\) dataset. This could be attributed to the limitation that the embedder model we used was trained on general conversations from YouTube [6]. Therefore, we compensated for this by adding fully trainable modules, which improved the performance on all test sets, including the \(\mathbf{Q}\) dataset, as shown in the fourth row.
**Fusion Strategy for Audio-Text Representations.** Comparing the third and fourth rows of Table 1, the performance of cross-modality attention and that of self-attention-based pattern extractor differed. Unlike the aforementioned analysis, we observed performance improvements on all test sets, particularly on the \(\mathbf{Q}\) dataset. The keywords of \(\mathbf{Q}\) comprise proper nouns that are uncommon in typical training datasets. Unlike cross-modality attention-based models, a self-attention-based pattern extractor concatenated bi-modal data to produce outputs that contain all information in speech-to-speech, text-to-speech, and text-to-text modalities. This permitted the attention outputs, which are enhanced with speech information, to help distinguish the pronunciation similarity of rare cases in the training dataset.
**Effectiveness of the Phoneme-level Detection Loss.** We proposed the phoneme-level detection loss to improve discrimination performance in speech-keyword pairs with duplicate pronunciations. As shown in Table 1 and Figure 2, our proposed loss function outperformed the utterance-level detection loss only, particularly on the \(\mathbf{LP_{H}}\) dataset and with low normalized Levenshtein distance, indicating the effectiveness of our proposed loss in discriminating speech-keyword pairs with similar pronunciations.
### Performance analysis via keyword similarity
As shown in Figure 2, we examined the relationship between phonetic similarity and model performance. We defined the similarity between speech-text label pairs using the normalized Levenshtein distance [23] and calculated the mean squared error (MSE) between our model predictions and the labels over the similarity range greater than 0 and less than 1. Comparing our proposed model to the baseline, we observed superior performances across all distances. Notably, the error of the baseline model remained constant for normalized Levenshtein distances below 0.45, whereas our proposed model exhibited a linear improvement. Furthermore, we can attribute the performance improvements in similar pronunciations to the effectiveness of our phoneme-level detection loss.
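For reference, the normalized Levenshtein distance used above can be computed as in the sketch below; normalizing by the length of the longer sequence is our assumed convention.

```python
import numpy as np

def normalized_levenshtein(a, b):
    # Standard dynamic-programming edit distance between two phoneme
    # sequences, divided by the longer length so the result lies in [0, 1].
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1] / max(len(a), len(b), 1)

print(normalized_levenshtein("FRIEND", "TREND"))  # smaller = more similar
```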
### Comparison with Conventional Methods
We compared the performance of our proposed zero-shot KWS model with conventional full-shot KWS models in Table 3. Because our model only calculated the matching probability, we set a threshold of 0.8 to compute the accuracy. Our proposed model outperformed some conventional models and achieved competitive performance compared with the state-of-the-art model. This result is noteworthy considering that we did not include the Google Speech Command dataset when training our model.
## 6 Conclusions
This study presented a novel user-defined keyword spotting model that leveraged the relationship between audio and phonemes of keywords. We introduced a two-stream speech encoder containing pre-trained embedders, a self-attention-based pattern extractor, and a phoneme-level detection loss to improve the performance of distinguishing similar pronunciations. Our proposed model outperformed the baseline model and achieved competitive performance compared with full-shot KWS models. However, despite our various efforts, relative performance improvements in challenging cases, such as \(\mathbf{LP_{H}}\) were lower compared with other test sets. Improving the performance of highly similar pronunciations is our future research direction.
| Model | Zero-shot | Accuracy ↑ |
| --- | --- | --- |
| DS-CNN [24] | | 95.4 |
| Att-RNN [25] | | 95.6 |
| TC-ResNet [26] | | 96.6 |
| MHAtt-RNN [27] | | 97.2 |
| MatchBoxNet [28] | | 97.5 |
| KWT-3 [29] | | 97.5 |
| BC-ResNet-8 [30] | | **98.0** |
| **Proposed** | \(\checkmark\) | **96.8** |
Table 3: Comparison of conventional KWS models and the proposed model on the test set of Google Speech Commands V1-12.
Figure 2: Evaluation results according to the normalized Levenshtein distance in a LibriPhrase test set. The smaller the distance, the more similar the pronunciation. |
# Blind spots and biases: the dangers of ignoring eccentricity in gravitational-wave signals from binary black holes

Divyajyoti, Sumit Kumar, Snehal Tibrewal, Isobel M. Romero-Shaw, Chandra Kant Mishra
###### Abstract
Most gravitational wave (GW) events observed so far by the LIGO and Virgo detectors are consistent with mergers of binary black holes (BBHs) on quasi-circular orbits. However, some events, such as GW190521, are also consistent with having non-zero orbital eccentricity at detection, which can indicate that the binary formed via dynamical interactions. Active GW search pipelines employing quasi-circular waveform templates are inefficient for detecting eccentric mergers. Additionally, analysis of GW signals from eccentric BBH with waveform models neglecting eccentricity can lead to biases in the recovered parameters. Here, we explore the detectability and characterisation of eccentric signals when searches and analyses rely on quasi-circular waveform models. We find that for a reference eccentric population, the fraction of events having fitting factor (FF) \(<0.95\) can be up to \(\approx 2.2\%\) compared to \(\approx 0.4\%\) for the baseline population. This results in a loss in signal recovery fraction of up to \(6\%\) for the region in parameter space with non-negligible eccentricity (\(e_{10}>0.01\)) and high mass ratio (\(q>3\)). We perform parameter estimation (PE) for non-spinning and aligned-spin eccentric injections of GWs from binaries of total mass \(M=35\) M\({}_{\odot}\), based on numerical relativity simulations and an EOB-based inspiral-merger-ringdown model (TEOBResumS), and recover them using both quasi-circular and eccentric waveform models. For \(e_{20}\sim 0.1\), analyses using quasi-circular waveform models are unable to recover the injected chirp mass within the \(90\%\) credible interval. Further, for these low-mass injections, spin-induced precession _does not_ mimic eccentricity, with PE correctly ruling out high values of the effective precession parameter \(\chi_{p}\). For injections of \(e_{20}\sim 0.1\), PE conducted with an inspiral-only eccentric waveform model correctly characterises the injected signal to within \(90\%\) confidence, and recovers the injected eccentricities, suggesting that such models are sufficient for characterisation of low-mass eccentric BBH.
## I Introduction
Since the first detection of gravitational waves (GWs) [1], the LIGO-Virgo-KAGRA (LVK) collaboration has reported about 85 GW candidates from binary black hole (BBH) mergers [2; 3; 4; 5]. While most of the detected signals are consistent with GW emission from inspiralling BBHs on quasi-circular orbits, several events have been argued to be more consistent with coming from binaries with non-negligible orbital eccentricity at detection [e.g., 6; 7; 8]. In particular, GW190521 [9; 10] has been interpreted as coming from a moderately- to highly-eccentric BBH [11; 12; 13]. Non-negligible orbital eccentricity measured at detection in the LVK sensitive frequency range (above 10 Hz) implies that the radiating BBH was driven to merge by external influences: for example, as part of a field triple [e.g., 14], in a densely-populated star cluster [e.g., 15], or in the accretion disk of a supermassive black hole [e.g., 16].
Search pipelines based on matched-filtering methods use quasi-circular waveform templates, motivated by the expected efficient circularisation via GW emission of compact binary orbits during the late stages of their evolution [17]. However, binaries formed through dynamical processes in dense stellar environments [18; 19; 20; 21; 22; 23; 24] or through Kozai-Lidov processes [25; 26] in field triples [27; 28], may be observed in ground based detectors such as advanced LIGO [29; 30; 31] and Virgo [32; 33] with residual eccentricities \(\gtrsim 0.1\) [e.g., 14; 15; 16; 34; 35; 36; 37]. While pipelines employing quasi-circular templates should be able to detect the majority of systems with eccentricities \(e_{10}\lesssim 0.1\) at a GW frequency of 10 Hz [38] if observed with current LIGO-Virgo detectors, binaries with larger eccentricities would require constructing template banks for matched-filter searches including the effect of eccentricity [e.g., 39; 37]. Moreover, the presence of even small eccentricities (\(e_{10}\sim 0.01-0.05\)) can induce bias in extracting source properties [40; 41; 42; 43; 44; 45; 46]. As existing detectors upgrade to their LIGO A+/Voyager
[47; 48] configurations, improving their low-frequency sensitivity, neglecting eccentricity in detection and parameter estimation pipelines may lead to incorrect inference of source properties and/or failure to identify the presence of eccentric signals in data. This problem is likely to be exacerbated in detections made with next-generation ground-based instruments such as Cosmic Explorer [49; 50; 51] and the Einstein Telescope [52; 53; 54], since their sensitivity to frequencies \(\sim 1\) Hz and above should enable them to frequently observe systems with detectable eccentricities [35; 55].
In searches for compact binary coalescence signals, computation time and the availability of waveform models play a crucial role. In recent years, there have been some targeted searches for eccentric systems [56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67], and upper limits have been provided in the absence of detection: with data from the first two observing runs of the LIGO and Virgo detector network, Nitz _et al._ (2019) [68] provided 90% credible upper limits for binary neutron stars of \(\sim 1700\) mergers Gpc\({}^{-3}\) yr\({}^{-1}\) for eccentricities \(\leq 0.43\), with the dominant-mode frequency at 10 Hz. For sub-solar mass binaries, Nitz _et al._ (2021) [69] provided 90% credible upper limits for \(0.5-0.5\) (\(1.0-1.0\)) M\({}_{\odot}\) binaries of 7100 (1200) Gpc\({}^{-3}\) yr\({}^{-1}\).
While inspiral-only models for GW signals from eccentric compact binary systems agree sufficiently well with NR simulations of inspirals, and are rapid enough to generate for use in direct parameter estimation via Bayesian inference [70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83], their use may be limited to low-mass (typically \(\lesssim 25M_{\odot}\)) eccentric binaries [84] due to the absence of the merger and ringdown in the signal model. Waveform models containing the inspiral, merger, and ringdown (IMR) are under development and/or available for use [e.g., 85; 86; 87; 88; 89; 90; 91; 92]; these are generally slower to generate than their quasi-circular counterparts, and Bayesian inference using these models has usually required reduction of accuracy conditions [e.g., 93], using likelihood reweighting techniques [e.g., 94], or utilising highly computationally expensive parallel inference on supercomputer clusters [e.g., 7; 95].
Methods currently in use for eccentric _searches_ [96; 97; 98; 99; 35], with little or no dependence on signal model, are sensitive to high masses (ideally \(\gtrsim 70M_{\odot}\)) [63; 64]. Nevertheless, one can infer the presence of orbital eccentricity in signals found by standard searches tuned to quasi-circular BBH by employing available eccentric waveform models via Bayesian parameter estimation [e.g., 100; 101; 102; 103; 104; 105; 6; 7; 11] or by comparing the data directly to numerical relativity waveform simulations of GW from eccentric BBH [13]. A caveat to all of the Bayesian inference studies mentioned above is the absence of spin-induced precession [104] in the waveform model employed: since both eccentricity and misaligned spins introduce modulations to the gravitational waveform [11; 93], it may be critical to account for both spin precession and eccentricity while aiming to measure either or both of the two effects, particularly for GWs from high-mass BBHs [102; 42].1
Footnote 1: While the model in [80] does include the effect of spin-precession and eccentricity, it is an inspiral-only prescription and thus might only be suitable for long inspiral signals such as those observed by LISA [105].
Even though the currently-available eccentric waveform models may not include the effect of spin-induced precession, these are still useful for studying systematic errors incurred due to the neglect of orbital eccentricity in waveform models used in LVK catalogues [e.g., 5]. Eccentric versions of the effective-one-body (EOB) waveforms [106] including higher modes [107; 8; 108] and an eccentric numerical relativity (NR) surrogate model [89] are available. In this work, we focus on assessing the impact of orbital eccentricity in the dominant quadrupole mode (\(\ell=2,|m|=2\)) of the waveform, which we argue is a reasonable representation of the GW emission for the majority of observed source types, which are equal-mass and low-spin.2 A summary of the investigations presented here is included below.
Footnote 2: While this may not be true generically, the fact that only two of the events observed so far show the presence of a higher order mode [108; 109] and that most detections are consistent with a zero-effective spin scenario [3] favours the assumption.
### Summary of analysis and results
We assess the impact of employing a circular template bank for GW searches when the source population may exhibit eccentricity. To achieve this, we simulate diverse source populations, covering the parameters of binary black hole masses and eccentricity. We determine the detection efficiency of the population by comparing the inherent signal strength to the values obtained from the circular template bank. We also investigate the regions in the parameter space where the largest loss in signal strength occurs due to the difference between injection and recovery waveform model or due to non-inclusion of eccentricity.
In order to explore the effect of eccentricity on various inferred parameters of a BBH system, we perform parameter estimation (PE) on injected non-spinning and aligned-spin eccentric signals generated using numerical relativity (NR) codes, and recover them using different waveform models with different combinations of spins and eccentricity. Our aim is to observe and quantify the biases incurred due to the absence of eccentricity in the recovery waveform when analyzing eccentric signals, and to verify that those biases can be corrected to a certain extent by using the currently available eccentric waveforms.
We perform multiple sets of zero-noise NR hybrid and waveform approximant injections which include synthetic GWs consistent with non-spinning and
aligned-spin BBHs in eccentric orbits, and recover these injections with a variety of either quasi-circular or eccentric waveform models including no spin, aligned spins, or misaligned (precessing) spins.3 Due to a lack of available waveform models, we are unable to investigate injection recovery using models with both spin-precession and eccentricity. We observe that the state-of-the-art inspiral-merger-ringdown (IMR) quasi-circular waveforms of the PhenomX [110; 111] family do not recover the true chirp mass of an eccentric signal. Regardless of the spin configuration (non-spinning, spin-aligned, or spin-precessing) taken for these quasi-circular recovery waveforms, the chirp-mass and spin posteriors show a similar bias when the injected signal is non-spinning and eccentric, relative to the unbiased recovery of the quasi-circular injection. Therefore, we conclude that for low-mass (total mass of 35 M\({}_{\odot}\)), long-inspiral, non-spinning, eccentric binary systems, parameter estimation with a precessing-spin waveform model **will not lead to a false positive value of precession**. This supports the conclusions drawn in Romero-Shaw _et al._[102], where the authors demonstrate that eccentricity and spin-precession are distinguishable for signals with more than a few cycles in-band.
Footnote 3: We have also explored the simulated noise case for one of the injection sets. Those results are discussed in Appendix D.
Further, analyzing these same signals using a computationally efficient inspiral-only eccentric waveform results in a significant reduction of the biases in the posteriors on the intrinsic parameters of the binary, leading to the recovery of the true values within the 90% credible bounds. This implies that for GWs from low-mass and aligned- or low-spin BBHs, **inspiral-only waveforms that are readily available within the lalsuite framework are adequate for accurate recovery of the source parameters.** We also see a clear correlation between the chirp mass and eccentricity posteriors for both the non-spinning and aligned-spin eccentric injections, in agreement with previous studies [38; 42; 112], in addition to a mild correlation of eccentricity with the effective inspiral spin parameter \(\chi_{\text{eff}}\) (see Eq. 8) when the injection is aligned-spin and eccentric, consistent with the correlations seen in O'Shea and Kumar [93].
While an inspiral-only eccentric waveform model is better for analyzing eccentric signals than a full IMR quasi-circular model, it is likely that biases can be further reduced and posteriors be better constrained if an IMR eccentric waveform model is used for recovery. However, our results demonstrate that even eccentric waveform models with limited physics (no merger or ringdown, no spin-precession, no higher modes) can reduce errors in the inference of BBH parameters, and may therefore be a useful stepping stone towards analysis with full IMR models including eccentricity, spin-precession, and higher modes.
This paper is organized as follows. In Section II, we describe the methodology used for quantifying the sensitivity reduction of a GW search to eccentric signals, along with other metrics such as fitting factor and signal recovery fraction. We present our results for the detectability of eccentric systems in Section II.1. In the second half of the paper, we focus on parameter estimation of eccentric binaries, starting in Section III. We start with non-spinning eccentric systems (Section III.1) and then move to aligned spin eccentric systems in Section III.2. Finally, in Section IV, we include a comprehensive summary of our findings and future directions.
## II Blind spots: Reduced GW search sensitivity to eccentric binaries
In this section, we investigate the fraction of events that might be missed by searches due to neglecting eccentricity in the template banks used for matched-filter based searches such as PyCBC [113] or GSTLAL [114]. The GW data, represented by the time series \(s(t)\), contains noise \(n(t)\) and may contain a signal \(g(t)\). The signal model \(h(t,\theta_{i})\) depends on parameters \(\theta_{i}\), which define the intrinsic properties of the binary from which the GW emanates. Modeled searches perform matched filtering between signal templates \(\tilde{h}\) and the detector data \(\tilde{s}\) in the Fourier domain. The correlation of the data \(\tilde{s}(f)\) with the signal model \(\tilde{h}(f)\), weighted by the power spectral density (PSD) \(S_{n}(f)\) of the noise, is given by
\[<s|h>=4\,\mathrm{Re}\int_{0}^{\infty}\frac{\tilde{s}(f)\tilde{h}^{*}(f)}{S_{n}(f)}\,df. \tag{1}\]
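A bare-bones discrete implementation of the inner product (1) on a uniform frequency grid might look as follows; production pipelines such as PyCBC additionally handle data conditioning, PSD estimation, and maximization over time and phase, all of which this sketch omits.

```python
import numpy as np

def inner_product(s_tilde, h_tilde, psd, df, f_low=10.0):
    # Noise-weighted inner product of Eq. (1): 4 Re sum s(f) h*(f) / S_n(f) df,
    # restricted to the sensitive band above the low-frequency cutoff.
    freqs = np.arange(len(s_tilde)) * df
    band = (freqs >= f_low) & (psd > 0)
    integrand = s_tilde[band] * np.conj(h_tilde[band]) / psd[band]
    return 4.0 * np.real(np.sum(integrand)) * df
```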
The GW searches use discrete template banks. Such searches will always miss a fraction of signals because i) templates may not be accurate representations of the real signal, especially if these signals include additional physics (e.g., eccentricity, misaligned spins, higher modes) not included in the template bank, and ii) the bank is discrete. In order to quantify the loss of sensitivity of the search, we define the match between the template \(h(\theta_{i},\Phi)\) and a signal \(g\) in terms of the overlap between them as:
\[m(g,h(\theta_{i}))=\max(<g|h(\theta_{i},\Phi)>), \tag{2}\]
where \(h(\theta_{i},\Phi)\) is a template with intrinsic parameters \(\theta_{i}\) and extrinsic parameters \(\Phi\). The RHS in equation 2 is maximised over all the extrinsic parameters. The fitting factor \(FF(g)\), for a signal \(g\), is defined as [115]:
\[FF(g)=\max(m(g,h(\theta_{i}))), \tag{3}\]
where the RHS of equation 3 is maximized over all the templates \(h(\theta_{i})\).
In order to obtain the fraction of recovered signals relative to an optimal search (for which FF = 1), we use the metric described in [116], which takes into account the intrinsic SNR of the signal to calculate the signal recovery fraction (SRF), defined as [115]:
\[SRF\equiv\frac{\sum_{i=0}^{n_{s}-1}\mathrm{FF}_{\mathrm{TB}}^{3}(s_{i})\sigma^{ 3}(s_{i})}{\sum_{i=0}^{n_{s}-1}\sigma^{3}(s_{i})}, \tag{4}\]
where \(\mathrm{FF}_{\mathrm{TB}}\) is the fitting factor for a volumetric distribution of \(n_{s}\) sources using a template bank and \(\sigma(s_{i})\) is the intrinsic loudness of each signal \(s_{i}\). We calculate the SRF for a given distribution of sources and a template bank.
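Given arrays of fitting factors and intrinsic loudnesses for a set of injections, Eq. (4) reduces to a few lines, e.g.:

```python
import numpy as np

def signal_recovery_fraction(ff, sigma):
    # Eq. (4): FF^3-weighted fraction of recovered signals, with weights
    # sigma^3 reflecting the sensitive volume of each injection.
    ff, sigma = np.asarray(ff), np.asarray(sigma)
    return np.sum(ff ** 3 * sigma ** 3) / np.sum(sigma ** 3)

print(signal_recovery_fraction([0.99, 0.92, 0.97], [10.0, 25.0, 8.0]))
```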
### Reduced detectability of eccentric systems
We use a reference population distributed uniformly in source frame masses with \(m_{1,2}^{\mathrm{source}}\in[5,50]\) M\({}_{\odot}\) and in redshift up to 3. For the eccentricity parameter, a log-uniform distribution is used, while the sources are distributed uniformly over the sky. In this section, we consider a non-spinning population to quantify the effects of the eccentricity parameter. The population is generated with three different waveform models:
* **No eccentricity distribution:** We use the IMRPhenomD[117, 118] waveform model to inject non-spinning, quasi-circular signals and use a template bank with the same waveform for recovery. This serves as the optimal search and we expect the maximal recovery of the injected signals.
* **With eccentricity distribution:** For these sets of injections, we use two waveform models: we generate one set with TaylorF2Ecc[119, 74], and the other with EccentricFD[120]. We use the same component masses as above, and a log-uniform eccentricity distribution in the range \(\log_{10}e_{10}\in[-7,-0.3]\) defined at 10 Hz. We choose this upper limit so that we stay within the regions of validity of the waveform models.
For the recovery, we use the 'quasi-circular' template bank (non-spinning) constructed with the waveform model IMRPhenomD. We use stochastic placement algorithms [121, 122] implemented in PyCBC to generate a template bank for component masses in the range \([3,200]\)\(M_{\odot}\), using a minimal match criterion of 0.98. We use this template bank to quantify the fraction of lost signals if the underlying population has some intrinsic eccentricity distribution. We calculate the optimal SNR for each injection using the template bank and then estimate the \(FF\) for the set of injections. We use a low frequency cutoff of 10 Hz and detector sensitivities for i) advanced LIGO [123], and ii) \(A+\) design sensitivity [124]. In figure 1, we show the fitting
Figure 1: Cumulative fraction of events above a given Fitting Factor \(FF\) for various populations, distributed uniformly in masses and log-uniformly in eccentricity (measured at 10 Hz) with the match calculated against the standard template bank, shown here for detector sensitivities: Adv-LIGO (left) and \(A+\) (right). The grey curves show the fraction recovered for the reference population with no eccentricity, while the green and red curves show the fraction recovered for the eccentric populations represented by the TaylorF2Ecc and EccentricFD models, respectively. Three vertical dashed black lines show the fitting factor values of 0.9, 0.95, and 0.98, increasing in value from the left. These plots show that if we use the quasi-circular template bank to search for a population that contains a log-uniform distribution of eccentricities, we fail to detect a higher fraction of signals in searches. E.g., in the left panel (Adv LIGO sensitivity): the eccentric population constructed with the TaylorF2Ecc (EccentricFD) waveform has \(\approx\) 1 percent (\(\approx\) 2.2 percent) of events with fitting factor less than 0.95; the corresponding fraction for the baseline model IMRPhenomD is \(\approx\) 0.4 percent. In the right panel (\(A+\)): the eccentric population constructed with the TaylorF2Ecc (EccentricFD) waveform has \(\approx\) 0.6 percent (\(\approx\) 1.6 percent) of events with \(FF\) less than 0.95; the corresponding fraction for the baseline model IMRPhenomD is \(\approx\) 0.1 percent.
factors for all three injection sets considered above. As expected, the quasi-circular injection set generated with IMRPhenomD and recovered with the quasi-circular template bank gives us the maximum \(FF\). The upper cut-off frequency, for each system, is chosen to be the frequency corresponding to the innermost stable circular orbit (\(f_{\text{ISCO}}\)) for a test particle orbiting a Schwarzschild black hole. We use a low-frequency cutoff of 10 Hz to calculate the match via equation 2. The loss of \(FF\) is visible with both eccentric injection sets. In figure 2, we explore the parameter regime where the reduction in \(FF\) is maximum. As expected, larger eccentricity values (\(e_{10}>0.01\)) give us lower \(FF\). We also notice that the combination of high mass ratio (\(q=m_{1}/m_{2};m_{1}>m_{2}\)) and high eccentricity gives us the maximum loss in \(FF\). We propose that more extreme mass ratios lead to a larger reduction in \(FF\) for the same value of \(e_{10}\) because binaries with more extreme \(q\) have longer GW signals in-band, and therefore have more inspiral cycles over which the mismatch due to eccentricity accumulates.
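For reference, the Schwarzschild ISCO cutoff applied above corresponds to a dominant-mode GW frequency \(f_{\text{ISCO}}=c^{3}/(6^{3/2}\pi GM)\), which is easily evaluated:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30  # SI units

def f_isco(total_mass_solar):
    # GW frequency of the Schwarzschild ISCO, roughly 4.4 kHz * (Msun / M).
    return c ** 3 / (6 ** 1.5 * np.pi * G * total_mass_solar * Msun)

print(f_isco(35.0))  # ~126 Hz for a 35 Msun binary
```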
Figures 1 and 2 show that for a given population of eccentric signals, there will be loss of \(FF\) for signals with eccentricity \(e_{10}>0.01\). This trend becomes more prominent for more extreme values of mass ratio \(q\). The extent of the overall search volume loss depends on the proportion of high-eccentricity signals in the population. In order to include eccentricity in GW searches, we require i) efficient eccentric waveform models and ii) a low computational cost in comparison to the gain in the search volume.
The SRF depends on the intrinsic source population under consideration.
Figure 2: The Fitting Factor (FF) varying with mass ratio \(q\) and \(\log_{10}\left(e\right)\) for a population uniform in component masses and log-uniform in eccentricity \(e_{10}\) measured at 10 Hz. The top row represents the population generated with TaylorF2Ecc and the bottom row represents the population generated with EccentricFD. For all plots, we use the recovery template bank generated with IMRPhenomD. The left column shows results assuming the detector sensitivity of advLIGO and the right column shows results assuming the detector sensitivity of \(A+\). The maximal loss in fitting factor occurs in the high mass ratio (\(q=m_{1}/m_{2}\)) and high eccentricity regimes, with high eccentricity values playing the dominant role. We use only the inspiral part of the waveform, up to the frequency corresponding to the innermost stable circular orbit (ISCO), to calculate the \(FF\).
eccentricity (\(>0.01\)) is large, we expect to fail to recover a higher fraction of them. In order to estimate the SRF, we choose a network of three detectors: HLV, with two LIGO detectors H and L at Hanford and Livingston respectively, and the Virgo (V) detector in Italy. For the full population described above, we estimate the SRF to be 0.992 for the optimal search (injection and recovery done with IMRPhenomD) for both the Advanced LIGO and A+ detector sensitivities. We kept the same sensitivity for Virgo [125] in both networks. With eccentric injections using EccentricFD, the SRF for the full population is estimated to be 0.989 (0.987) for Advanced LIGO (A+) detector sensitivity. For another set of eccentric injections generated using TaylorF2Ecc, the estimated SRF is 0.989 (0.987) for Advanced LIGO (A+) detector sensitivity. This indicates that the presence of eccentricity in the GW signal reduces the overall SRF if non-eccentric recovery models are used. Moreover, if the population has a significant number of events from the region of parameter space responsible for the greatest loss in FF, the SRF is further reduced, indicating a failure to recover a comparatively large fraction of events in that region. For a targeted region in parameter space of non-negligible eccentricity (\(e_{10}>0.01\)) and high mass ratio (\(q>3\)), we summarize the results in Table 1. In this targeted region, we can get a value of the SRF as low as \(\sim\)0.918 compared to the SRF of \(\sim\)0.99 for the optimal pipeline.
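For reference, a minimal sketch of how an SRF-style figure of merit can be tallied for a set of injections, assuming the usual sensitive-volume weighting in which each source contributes its optimal SNR cubed, degraded by the cube of its fitting factor (the precise definition used here is equation 4; the toy population below is ours):

```
import numpy as np

def signal_recovery_fraction(fitting_factors, optimal_snrs):
    # Volume-weighted recovered fraction: each source is weighted by its
    # optimal SNR cubed and degraded by its fitting factor cubed.
    ff = np.asarray(fitting_factors)
    w = np.asarray(optimal_snrs) ** 3
    return np.sum(ff ** 3 * w) / np.sum(w)

# Toy population: 10% of sources lose 5% of their SNR to unmodelled eccentricity.
rng = np.random.default_rng(0)
ff = np.where(rng.random(10_000) < 0.1, 0.95, 1.0)
snr = rng.uniform(8.0, 40.0, size=10_000)
print(f"SRF = {signal_recovery_fraction(ff, snr):.4f}")
```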
To gain insights into a realistic population, we create another injection set. This set incorporates a power-law distribution of source frame masses, consistent with GWTC-3 population analysis [126], and an eccentricity distribution drawn from simulations outlined in [37, 127]. We limit the source frame mass distribution to the range [5, 50]\(M_{\odot}\), aligning with the template banks we generated. We use the same HLV detector network with two sensitivities for LIGO detectors. For this injection set, the SRF for the baseline model was calculated at \(\sim\)0.986 for both Advanced LIGO and A+ sensitivity. Focusing on targeted regions (\(e_{10}>0.01,q>3\)), the SRF is found to be \(\sim\)0.944 (0.946) for EccentricFD (TaylorF2Ecc) injections with Advanced LIGO design sensitivity, and \(\sim\)0.927 (0.928) for EccentricFD (TaylorF2Ecc) injections with A+ design sensitivity.
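A sketch of how such an injection set can be drawn is given below; the power-law index and the eccentricity range are placeholder values, not the exact distributions of Refs. [126, 37, 127]:

```
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Source-frame primary masses from a power law p(m) ~ m^-alpha on [5, 50] Msun,
# sampled by inverting the CDF; alpha = 2.3 is a placeholder index.
alpha, m_lo, m_hi = 2.3, 5.0, 50.0
u = rng.random(n)
m1 = (m_lo**(1 - alpha) + u * (m_hi**(1 - alpha) - m_lo**(1 - alpha)))**(1 / (1 - alpha))
m2 = rng.uniform(m_lo, m1)              # secondary mass uniform below the primary

# Log-uniform eccentricity at 10 Hz over an assumed range.
e10 = 10.0 ** rng.uniform(-4, np.log10(0.2), n)

q = m1 / m2
targeted = (e10 > 0.01) & (q > 3)       # region with the largest FF loss
print(f"fraction of injections in the targeted region: {targeted.mean():.3f}")
```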
While we might detect eccentric signals via either quasi-circular template-based searches _or_ unmodelled searches, thereby increasing the fraction of the underlying eccentric population that enters our catalogues, bias could be introduced in the subsequently inferred parameters. For this reason, quantifying the bias introduced by analysing eccentric signals with quasi-circular waveform models is necessary. We turn our attention to this in the following section.
## III Biases: Mischaracterizing Eccentric Binaries with Parameter Estimation
In this section, we present results from injection analyses. We assess waveform systematics due to the neglect of eccentricity in PE studies, employing quasi-circular waveform models to recover injections into detectors with zero noise (i.e., the detector response to the signal is accounted for, but no additional Gaussian noise is added to the power spectral density representing the detector's sensitivity). We also perform injections into simulated Gaussian noise and analyze those injections too, for completeness; the results, which are consistent with those obtained in zero noise, are presented in Appendix D. We perform two sets of injections: one with non-spinning simulations based on NR [85, 90], and one using an EOB-based IMR signal model (TEOBResumS) for aligned-spin injections [128, 129, 130, 131, 132, 133, 134]. For non-spinning injections, we analyse quasi-circular and eccentric signals with mass ratios \(q=(m_{1}/m_{2})=(1,2,3)\) and a fixed total mass of \(M=35M_{\odot}\). For the \(q=1\) injection, since the mass ratio prior is restricted to \(q\geq 1\), the posterior almost entirely lies above the injected value, skewing the posteriors for other correlated parameters (this is discussed in detail in the following sections). Hence, for the aligned-spin eccentric injections, we choose \(q=(1.25,2,3)\) with the same total mass, and drop the \(q=1\) case. We employ state-of-the-art quasi-circular waveform models (with and without spins) to recover the injections via Bayesian parameter estimation (PE). We also perform PE with an approximate inspiral eccentric waveform TaylorF2Ecc[74, 119]. The approximate eccentric model used here for PE does not include contributions
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Injection Waveform** & **SRF (Full Range)** & **SRF (\(\mathbf{e_{10}>0.01}\))** & **SRF (\(\mathbf{q>3}\))** & **SRF (\(\mathbf{e_{10}>0.01}\) \& \(\mathbf{q>3}\))** \\ \hline
IMRPhenomD & 0.992 (0.992) & - & 0.986 (0.997) & - \\ \hline
TaylorF2Ecc & 0.989 (0.987) & 0.973 (0.97) & 0.923 (0.978) & 0.923 (0.948) \\ \hline
EccentricFD & 0.989 (0.987) & 0.969 (0.963) & 0.923 (0.979) & 0.918 (0.944) \\ \hline
\end{tabular}
\end{table}
Table 1: The SRF, as described in equation 4, is calculated for the injection sets described in the text to quantify the reduced detectability of eccentric signals when a quasi-circular template bank is used. For recovery, we use a template bank designed for non-eccentric searches using the IMRPhenomD waveform model. For each column, two numbers are shown: one for the Advanced LIGO search sensitivity, with the numbers in brackets quoted for the A+ search sensitivity. The SRF for the optimal search (injection with IMRPhenomD) indicates the maximum achievable value. For eccentric injections, the loss in the SRF is largest in the region of parameter space (\(e_{10}>0.01,q>3\)) that is affected most by the loss in FF.
from spin corrections associated with eccentricity and is based on a Taylor approximant [135; 136] different from the one used in TEOBResumS. We assume that our sources are at a distance of 410 Mpc and inclined at an arbitrary angle of \(30^{\circ}\) to the line of sight. The right-ascension (\(\alpha\)), declination (\(\delta\)), and polarization (\(\psi\)) angles are chosen arbitrarily with the values \(\sim\,164^{\circ}\), \(60^{\circ}\), and \(60^{\circ}\) respectively, and the geocent time (\(t_{\rm gps}\)) is taken to be 1137283217 s. Since the SNR of a GW signal depends on extrinsic parameters in addition to intrinsic parameters like mass and eccentricity, different extrinsic parameters may lead to different SNRs, changing the widths of the posteriors presented here.
The Bayesian posterior probability for a parameter \(\vec{\theta}\), given the data \(\vec{s}\) and a GW model \(h\), is given by
\[p(\vec{\theta}|\vec{s},h)=\frac{p(\vec{s}|\vec{\theta},h)\,p(\vec{\theta}|h)}{p(\vec{s}|h)}\,, \tag{5}\]
where \(p(\vec{s}|\vec{\theta},h)\) represents the likelihood, \(p(\vec{\theta}|h)\) is the prior, and \(p(\vec{s}|h)\) represents the evidence. We also calculate Bayes factors between recoveries with eccentric and quasi-circular models, defined as:
\[\mathcal{B}_{\rm E/C}=\frac{p(\vec{s}|h_{E})}{p(\vec{s}|h_{C})} \tag{6}\]
where \(E\) and \(C\) correspond to eccentric and quasi-circular recoveries respectively.4 To estimate parameters, we use the PyCBC Inference Toolkit[137] and explore the parameter space that includes chirp mass (\(\mathcal{M}\)), mass ratio (\(q\)), time of coalescence (\(t_{c}\)), luminosity distance (\(d_{L}\)), phase of coalescence (\(\phi_{c}\)), inclination angle (\(\iota\)), right ascension (\(\alpha\)), and declination (\(\delta\)). For aligned-spin recoveries, we use two additional parameters corresponding to the \(z\)-components of the spin vectors, _viz._ \(\chi_{\rm 1z}\) and \(\chi_{\rm 2z}\). For recoveries with spin-precession, we use an isotropic spin distribution, sampling the six spin components in spherical polar coordinates, _viz._ the spin magnitudes (\(a_{i}\)) and the spin angles (\(S_{i}^{\Theta}\), \(S_{i}^{\Phi}\)).5 For eccentric recovery, we include an additional eccentricity (\(e\)) parameter in the parameter space.6 In our analysis, we marginalise over the polarization angle. The following subsections provide details of the specific injections, as well as the various recovery-waveform spin settings with which these injections have been recovered. While discussing the results in the following subsections, we use the term "recovery" to indicate a result in which the 90% credible interval of the posterior includes the injected value, and the systematic bias (the difference between the median value and the injected value) in the posterior is less than the width of the posterior (at 90% confidence). We judge that a result shows a significant bias if the injected value lies completely outside the 90% credible interval of the posterior. As indicated earlier, these biases depend on SNRs which, in this study, fall in the range of typical SNRs observed in the GW event catalogs. We use the HLV network with the design sensitivities of the Advanced LIGO [123] and Virgo [125] detectors to perform all the parameter estimation analyses shown here.
Footnote 4: For more information about the method, see Ref. [137].
Footnote 5: where \(i=[1,2]\) corresponds to the binary components, and \(\Theta\) and \(\Phi\) indicate the polar and azimuthal angles respectively used in spherical polar coordinate system.
Footnote 6: Refer to Table 3 in Appendix A for complete information on priors.
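The recovery criterion described above can be stated compactly in code; the sketch below operates on 1-D posterior samples, and the function name and interface are ours rather than part of PyCBC Inference:

```
import numpy as np

def is_recovered(samples, injected, ci=0.90):
    # "Recovery": the injected value lies inside the 90% credible interval
    # AND the systematic bias (median minus injected value) is smaller than
    # the width of that interval.
    lo, hi = np.quantile(samples, [(1 - ci) / 2, (1 + ci) / 2])
    inside = lo <= injected <= hi
    small_bias = abs(np.median(samples) - injected) < (hi - lo)
    return inside and small_bias

# Example: a mildly biased Gaussian posterior still counts as a recovery.
rng = np.random.default_rng(1)
print(is_recovered(rng.normal(15.2, 0.2, 50_000), injected=15.0))
```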
### Non-spinning, eccentric injections
We perform zero-noise injections using non-spinning, quasi-circular as well as quasi-elliptical GW waveforms for BBH mergers of total mass \(M=35\) M\({}_{\odot}\) with mass ratios \(q=(1,2,3)\). These injections include the dominant modes (\(\ell=2,|m|=2\)) of the eccentric and quasi-circular IMR hybrids constructed in Ref. [90], in addition to a quasi-circular SXS simulation (SXS:BBH:1132). Details of the simulations used in this study, including their eccentricity at the reference frequency of 20 Hz, are shown in Table 2. To calculate the eccentricity at a GW frequency of 20 Hz, we have used the reference value (\(e_{0}\)) from Table 1 of Ref. [90], which is quoted for a dimensionless frequency of \(x=0.045\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**S. No.** & **Injection Simulation ID / Waveform** & \(q\) & \(e_{20}\) & \(\chi_{\rm eff}\) \\ \hline
1 & SXS:BBH:1132 & 1 & 0 & - \\ \hline
2 & HYB:SXS:BBH:1355 & 1 & 0.104 & - \\ \hline
3 & HYB:SXS:BBH:1167 & 2 & 0.0 & - \\ \hline
4 & HYB:SXS:BBH:1364 & 2 & 0.104 & - \\ \hline
5 & HYB:SXS:BBH:1221 & 3 & 0.0 & - \\ \hline
6 & HYB:SXS:BBH:1371 & 3 & 0.123 & - \\ \hline
7 & TEOBResumS & 1.25 & 0.0 & 0.3 \\ \hline
8 & TEOBResumS & 1.25 & 0.1 & 0.3 \\ \hline
9 & TEOBResumS & 2 & 0.0 & 0.3 \\ \hline
10 & TEOBResumS & 2 & 0.1 & 0.3 \\ \hline
11 & TEOBResumS & 3 & 0.0 & 0.3 \\ \hline
12 & TEOBResumS & 3 & 0.1 & 0.3 \\ \hline
\end{tabular}
\end{table}
Table 2: List of the non-spinning, eccentric NR hybrid simulations (constructed in Ref. [90]) and the injections based on the aligned-spin eccentric EOB model TEOBResumS[128]. Columns include a unique hybrid ID for each simulation (SXS IDs are retained for identification with the SXS simulations [138; 139; 140; 141] used in constructing the hybrids) or the name of the waveform model used for generating injections, the mass ratio (\(q=m_{1}/m_{2}\)), the eccentricity (\(e_{20}\)) at the reference frequency of 20 Hz for a total mass of \(M=35\) M\({}_{\odot}\), and the effective spin \(\chi_{\rm eff}\) defined in Eq. (8) (only shown for spinning injections).
Using the following relation, we compute the 22-mode frequency corresponding to \(x=0.045\) and a total mass of 35 \(M_{\odot}\):
\[f_{22}=\frac{x^{3/2}}{\pi M}=17.62\ \mathrm{Hz} \tag{7}\]
where \(M\) is the total mass taken in natural units (seconds). Now that we have \(e_{0}\) at \(f_{22}\), we evolve it using Equation (4.17a) of Ref. [74] (the eccentricity evolution for the orbit-averaged frequency) to get the eccentricity value at 20 Hz (\(e_{20}\), shown in Table 2). Note that this is also the starting frequency for the likelihood calculation.
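As a quick numerical check of equation 7 (a sketch; the solar-mass-in-seconds constant is the standard \(GM_{\odot}/c^{3}\)):

```
import math

MSUN_S = 4.925491e-6      # G * Msun / c^3 in seconds
M = 35 * MSUN_S           # total mass in natural units (seconds)
x = 0.045                 # dimensionless frequency parameter

f22 = x ** 1.5 / (math.pi * M)
print(f"f22 = {f22:.2f} Hz")   # ~17.62 Hz, matching equation 7
```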
#### iii.2.1 Quasi-circular, IMR recovery
Here we explore the bias introduced in the source parameters recovered via parameter estimation (PE) of GW events when eccentricity is ignored, i.e. we inject an eccentric signal but do not use eccentric waveforms for recovery. For this exercise, we use the quasi-circular IMR phenomenological waveform models (IMRPhenomXAS[110] and IMRPhenomXP[111]). In figures 3 and 4, we plot the posterior probability distributions on chirp mass \(\mathcal{M}\) for non-spinning quasi-circular and eccentric injections recovered using IMR quasi-circular waveform models. The injection is recovered using three spin setting configurations: non-spinning (NS), in which we restrict all spins to 0; aligned-spin (AS), in which we restrict spin-tilt angles to 0 and allow spin magnitudes to range between 0 and 0.99; and precessing-spin (PS), in which we allow all spin parameters to vary. By using different spin configurations, we investigate whether spurious measurements of spin occur for non-spinning eccentric injections. The spin precession parameter \(\chi_{p}\) is plotted in Fig. 5 for PS recovery. In addition to studying biases in spin parameters due to the presence of eccentricity, we also compare the effect of spin settings on the recovery of chirp mass in the presence of eccentricity, since eccentricity and chirp mass are known to be correlated parameters (see for instance Favata _et al._[38]). We use the waveform IMRPhenomXAS for non-spinning and aligned-spin recoveries, and IMRPhenomXP for the precessing-spin recovery. We display the corresponding corner plots in figures 6 and 13 for chirp mass (\(\mathcal{M}\)), effective spin parameter (\(\chi_{\mathrm{eff}}\)), and spin precession parameter (\(\chi_{p}\)).
Figure 3: Chirp mass posteriors for injections with mass ratios (\(q=2,3\)). The rows indicate the nature of the injections, with the top panel showing results for the quasi-circular injection, and the bottom panel showing results for the eccentric injection (\(e_{20}\sim 0.1\)). The colours correspond to different spin settings used during recovery. Recovery is performed using quasi-circular waveforms in all cases: IMRPhenomXAS is used for the non-spinning (red) and aligned-spin (green) recoveries, and IMRPhenomXP is used for recovery allowing precessing spins (grey). The dashed vertical coloured lines of the same colours denote the 90% credible interval of the corresponding recoveries, the solid black line shows the injected value of \(\mathcal{M}\), and the dotted black curve indicates the prior, which is the same for all recoveries. The injected value is recovered within the 90% credible interval for the quasi-circular injections, while it is not recovered for the eccentric injections. The slight shift of the posteriors for the quasi-circular injection in the \(q=3\) case may be attributed to systematic differences between the waveform models used for injection and recovery. The matched filter SNRs for \(q=2\) and \(q=3\) are 38 and 33 respectively.
Looking at figures 3, 4, 5, 6, and 13, we make the following observations:
* For all the values of mass ratio that we consider, the recovery of eccentric injections with quasi-circular waveform models results in significant bias of the chirp mass posterior, such that the injected value falls outside the 90% credible interval.
* The spin settings (non-spinning, aligned-spin, or precessing-spin) chosen for recovery do not affect the magnitude of the shift in the chirp mass posterior for mass ratios 2 and 3; in other words, the bias in the recovered \(\mathcal{M}\) is the same regardless of assumptions about spin magnitude and spin tilt.
* For \(q=1\), the shift in the chirp mass posteriors for different spin configurations, seen in figure 4, can partly be explained by the prior railing of the mass ratio (\(q\)), which leads to almost the entire posterior volume lying above the injected value. This can lead to prior railing in the component masses and other correlated parameters. This is discussed in detail in Appendix B.
* The effective spin (\(\chi_{\text{eff}}\)) posterior is largely consistent with zero for both quasi-circular and eccentric injections. The same trend is seen for both the \(q=2\) and \(q=3\) cases. The slight deviation of \(\chi_{\text{eff}}\) from 0 for the \(q=1\) case can be explained by looking at the correlation between the chirp mass and \(\chi_{\text{eff}}\) (see Fig. 13 in Appendix B).
* The posterior for \(\chi_{\text{p}}\) peaks towards 0 for the \(q=2\) and \(q=3\) cases, showing little to no evidence of spin-precession in the signal. For \(q=1\), the \(\chi_{p}\) posterior is uninformative. Both the uninformative posterior and posteriors that peak towards 0 support the conclusion that **eccentricity is not confused for spin-induced precession in long-duration signals from low-mass BBH**.
For the \(q=3\) case, the slight deviation of posteriors from the injected value for the quasi-circular injection (top left panel of Figure 3) is most likely due to systematic differences between the injection and the recovery waveform. Even at a reasonably modest eccentricity of \(e_{20}\sim 0.1\), the chirp mass posterior is shifted enough that the injected value is not recovered within 90% confidence. Further, for a non-spinning eccentric system with moderate total mass (\(M=35M_{\odot}\)), the presence of eccentricity in the signal is not mimicked by a spin-precessing quasi-circular waveform. This is consistent with the findings of Romero-Shaw _et al._[102], who find that eccentricity and spin-precession may be distinguished in signals with long inspirals coming from low-mass BBH due to the signal duration exceeding the timescale upon which modulations induced by eccentricity differ significantly from those induced by spin-precession. The fact that the spin posteriors are similar for both eccentric and quasi-circular injections also implies that a lack of spin can be confidently identified in low-mass systems regardless of their eccentricity.
#### iv.2.2 Eccentric, inspiral-only recovery
We analyze the same injections as in the previous section with the eccentric inspiral-only waveform model TaylorF2Ecc. We present the results in figures 7, 8, and 9. We also show results obtained when the recovery is performed under the constraint that \(e_{20}=0\) with the same waveform, in order to account for any biases arising from systematic differences between waveform model families and/or the lack of merger and ringdown in TaylorF2Ecc. Since TaylorF2Ecc is an inspiral-only model, we also truncate the likelihood integration for the recovery with the quasi-circular waveform model to the same frequency as the eccentric recovery, for a fair comparison and to obtain comparable SNRs. This frequency has been chosen to be 110 Hz, close to the ISCO frequency for a 35 M\({}_{\odot}\) system.
In the case of mass ratios \(q=2,3\), for eccentric injections (plotted in red in the figures), a quasi-circular recovery excludes the injection value of chirp mass from the 90% credible interval when the recovery waveform has arbitrary spin constraints, whereas when eccentricity
Figure 4: Posteriors for \(q=1\) for non-spinning quasi-circular (top) and eccentric (bottom) injections. Recovery with different waveform models is indicated with different colours, corresponding to quasi-circular waveform IMRPhenomXAS for non-spinning and aligned spin, and IMRPhenomXP for precessing spins. The matched filter SNR for these injections is 41.
is included in the recovery waveform model and is sampled over, the injected value is recovered within the 90% credible interval. We show the 1D posteriors for eccentricity in Figure 9, and the 2D contours of \(e_{20}\) with \(\mathcal{M}\) and \(q\) in figure 14 of Appendix C, which highlight the correlations between these parameters. The log-Bayes factors between eccentric and quasi-circular recoveries performed with TaylorF2Ecc are close to 0 for \(q=2\) and \(q=3\), but for \(q=1\) it is \(\sim 11\), and thus favours recovery with an eccentric template when the injected waveform is eccentric. However, for \(q=1\) (figure 8) even the quasi-circular recovery (NS) of the quasi-circular injection (grey) is not recovered within 90% confidence. This may be partially caused by waveform systematics between the injected and recovered waveforms. Additionally, as shown in figure 9, the injected \(e_{20}\) is not recovered within 90% confidence for the eccentric injection. As well as waveform systematics, this is likely because the injected \(q\) is at the lower boundary of the prior, so the entire \(q\) posterior spans higher \(q\) than injected, and (as shown in, for example, figure 14) higher \(q\) correlates with higher \(e_{20}\). To eliminate possible biases due to this prior effect, we use \(q=1.25\) instead of \(q=1\) in the following section.
### Aligned-spin, eccentric injections
We inject an aligned-spin eccentric signal using the waveform model TEOBResumS [128] and recover in four configurations: quasi-circular aligned spin (AS), eccentric aligned spin (e-AS), eccentric non-spinning (e-NS), and quasi-circular precessing spin (PS). For the first three cases (AS, e-AS, and e-NS) we have used the waveform model TaylorF2Ecc for recovery, whereas for the last case (PS), we have used IMRPhenomXP. Since TaylorF2Ecc is an inspiral-only waveform, we truncate the likelihood calculation at 110 Hz in accordance with the choice of total mass as described above. As above, the total mass is 35 M\({}_{\odot}\), and here we choose to inject signals with mass ratios \(q=1.25,2,3\). The injected spin
Figure 5: \(\chi_{p}\) posteriors for non-spinning injections with mass ratios (\(q=1,2,3\)). The colours correspond to quasi-circular (grey) and eccentric (red) injections. Recovery is performed using IMRPhenomXP. The dashed vertical coloured lines of the same colours denote the 90% credible interval of the corresponding injections, and the black dotted curve shows the prior, which is the same for both injections.
Figure 6: The corner plot for the \(q=2\) injection, showing posteriors on the chirp mass (\(\mathcal{M}\)), effective spin (\(\chi_{\text{eff}}\)), and spin precession parameter (\(\chi_{\text{p}}\)), for the precessing-spin recovery (performed using IMRPhenomXP) of both quasi-circular and eccentric injections. We show on the same plot results for the quasi-circular injection (grey), results for the eccentric injection (red), and the injected values as black lines. The histograms shown on the diagonal of the plot are 1D marginalized posteriors for the respective parameters, with vertical dashed lines denoting 90% credible intervals. The dotted curves in the 1D plots show the priors used, which are the same for the recovery of both quasi-circular and eccentric injections.
magnitudes are \(\chi_{\rm 1z}=\chi_{\rm 2z}=0.3\). Thus the effective spin parameter \(\chi_{\rm eff}\)[161; 162] is:
\[\chi_{\rm eff}=\frac{m_{1}\chi_{\rm 1z}+m_{2}\chi_{\rm 2z}}{m_{1}+m_{2}}=0.3, \tag{8}\]
where \(\chi_{\rm 1z}\) and \(\chi_{\rm 2z}\) are the components of the two spin vectors in the direction of the angular momentum vector. The eccentric injections have \(e_{20}=0.1\), consistent with injections in earlier sections, with the eccentricity defined at the orbit-averaged frequency of 20 Hz [163]. The chirp mass posteriors for the above are plotted in the form of violin plots in figure 10. For arbitrary spin settings, the quasi-circular waveform models are not able to correctly recover the chirp mass of the eccentric injection, whereas the eccentric waveform model with aligned spins does correctly recover the injected value. This once again indicates that a quasi-circular precessing waveform model can cause biases in the recovered value of \(\mathcal{M}\) if the signal is truly eccentric. Further, looking at figures 11 and 16, we note that the \(\chi_{p}\) posteriors peak at the same value for both eccentric and quasi-circular injections. This again suggests that an aligned-spin eccentric signal, when recovered with a quasi-circular precessing model, does not mimic a spin-precessing signal any more than a quasi-circular aligned-spin injection does.
We also analyze the same injection with the eccentric waveform model with zero spins (shown as e-NS in the figures). In this case, the posteriors for both the quasi-circular and eccentric injections are biased towards lower masses than injected. A positively-aligned-spin system has more cycles compared to its non-spinning counterpart, which is also the case for lower-mass systems. Thus, when an aligned-spin signal is recovered using a non-spinning waveform model, it is naturally
Figure 8: Posteriors shown (\(q=1\) case) for the eccentric model recovery of non-spinning injections. The colours indicate quasi-circular (grey) and eccentric (red) injections, whereas the horizontal axis denotes the recovery configuration as eccentric non-spinning (e-NS), eccentric aligned spin (e-AS), quasi-circular non-spinning (NS), and quasi-circular aligned spin (AS). The recovery waveform used here is TaylorF2Ecc and matched filter SNR for these injections is 33.
Figure 7: We show the recovery with TaylorF2Ecc for eccentric and quasi-circular injections in the form of violin plots, for mass ratios \(q=2,3\). The colours distinguish the injection hybrids; red shows eccentric (\(e_{20}\sim 0.1\)) injections while grey shows quasi-circular injections. The horizontal axis denotes the spin configuration used in the recovery of the posteriors. e-NS, e-AS, NS, and AS correspond to eccentric non-spinning, eccentric aligned-spin, quasi-circular non-spinning, and quasi-circular aligned-spin recoveries, respectively. The vertical axis corresponds to chirp mass values \(\mathcal{M}\). The black horizontal line indicates the injection value and the coloured lines inside the shaded posteriors indicate the 90% credible interval. The matched filter SNRs for \(q=2\) and \(q=3\) are 31 and 28 respectively.
Figure 9: Eccentricity posteriors for \(q\)=(1, 2, 3) when the injections are non-spinning and eccentric, and the recovery is with TaylorF2Ecc in eccentric, non-spinning configuration. The coloured lines inside the shaded posteriors indicate 90% credible interval whereas the black dashed lines denote the injected eccentricity values.
biased towards lower masses7. Hence, a system which is eccentric and spinning (aligned) may be recovered with a positive bias in chirp mass when eccentricity is ignored, and a negative bias when the spins are ignored. Again, we provide the eccentricity posteriors in Fig. 12, as well as in the form of corner plots along with the \(\mathcal{M}\), \(q\), and \(\chi_{\text{eff}}\) parameters in figure 15 of Appendix C. Looking at figure 15, there is both a clear correlation between \(e_{20}\) and \(\mathcal{M}\), and a mild correlation between \(\chi_{\text{eff}}\) and \(e_{20}\), consistent with the findings of O'Shea and Kumar [93]. Also, as in Section III.1, we find that the Bayes factors (\(\mathcal{B}_{E/C}\)) for \(q=1.25\), 2, and 3 are not high enough to indicate a clear preference for either the quasi-circular or eccentric waveform model for any injection.
Figure 11: \(\chi_{p}\) posteriors for aligned-spin injections with mass ratios (\(q=1.25,2,3\)). The colours correspond to quasi-circular (grey) and eccentric (red) injections. Recovery is performed using IMRPhenomXP truncated at 110 Hz for PE. The dashed vertical coloured lines of the same colours denote the 90% credible interval of the corresponding injections, and the black dotted curve shows the prior, which is the same for both injections.
Figure 10: Eccentric (\(e_{20}=0.1\)) (red) and quasi-circular (grey) aligned-spin injections generated with the eccentric waveform model TEOBResumS, recovered with TaylorF2Ecc in quasi-circular aligned-spin (AS), eccentric aligned-spin (e-AS), and eccentric non-spinning (e-NS) configurations, and with the quasi-circular waveform model IMRPhenomXP including precessing spin (PS). The aligned-spin eccentric injection is only correctly recovered using the e-AS configuration in the recovery waveform. The matched filter SNRs for \(q=1.25\), \(q=2\) and \(q=3\) are 35, 33 and 30 respectively.
## IV Conclusions
Measurable orbital eccentricity is a key indicator of a BBH's formation channel. However, catalogues of BBH detections, e.g. Abbott _et al._[5], typically neglect this parameter and study all GW candidates using only quasi-circular signal models. Additionally, matched-filter searches for GW signals typically rely on quasi-circular waveform templates. In this work, we explore both the detectability of eccentric signals when eccentricity is neglected in matched-filtering searches, and the biases that result from performing parameter estimation on eccentric GW signals using quasi-circular waveform models under a variety of spin assumptions.
We find that there is a loss in the fitting factor (\(<0.95\)) for eccentricities higher than 0.01 at 10 Hz in conjunction with high values of mass ratio (\(q>3\)). Further, we calculate the signal recovery fraction (SRF) and find that there is a loss in SRF of up to 6% for the region in parameter space with \(e_{10}>0.01\) and mass ratio \(q>3\). While we restrict this calculation to the inspiral section of the signal, we argue that the loss in \(FF\) would be similar for full IMR signals, since eccentricity is efficiently radiated away from an inspiralling system and so the binary should be close to circular before the merger and ringdown. The overall loss in the fraction of recovered signals depends on the fraction of events in the population that have high eccentricity and high mass ratio. These population characteristics, in turn, depend on the formation channels contributing to the population. For example, we would detect a higher fraction of binaries that formed in globular clusters (GCs) than those that formed in active galactic nuclei (AGN), since the eccentricity and mass ratio distributions expected from binaries forming in GCs are less extreme than those expected from AGN [e.g., 164, 165, 166]. Therefore, with a severity depending on the balance between the formation channels contributing to the observed population, missing eccentric binaries in searches can lead to errors in the inferred merger rate and underlying population characteristics.
Even if an eccentric signal is detected via a matched-template search based on quasi-circular waveform templates, the recovery of source parameters can be biased when the signal is analysed using a quasi-circular waveform model. We perform parameter estimation on non-spinning and aligned-spin eccentric injections, and recover them using various spin assumptions with quasi-circular and eccentric waveform models. We find that for \(e_{20}\sim 0.1\), analyses with the quasi-circular waveform models are unable to recover the injected values of chirp mass within the 90% credible interval. Further, we note that for the relatively low-mass BBH considered in this study, no spurious spin detections are made for non-spinning eccentric injections, and no spurious inferences of precession are made for any eccentric injections. This leads us to conclude that **for relatively low mass systems, spin-precession does not mimic eccentricity**. The spin parameter posteriors are similar for both quasi-circular and eccentric injections.
As discussed in Section I, both eccentricity and spin-precession can indicate that a binary formed in a dynamical environment. Our results suggest that a non-spinning low-mass eccentric system, if analyzed using quasi-circular waveform models only, may be mistaken for a binary that formed in isolation since the quasi-circular waveform models do not enable measurements of eccentricity and the spin posteriors show no additional signatures of dynamical formation. This may lead to miscategorization of such systems as uninteresting or "vanilla" binaries. Moreover, if we are routinely biased to higher masses even for a small subset of signals that include the influence of binary eccentricity, the population distribution of mass will gradually be shifted higher. Eventually, this could lead to incorrect inferences about, for example, the location of the pair-instability mass gap and the fraction of the population comprised of hierarchical mergers [using hierarchical inference methods such as, e.g., 167]. While the shifts in the inferred chirp mass for the low-mass and moderate-eccentricity injections studied here are relatively minor, for higher eccentricities and higher masses the bias would likely be worse (see for instance Eq. 1.1 in [38]).
We also observe that for the eccentricity values chosen here (\(e_{20}\sim 0.1\)), even an inspiral-only eccentric waveform is able to recover the injected chirp mass within the 90% confidence interval. Therefore, we conclude that for GW signals from relatively low-mass BBH, inspiral-only eccentric waveform models are adequate for identifying and quantifying orbital eccentricity.
Figure 12: Eccentricity posteriors for \(q\)=(1.25, 2, 3) when the injections are aligned-spin and eccentric, and the recovery is with TaylorF2Ecc in the eccentric, aligned-spin configuration. The coloured lines inside the shaded posteriors indicate the 90% credible interval, whereas the black dashed lines denote the injected eccentricity values.
###### Acknowledgements.
We thank Patricia Schmidt for a critical reading of the manuscript and useful comments. We thank K. G. Arun, B. S. Sathyaprakash, Anuradha Gupta, Alexander H. Nitz, and Rahul Dhurkunde for useful discussions. IMR-S acknowledges support received from the Herchel Smith Postdoctoral Fellowship Fund. Computations were performed on the powehi workstation in the Department of Physics, IIT Madras, ATLAS cluster at AEI Hannover, and CIT cluster provided by the LIGO Laboratory. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. We used the following software packages: LALSuite [168], PyCBC [169], NumPy [170], Matplotlib [171], Seaborn [172], jupyter [173], dynesty [174], corner [175]. This document has LIGO preprint number LIGO-P2300326.
## Appendix A Priors used for parameter estimation
## Appendix B Note on equal-mass case (\(q=1\))
Here we discuss the \(q=1\) case, which is different from the other mass ratio cases due to physical limits on the mass ratio prior (\(q\geq 1\)). The shift in the chirp mass posteriors for different spin configurations, seen in figure 4, can partly be explained by the prior railing of the mass ratio (\(q\)), which leads to a prior railing in the component masses. Since the true value of the injection is exactly \(q=1\), parameters correlated with \(q\) can become biased due to the entire posterior volume existing above the \(q=1\) boundary. In order to confirm this, we carry out an identical baseline injection run where we inject a (\(\ell=2\), \(|m|=2\)) mode, non-spinning, quasi-circular signal into zero noise using IMRPhenomXAS, and recover it in the same three spin configurations (non-spinning, aligned spin, precessing spin) as used for the hybrids. We observe that the trends are identical to the ones observed using the hybrids. Hence we conclude that for the \(q=1\) case, the slight deviation from the usual trend is because of the mass ratio prior skewing the chirp mass posteriors.
## Appendix C Summary of correlations of eccentricity with other parameters
Here we show the bounds on \(e_{20}\) obtained using TaylorF2Ecc to recover injections. In figure 14, we show the posteriors on eccentricity, chirp mass and mass ratio. For both the \(q=2\) and \(q=3\) cases, the 90% bounds on \(e_{20}\) include the injected value. Looking at the 2D plots, we see that eccentricity shows negative and positive correlations with the chirp mass and mass ratio parameters, respectively. Further, when we look at the aligned-spin injections shown in figure 15, there is also a mild correlation with the effective spin parameter \(\chi_{\text{eff}}\). Here again we see that the 90% credible interval for the eccentricity parameter includes the injected value.
## Appendix D Simulated noise injections: non-spinning and eccentric
We perform a set of injection recoveries with Gaussian noise simulated using the power spectral densities (PSDs) of the detectors. The total mass of the injected BBH systems is \(40M_{\odot}\) and the mass ratios are \(q=1,2,3\), with the slightly heavier mass chosen to increase the computational efficiency of the analysis. These are recovered using the quasi-circular waveform IMRPhenomXAS with zero spins. The results can be seen in figure 17. For each case, we have taken 10 noise realizations, which each correspond to the posteriors shown by thin curves in the plot. An equal number of samples were taken from each of these runs and combined to form the average posterior shown by the thick coloured curve. We also perform a zero-noise injection for each mass ratio case, which is shown by the dot-dashed curve in the plot. The vertical coloured lines denote the 90% credible interval and the black line shows the injected value. As can be seen, the average posterior of all the noisy injections agrees well with the zero-noise curve for each case. Hence, for the analyses in the main text, we have only performed zero-noise injections.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameter** & **Prior** & **Range** \\ \hline
\(\mathcal{M}\) & Uniform in component masses & 5 - 50 \(M_{\odot}\) \\ \hline
\(q\) & Uniform in component masses & 1 - 5 \\ \hline
\(d_{L}\) & Uniform radius & 100 - 3000 Mpc \\ \hline
\(\iota\) & Uniform sine & 0 - \(\pi\) \\ \hline
\(t_{c}\) & Uniform & \(t_{\rm gps}+(-0.1\) - \(0.1)\) s \\ \hline
\(\phi_{c}\) & Uniform & 0 - \(2\pi\) \\ \hline
\(\chi_{iz}\), \(a_{i}\) & Uniform & 0 - 0.9 \\ \hline
\((S_{i}^{\Theta},S_{i}^{\Phi})\) & Uniform solid angle & \(\Theta\in(0,\pi)\), \(\Phi\in(0,2\pi)\) \\ \hline
\((\alpha,\delta)\) & Uniform sky & \(\delta\in(-\pi/2,\pi/2)\), \(\alpha\in(0,2\pi)\) \\ \hline
\(e\) & Uniform & 0 - 0.4 \\ \hline
\end{tabular}
\end{table}
Table 3: Priors for parameters in various quasi-circular and eccentric recoveries.
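The averaging over noise realisations used for figure 17 amounts to pooling an equal number of samples from each run; a minimal sketch of this pooling (the interface and the toy numbers are ours):

```
import numpy as np

def combine_posteriors(sample_sets, seed=0):
    # Pool an equal number of samples from each noise realisation so that
    # every run contributes the same weight to the averaged posterior.
    rng = np.random.default_rng(seed)
    n = min(len(s) for s in sample_sets)
    return np.concatenate([rng.choice(s, size=n, replace=False)
                           for s in sample_sets])

# Toy usage: ten noise realisations of a chirp-mass posterior.
rng = np.random.default_rng(2)
runs = [rng.normal(15.0 + rng.normal(0.0, 0.05), 0.2, rng.integers(4000, 6000))
        for _ in range(10)]
print(np.quantile(combine_posteriors(runs), [0.05, 0.5, 0.95]))
```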
Figure 14: The corner plot, for \(q=2\) (left) and \(q=3\) (right), of chirp mass (\(\mathcal{M}\)), mass ratio (\(q\)), and eccentricity (\(e_{20}\)), for the non-spinning recovery from figure 7 performed using TaylorF2Ecc. The histograms on the diagonal of the plot are 1D marginalized posteriors for the respective parameters with vertical dashed lines denoting 90% credible intervals. The dotted curves show the priors used.
Figure 13: Same as figure 6, but for the cases of mass ratio \(q=1\) (left) and \(q=3\) (right).
Figure 15: The corner plot, for mass ratios \(q=1.25\) (top-left), \(q=2\) (top-right) and \(q=3\) (bottom), of chirp mass (\(\mathcal{M}\)), mass ratio (\(q\)), effective spin parameter (\(\chi_{\rm eff}\)), and eccentricity (\(e\)), for the aligned-spin recovery from figure 10 performed using TaylorF2Ecc. The histograms on the diagonal of the plot are 1D marginalized posteriors for the respective parameters with vertical dashed lines denoting 90% credible intervals. The dotted curves show the priors used.
Figure 16: Same as figure 6, but for aligned-spin injections with mass ratios (from left to right) \(q=(1.25,2,3)\).
Figure 17: The chirp mass posteriors, for injections with mass ratios \(q=1,2,3\), compared between eccentric (red) and quasi-circular (grey) non-spinning injections, all recovered using IMRPhenomXAS with zero spins. The faint lines depict injections in different noise realisations, and the dark solid curve is the combined posterior. We have also plotted the zero-noise injection as a dashed curve. The coloured vertical lines represent the 90% credible interval of the combined posterior of the same colour as the line, and the black line denotes the injected value. |
2309.09718 | Learning Covariances for Estimation with Constrained Bilevel
Optimization | We consider the problem of learning error covariance matrices for robotic
state estimation. The convergence of a state estimator to the correct belief
over the robot state is dependent on the proper tuning of noise models. During
inference, these models are used to weigh different blocks of the Jacobian and
error vector resulting from linearization and hence, additionally affect the
stability and convergence of the non-linear system. We propose a gradient-based
method to estimate well-conditioned covariance matrices by formulating the
learning process as a constrained bilevel optimization problem over factor
graphs. We evaluate our method against baselines across a range of simulated
and real-world tasks and demonstrate that our technique converges to model
estimates that lead to better solutions as evidenced by the improved tracking
accuracy on unseen test trajectories. | Mohamad Qadri, Zachary Manchester, Michael Kaess | 2023-09-18T12:32:11Z | http://arxiv.org/abs/2309.09718v1 | # Learning Covariances for Estimation with Constrained Bilevel Optimization
###### Abstract
We consider the problem of learning error covariance matrices for robotic state estimation. The convergence of a state estimator to the correct belief over the robot state is dependent on the proper tuning of noise models. During inference, these models are used to weigh different blocks of the Jacobian and error vector resulting from linearization and hence, additionally affect the stability and convergence of the non-linear system. We propose a gradient-based method to estimate well-conditioned covariance matrices by formulating the learning process as a constrained bilevel optimization problem over factor graphs. We evaluate our method against baselines across a range of simulated and real-world tasks and demonstrate that our technique converges to model estimates that lead to better solutions as evidenced by the improved tracking accuracy on unseen test trajectories.
## I Introduction
Robot state estimation is the problem of inferring the state of a robot (a set of geometric or physical quantities such as position, orientation, contact forces, etc.) given sensor measurements. The problem is typically formulated as Maximum a Posteriori (MAP) inference over factor graphs where each node (robot state \(\mathbf{x}_{i}\)) is connected to other states via factors (potentials) \(\phi_{i}\) which are distilled from sensor measurements:
\[\mathbf{x}^{\text{MAP}}=\operatorname*{argmax}_{\mathbf{x}}\prod_{i}^{N}\phi_ {i}(\mathbf{x}_{i};\theta_{i},z_{i}) \tag{1}\]
The factors typically assume the form:
\[\phi_{i}\propto\exp\left(-\frac{1}{2}||g_{i}(\mathbf{x}_{i})-z_{i}||_{\theta_{i}}^{2}\right) \tag{2}\]
which leads, after taking the negative log, to the equivalent non-linear least squares objective:
\[\hat{\mathbf{x}}=\operatorname*{argmin}_{\mathbf{x}}\sum_{i=1}^{N}\frac{1}{2}||g_{i}(\mathbf{x}_{i})-z_{i}||_{\theta_{i}}^{2} \tag{3}\]
where \(\mathbf{x}\) are the state variables, \(\mathbf{x}_{i}\) a subset of \(\mathbf{x}\), \(\mathbf{z}=\{z_{i}\}\) the sensor measurements, and \(g\) the prediction function which maps states onto the sensor's measurement manifold. _Noise Models_\(\{\theta_{i}\}=\boldsymbol{\theta}\) affect the loss landscape (as seen in objective 3) and, as typical in data assimilation procedures [1], correspond to error covariance matrices. These parameters dictate the weight assigned to each measurement which, given an optimal parameter set \(\boldsymbol{\theta}^{*}\), should ideally correlate with the uncertainty of each sensor. Hence, \(\{\theta_{i}\}\) when inaccurately defined will lead to suboptimal solutions. On the other hand, and specifically because each \(\theta_{i}\) is used to scale different error and Jacobian terms after relinearization, the condition number of each \(\theta_{i}\) is correlated with the overall numerical conditioning and stability of problem 3. Traditionally the set \(\boldsymbol{\theta}\) is manually tuned per application. Nevertheless, alternative approaches exist for estimating \(\boldsymbol{\theta}\) from data. Zero-order optimization techniques such as [2, 3, 4, 5] can be leveraged but can also quickly become sample inefficient. Other methods attempt to minimize the final tracking error loss by performing gradient-based parameter updates. These techniques generally either 1) rely on unrolling the optimizer which is sensitive to various hyperparameters [6] or 2) assume that the selected graph optimization algorithm is differentiable [7]. Note however that this assumption does not hold for state-of-the-art optimizers such as iSAM2 [8] due to dependence on relinearization thresholds and non-differentiable operations such as removal/insertion of tree nodes. In addition, these methods do not consider the conditioning of the learned parameters. Hence in this work, we make three key contributions:
* We formulate the problem as a bilevel optimization problem over factor graphs and use numerical differentiation to efficiently estimate the required gradients.
* We propose a technique for estimating well-conditioned positive definite matrices by incorporating hard condition number constraints into the learning procedure.
* We evaluate our approach on different synthetic navigation and real-world planar pushing examples in incremental estimation settings.
Fig. 1: At each iteration, the parameters \(\boldsymbol{\theta}^{i}\) serve as inputs to the least squares solver. The inner loop optimization outputs a trajectory estimate \(\hat{\mathbf{x}}\) which depends on \(\boldsymbol{\theta}^{i}\). The Jacobian of \(\hat{\mathbf{x}}\) with respect to \(\boldsymbol{\theta}^{i}\) is computed via numerical differentiation and used to compute the gradient of the loss with respect to the parameters. Finally, this gradient is used to update the parameters \(\boldsymbol{\theta}\).
## II Background and Related Work
### _Filtering and Smoothing for State Estimation_
Early state estimation techniques such as the family of Kalman Filters [9, 10] rely on the Markov assumption to enable real-time performance. Recent algorithms have been proposed to make these filters differentiable [11, 12, 13, 14, 15]. However, the inherent reliance on the Markov assumption and the inability to re-linearize past states can lead to convergence to poor solutions. Hence, state-of-the-art robotic state estimation algorithms transitioned to factor-graph smoothing-based methods, which encode the inherent temporal structure and avoid the need to marginalize past states by providing efficient methods to relinearize past estimates [16, 17, 8]. Under Gaussian assumptions, the smoothing problem is equivalent to a non-linear least squares objective weighted by error covariance matrices, which often necessitate manual tuning tailored to each application [18, 19].
### _Learned Components in Factor-Graph Inference_
Recent works incorporate learned components into factor-graph-based inference models. [20] learns depth codes which are subsequently used to compute various factors for dense monocular SLAM. [21] learns observation models which predict relative sensor poses that are then used in a factor-graph formulation to predict the pose of manipulated objects. Similarly, [22] learns a model to predict relative robot poses from non-sequential ground penetrating radar image pairs, which are then used in a factor graph for GPS-denied localization. However, these methods use surrogate losses for learning independently of the graph optimizer or final tracking error and generally require separate tuning of noise models. While traditionally these models have been manually tuned, novel strategies have emerged to learn them directly from data. [23, 6] proposed differentiating through the \(\operatorname*{argmin}\) operator in eq. 3 by unrolling the optimizer. However, these techniques are typically sensitive to hyperparameters such as the number of unrolling steps [24] and, in addition, can suffer from vanishing gradients as well as gradients with high bias and variance. A few methods use variational inference techniques to refine noise models when no groundtruth trajectories are available [25, 26] or use groundtruth trajectories to learn time-correlated measurement noise models [27]. These methods target batch state estimation problems and do not consider the conditioning of the estimated matrices. Recently, a novel method _LEO_[28] capitalizes on the probabilistic view offered by iSAM2 (as a solver of eq. 1) to provide a way to learn observation models by minimizing a novel tracking error. In essence, at every training iteration, LEO samples trajectories from the posterior distribution (a joint Gaussian distribution over the states) and the deviation with respect to the groundtruth trajectory is minimized using an energy-based loss. However, LEO exhibits slow convergence due to its dependence on sampling from high-dimensional probability distributions and is prone to convergence to poor local minima.
### _Covariance Estimation in Mathematical Statistics_
The estimation of well-conditioned and stable covariances has garnered considerable attention within the mathematical statistics community as their use spans different statistical methods and practical applications ranging from numerical weather prediction to financial portfolio optimization. [29] proposes incorporating a prior involving the nuclear norms of the covariance and its inverse in the estimation process to bound the eigenvalues. [30] performs maximum likelihood estimation of covariances subject to hard condition number constraints. [31] proposes new theoretical perspectives on reconditioning covariances using ridge regression or the minimum eigenvalue method. In this work, we propose to estimate error covariance matrices while imposing condition number constraints for the task of incremental robot state estimation.
### _Conditioning of Non-linear Least-Squares Problems_
Iterative methods solve eq. 3 by relying on a sequence of linearized subproblems. Each subproblem involves solving the linear system \(\mathbf{A}\Delta=\mathbf{b}\), where the stability of the solution is influenced by the condition number of \(\mathbf{A}\): \(\kappa(\mathbf{A})\). In fact, it has additionally been shown that the convergence rate of specific solvers of the normal equation \(\mathbf{A}^{T}\mathbf{A}\Delta=\mathbf{A}^{T}\mathbf{b}\), such as conjugate gradient (CG), is upper bounded by \(\sqrt{\kappa(\mathbf{A}^{T}\mathbf{A})}\)[32]. When \(\sqrt{\kappa(\mathbf{A}^{T}\mathbf{A})}\) is high and without proper preconditioning, the performance of CG methods can be especially poor. In section III-C1, we show how estimating well-conditioned error covariance matrices is correlated with the stability of the linearized system.
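A small numerical illustration of this point is sketched below: per-residual weights stand in for the inverse noise models that scale the rows of the Jacobian, and a badly scaled weight inflates \(\kappa(\mathbf{A}^{T}\mathbf{A})\) and hence the CG convergence bound (the synthetic data and sizes are assumed):

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))       # stacked Jacobian blocks
for scale in (1.0, 1e6):                 # 1e6 mimics a badly scaled noise model
    w = np.r_[np.ones(190), scale * np.ones(10)]   # per-residual weights
    M = A.T @ (w[:, None] * A)                     # normal-equation matrix
    kappa = np.linalg.cond(M)
    print(f"cond = {kappa:.2e}, CG bound ~ sqrt(cond) = {np.sqrt(kappa):.1f}")
```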
## III Method
Our goal is to learn the parameters \(\{\theta_{i}\}\) using a gradient-based method from groundtruth robot trajectories \(\mathbf{x}_{\text{GT}}\). In this work, we view any unconstrained non-linear least squares solver (e.g. Levenberg-Marquardt (LM) or iSAM2) as a function \(f(\boldsymbol{\theta}):\mathcal{S}_{++}^{n_{1}}\times\ldots\times\mathcal{S}_{++}^{n_{p}}\rightarrow\mathcal{X}\) with \(\boldsymbol{\theta}=\{\theta_{i}\mid\theta_{i}\in\mathcal{S}_{++}^{n_{i}}\}\) which, given an initial estimate \(\mathbf{x}^{0}\in\mathcal{X}:\mathcal{M}_{1}\times\ldots\times\mathcal{M}_{n}\), returns an estimate of the optimal state \(\hat{\mathbf{x}}\in\mathcal{X}\) after performing \(N\) update steps. Here, \(\mathcal{M}_{i}\) is a Lie Group (e.g. the special Euclidean group \(SE(n)\)) and \(\mathcal{S}_{++}^{n_{i}}\) is the set of \(n_{i}\times n_{i}\) positive definite matrices. Consider the following inner-outer optimization procedure (also illustrated in Fig. 1):
\[\text{Inner Loop: } f(\boldsymbol{\theta})=\operatorname*{argmin}_{\mathbf{x}}H( \mathbf{x},\boldsymbol{\theta};\mathbf{z})=\hat{\mathbf{x}}(\boldsymbol{ \theta}) \tag{4}\] \[=\operatorname*{argmin}_{\mathbf{x}}\sum_{i}\frac{1}{2}||g_{i}( \mathbf{x}_{i})-z_{i}||_{\theta_{i}}^{2}\] \[\text{Outer Loop: }\min_{\boldsymbol{\theta}}\mathcal{L}(f( \boldsymbol{\theta}),\mathbf{x}_{\text{GT}}) \tag{5}\]
where \(\mathcal{L}\) is a differentiable loss function capturing the deviation of the estimate \(f(\boldsymbol{\theta})\) from the GT. At every iteration, eq. 5 outputs a set \(\boldsymbol{\theta}^{t}\); in other words, it selects an updated loss landscape for the inner loop optimization such that solving problem 4 leads to a minimum/solution that is closer to the groundtruth trajectory \(\mathbf{x}_{\text{GT}}\). Let \(h(\mathbf{x},\boldsymbol{\theta}):=\frac{\partial H}{\partial\mathbf{x}}\). The graph of \(f\) consists of all points satisfying the first-order optimality conditions of problem 4: \(\text{gph}(f)=\{(\boldsymbol{\theta},f(\boldsymbol{\theta}))\mid f(\boldsymbol{\theta})=H(\hat{\mathbf{x}},\boldsymbol{\theta})\text{ and }h(\hat{\mathbf{x}},\boldsymbol{\theta})=0\}\). By the chain rule, the gradient \(\frac{\partial\mathcal{L}}{\partial\boldsymbol{\theta}}\) requires an estimate of \(\frac{\partial f}{\partial\boldsymbol{\theta}}\) (i.e. the Jacobian of the solution with respect to the parameter vector) and by the
implicit function theorem [33], this Jacobian (i.e. \(\frac{\partial f}{\partial\mathbf{\theta}}\)) exists and can be computed as done in existing work in convex optimization [34, 35].
**Theorem 1**: **The Implicit Function Theorem:**
_Let \(\hat{\mathbf{x}}(\mathbf{\theta}):=\{\mathbf{x}\mid h(\mathbf{x},\mathbf{\theta})=0\}\) where \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{\theta}=\{\theta_{i}\mid\theta_{i}\in\mathcal{S}^{n}_{++}\}\). Let \(h\) be continuously differentiable in the neighborhood of \((\hat{\mathbf{x}},\mathbf{\theta})\) namely \(\frac{\partial h(\hat{\mathbf{x}}(\mathbf{\theta}),\mathbf{\theta})}{\partial\mathbf{x}}\) be nonsingular. Then:_
\[\frac{\partial f}{\partial\mathbf{\theta}}\!=\!\frac{\partial\hat{ \mathbf{x}}(\mathbf{\theta})}{\partial\mathbf{\theta}}\!=\!-\left(\frac{\partial h( \hat{\mathbf{x}}(\mathbf{\theta}),\mathbf{\theta})}{\partial\mathbf{x}}\right)^{-1} \!\frac{\partial h(\hat{\mathbf{x}}(\mathbf{\theta}),\mathbf{\theta})}{\partial\mathbf{ \theta}} \tag{6}\]
Note that the original theorem considers functions operating on vector spaces. However, the theorem can readily be extended to other manifolds by applying the appropriate group operations [36]. The partial derivatives in eq. 6 can be derived and computed analytically. However, since the size of the parameter vector \(\mathbf{\theta}\) is typically small (first, because each \(\theta_{i}\) is associated with one physical sensor and second, since each \(\theta_{i}\) has a small number of free parameters e.g. a maximum of \(6\) for elements residing in \(SE(2)\)), numerical differentiation proved to be efficient especially when coupled with parallelization on CPU.
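To make this concrete, the toy sketch below compares the finite-difference Jacobian of a solver's output with the implicit-function-theorem expression of eq. 6 on a scalar inner problem; the closed-form minimizer stands in for LM or iSAM2:

```
import numpy as np

# Toy inner problem: x_hat(theta) = argmin_x sum_i 0.5 * theta_i * (x - z_i)^2,
# a 1-D analogue of eq. 4 with theta_i acting as information weights.
z = np.array([1.0, 3.0])

def solver(theta):
    # Closed-form minimizer; in practice this is an iterative NLLS solver.
    return np.sum(theta * z) / np.sum(theta)

def numerical_jacobian(f, theta, eps=1e-7):
    # One extra solve per parameter; trivially parallelizable across CPU cores.
    base = f(theta)
    return np.array([(f(theta + eps * e) - base) / eps for e in np.eye(theta.size)])

theta = np.array([2.0, 0.5])
x_hat = solver(theta)

# Implicit function theorem (eq. 6): h = dH/dx = sum_i theta_i * (x - z_i)
dh_dx = np.sum(theta)        # second derivative of the inner objective
dh_dtheta = x_hat - z        # one entry per theta_i
print(numerical_jacobian(solver, theta))   # finite differences
print(-dh_dtheta / dh_dx)                  # eq. 6; agrees to finite-diff error
```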
### _Numerical Jacobians over Lie Groups_
The left Jacobian of functions acting on manifolds \(f:\mathcal{N}\rightarrow\mathcal{M}\) is defined as the linear map from the Lie algebra \(T_{\mathcal{E}}(\mathcal{N})\) of \(\mathcal{N}\) to \(T_{\mathcal{E}}(\mathcal{M})\), the Lie algebra of \(\mathcal{M}\):
\[\frac{{}^{\mathcal{E}}Df(\mathcal{Y})}{D\mathcal{Y}} =\lim_{\tau\to 0}\frac{f(\tau\oplus\mathcal{Y})\ominus f (\mathcal{Y})}{\tau} \tag{7}\] \[=\lim_{\tau\to 0}\frac{\text{Log}(f(\text{Exp}(\tau) \circ\mathcal{Y})\circ f(\mathcal{Y})^{-1})}{\tau} \tag{8}\]
where \(\mathcal{Y}\in\mathcal{N}\), \(\tau\) is a small increment defined on \(T_{\mathcal{E}}(\mathcal{N})\). The Log operator maps elements from a Lie Group to its algebra while the Exp operator maps elements from the algebra to the group. \(\oplus\), \(\ominus\), and \(\circ\) are the plus, minus, and composition operators respectively [37] where:
\[\tau\oplus\mathcal{Y}=\text{Exp}(\tau)\circ\mathcal{Y} \tag{9}\] \[\tau=\mathcal{Y}_{1}\ominus\mathcal{Y}_{2}=\text{Log}(\mathcal{Y }_{1}\circ\mathcal{Y}_{2}^{-1});\ \mathcal{Y}_{1},\mathcal{Y}_{2}\in\mathcal{N} \tag{10}\]
In this work, \(\mathcal{N}=\mathcal{S}^{n_{1}}_{++}\times\ldots\times\mathcal{S}^{n_{p}}_{++}\) and \(\mathcal{M}=\mathcal{X}\). Additionally, we assume that each vector \(\theta_{i}\) corresponds to the non-zero elements of some corresponding diagonal positive definite matrix \(\Sigma_{i}\in\mathcal{S}^{n_{i}}_{++}\). i.e., we define the following map:
\[\theta_{i}=\text{diag}^{-1}(\Sigma_{i})\in\mathbb{R}^{n_{i}} \tag{11}\]
where the operator \(\text{diag}^{-1}\) constructs a vector from the diagonal entries of a matrix. Hence, \(\tau\in\mathbb{R}^{n_{i}}\) and the operator \(\oplus\) in eq. 7 is the standard addition on the vector space \(\mathbb{R}^{n_{i}}\).
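As a minimal sketch of eq. 7, the following numpy snippet computes a numerical left Jacobian for the simple case \(\mathcal{N}=SO(2)\) acting on a point in \(\mathbb{R}^{2}\) (so \(\ominus\) on the output reduces to vector subtraction); the example and its names are ours, under the stated assumptions:

```python
import numpy as np

def exp_so2(tau):
    # Exponential map of so(2): scalar angle -> 2x2 rotation matrix.
    c, s = np.cos(tau), np.sin(tau)
    return np.array([[c, -s], [s, c]])

def num_left_jacobian(f, Y, eps=1e-6):
    # Left Jacobian via eq. 7: perturb in the Lie algebra, compose with the
    # group element, and finite-difference the output.
    return (f(exp_so2(eps) @ Y) - f(Y)) / eps

p = np.array([1.0, 2.0])
Y = exp_so2(0.3)
J_num = num_left_jacobian(lambda R: R @ p, Y)

# Analytic check: d/dtau [Exp(tau) Y p] at tau = 0 equals G @ (Y @ p),
# where G = [[0, -1], [1, 0]] is the so(2) generator.
G = np.array([[0.0, -1.0], [1.0, 0.0]])
print(J_num, G @ (Y @ p))  # the two agree to ~1e-6
```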
### _Constrained Tracking Loss_
Consider a parameter estimate \(\mathbf{\theta}\in\mathbb{R}^{m}\) with \(m=\sum_{i}^{p}n_{i}\). Let \(\mathcal{D}\) be the training set, \(T\) be the total number of states in groundtruth trajectory \(\mathbf{x}_{\text{GT}}\), and \(D\) be the sum of the Lie algebra dimensions of all states. Then the outer loss is the constrained mean squared error between the estimated trajectory and \(\mathbf{x}_{\text{GT}}\):
\[\mathcal{L}(\mathbf{\theta}) =\!\frac{1}{2|\mathcal{D}|}\sum_{j=1}^{|\mathcal{D}|}||\mathbf{vec }(f(\mathbf{\theta})\ominus\mathbf{x}_{\text{GT}})||_{2}^{2}\] (12) subject to: \[\mathbf{\lambda}_{i}^{\text{min}}\leq\theta_{i}\leq\mathbf{\lambda}_{i}^{\text{ max}}\quad\forall\,\theta_{i}\,\in\,\mathbf{\theta}\]
where \(\mathbf{\lambda}_{i}^{\text{min}}>0\) and \(\mathbf{\lambda}_{i}^{\text{max}}>\mathbf{\lambda}_{i}^{\text{min}}\) are vectors of minimum and maximum eigenvalues, defined per coordinate and per \(\theta_{i}\), which are enforced to better condition the estimated diagonal matrices as well as ensure their positive definiteness. In other words, these constraints allow us to upper-bound the condition number \(\kappa(\theta_{i})=\frac{\mathbf{\lambda}_{i}^{\text{max}}}{\mathbf{\lambda}_{i}^{ \text{min}}}\) of the estimated matrices. This, in turn, contributes to better conditioning of the overall linearized system during online inference (see III-C1). Hence, since the function \(\mathcal{L}(\mathbf{\theta})\) is non-convex, the constraints help in steering the optimization towards more desirable minima. This constrained objective is solved by performing iterative Frank-Wolfe update steps. Algorithm 1 outlines a single training iteration. Note that the non-linear least squares (NLLS) in line 3 can be solved by any NLLS optimizer, i.e. our method is agnostic to the choice of optimizer, whether it is differentiable (e.g., Levenberg-Marquardt) or non-differentiable (e.g., iSAM2).
```
1: Input: Factor Graph \(\mathcal{F}\), initialization \(\mathbf{x}^{0}\)
2: while itr < max_iter
3:   \(f(\mathbf{\theta}^{t})=\) Solve[NLLS(\(\mathcal{F}\), \(\mathbf{x}^{0}\))]
4:   Estimate \(\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}}\) using Eq. 15
5:   \(\mathbf{\theta}^{t+1}\) = Frank-Wolfe-Step(\(\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}},\mathbf{\theta}^{t}\))
6:   itr = itr + 1
```
**Algorithm 1** Training Loop
**Algorithm 2** Frank-Wolfe-Step
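A minimal Python sketch of a Frank-Wolfe update step consistent with the surrounding text (a linear program with \(m\) decision variables and \(2m\) box constraints, damped by the tuning parameter \(M\) discussed in Section V); the step-size schedule here is our assumption, not the paper's exact rule:

```python
import numpy as np

def frank_wolfe_step(grad, theta, lam_min, lam_max, t, M=1.0):
    # Linear program over the box {lam_min <= s <= lam_max} (m variables,
    # 2m constraints): minimize <grad, s>. For a box the LP solution is
    # separable, one coordinate at a time.
    s = np.where(grad < 0.0, lam_max, lam_min)
    # Damped Frank-Wolfe step. The standard 2/(t+2) schedule divided by the
    # tuning parameter M is our assumption; the exact schedule used in the
    # paper is not recoverable here.
    gamma = (2.0 / (t + 2.0)) / M
    return theta + gamma * (s - theta)
```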
The linear program in Alg. 2 line 3 has negligible computational burden (\(m\) decision variables and \(2m\) constraints) and can be solved efficiently using methods such as Interior Point or Dual-Simplex. To estimate the gradient in Alg. 1 line 4, we propose to perform numerical differentiation by taking advantage of the small parameter space. Taking the gradient of eq. 12, we get:
\[\frac{\partial\mathcal{L}}{\partial\mathbf{\theta}}=\frac{1}{|\mathcal{D}|}\sum_{j=1 }^{|\mathcal{D}|}S(f(\mathbf{\theta}))^{T}\cdot\underbrace{\mathbf{vec}(f(\mathbf{ \theta})\ominus\mathbf{x}_{\text{GT}})}_{\in\mathbb{R}^{TD}} \tag{13}\]
where \(\mathbf{vec}\) is the vectorization operator and \(S(f(\mathbf{\theta}))\in\mathbb{R}^{TD\times m}\) is a matrix such that each row \(r\) is equal to:
\[S(f(\mathbf{\theta}))_{r} =\mathbf{vec}\left(\frac{\partial f(\mathbf{\theta})}{\partial\theta_{ ij}}\right) \tag{14}\] \[=\lim_{\tau_{ij}\to 0}\frac{\mathbf{vec}(\text{Log}(f(\hat{\mathbf{ \theta}})\circ f^{-1}(\mathbf{\theta})))}{\tau_{ij}} \tag{15}\]

where \(\hat{\theta}_{ij}=\theta_{ij}+\tau_{ij}\) and \(\hat{\mathbf{\theta}}\) agrees with \(\mathbf{\theta}\) in all other entries. By the implicit function theorem, \(\frac{\partial f(\mathbf{\theta})}{\partial\theta_{ij}}\) exists and is estimated using finite differencing by perturbing the parameter \(\theta_{ij}\) by \(\tau_{ij}\).
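For a single training trajectory, eqs. 13–15 amount to one inner solve per perturbed parameter; a minimal sketch, where `solve` and `log_residual` are placeholder hooks (our naming) for the NLLS optimizer and the stacked Lie-algebra residual \(\mathbf{vec}(\mathbf{x}\ominus\mathbf{x}_{\text{GT}})\):

```python
import numpy as np

def tracking_gradient(solve, log_residual, theta, tau=1e-5):
    # solve(theta): runs the inner NLLS and returns the optimized trajectory.
    # log_residual(x): vec(x (-) x_GT), the stacked Lie-algebra error in R^{TD}.
    r = log_residual(solve(theta))        # vec(f(theta) (-) x_GT)
    S = np.zeros((r.size, theta.size))    # S(f(theta)) in R^{TD x m}, eq. 14
    for j in range(theta.size):           # independent -> parallelizable
        theta_pert = theta.copy()
        theta_pert[j] += tau              # perturb one coordinate (eq. 15)
        S[:, j] = (log_residual(solve(theta_pert)) - r) / tau
    return S.T @ r                        # eq. 13, single-trajectory case
```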
### _Remarks_
#### III-C1 Condition Number Constraints
The noise models \(\mathbf{\theta}\) are used to weight the contribution of the error terms during inference, and their eigenvalue spread is correlated with the numerical condition of the linear system resulting from the linearization of eq. 3. Indeed, linearizing eq. 3 at iterate \(\mathbf{x}^{t}\), we obtain:
\[\Delta^{*}=\operatorname*{argmin}_{\Delta}\sum_{i=1}^{N}\frac{1}{2}||g_{i}( \mathbf{x}_{i}^{t})+\frac{\partial g_{i}}{\partial\mathbf{x}_{i}}\Delta_{i}- z_{i}||_{\theta_{i}}^{2} \tag{16}\]
which as shown in [38] can be re-written as:
\[\Delta^{*}\!\!=\!\operatorname*{argmin}_{\Delta}\!\!\sum_{i=1}^{N}\!\frac{1}{2 }||\theta_{i}^{-\frac{1}{2}}\frac{\partial g_{i}}{\partial\mathbf{x}_{i}} \Delta_{i}+\theta_{i}^{-\frac{1}{2}}(g_{i}(\mathbf{x}_{i}^{t})-z_{i})||_{2}^{2} \tag{17}\]
Collecting all Jacobians and prediction errors, the system can be rewritten as:
\[\Delta^{*}=\operatorname*{argmin}_{\Delta}\frac{1}{2}||\mathbf{A}\Delta- \mathbf{b}||_{2}^{2} \tag{18}\]
Since both the Jacobian \(\mathbf{A}\) and the error vector \(\mathbf{b}\) are composed of elements that are multiplied by some \(\theta_{i}^{-\frac{1}{2}}\), the eigenvalue spread \(\bar{s}\), which we define as the maximum eigenvalue divided by the minimum eigenvalue over all \(\theta_{i}\):
\[\bar{s}=\frac{\max\{eig(\theta_{i})\text{ for }\theta_{i}\in\mathbf{\theta}\}}{\min\{ eig(\theta_{i})\text{ for }\theta_{i}\in\mathbf{\theta}\}} \tag{19}\]
is correlated with the conditioning of matrix \(\mathbf{A}\) and the numerical stability of the system in eq. 18 (see Footnote 1). Our method makes it possible to constrain \(\bar{s}\) by incorporating hard constraints into the learning process.
Footnote 1: With factorization-based methods such as QR, poor conditioning can lead to the loss of significant digits or even yield incorrect solutions.
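Since each \(\theta_{i}\) stores the diagonal of a diagonal positive definite matrix, the spread in eq. 19 reduces to a ratio of entries; a minimal sketch:

```python
import numpy as np

def eigenvalue_spread(thetas):
    # s_bar (eq. 19): max eigenvalue over all theta_i divided by the min.
    # Each theta_i holds the diagonal of a diagonal PD matrix, so its
    # eigenvalues are just its entries.
    all_eigs = np.concatenate([np.asarray(t).ravel() for t in thetas])
    return all_eigs.max() / all_eigs.min()

thetas = [np.array([0.5, 2.0]), np.array([10.0, 3.0, 0.8])]
print(eigenvalue_spread(thetas))  # 10.0 / 0.5 = 20.0
```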
#### III-C2 Inner Loop Initialization during Training
We borrow from the imitation learning literature and initialize the inner loop optimizer (\(\mathbf{x}^{0}\) in alg. 1 line 3) with the GT trajectory _during training_. These trajectories form our training set and hence, are known a priori. Such initialization has been shown to improve convergence speed and stability [39]. Note that this is different from baselines such as LEO which, during training, solves the inference problem in eq. 3 incrementally while initializing \(\mathbf{x}^{t}=\mathbf{x}^{t-1}\) at each new timestep \(t\).
## IV Results
### _Baselines_
We compare our method against three baselines: Nelder-Mead [5], the Modified Powell's method [4], and LEO [28]. _Nelder-Mead_ is a gradient-free simplex-based optimization algorithm that aims at decreasing the value of a given function \(f\) at the vertices of a working simplex by performing a sequence of transformations (i.e. reflection, shrinkage, etc.). _Powell's method_ is similarly gradient-free and minimizes \(f\) by performing a series of one-dimensional line minimizations along some search directions. Both methods minimize eq. 5 where \(\mathcal{L}\) is the L2 loss. We use SciPy's implementation of these algorithms and run them until convergence. _LEO_ minimizes the following outer-loop energy-based loss:
\[\mathcal{L}(\mathbf{\theta})=\frac{1}{|\mathcal{D}|}\sum_{(\mathbf{x}_{\text{gt}}^{j},\mathbf{z}^{j})\in\mathcal{D}}\left[H(\mathbf{x}_{\text{gt}}^{j},\mathbf{\theta})+\log\int\exp\left(-H(\mathbf{x},\mathbf{\theta})\right)d\mathbf{x}\right] \tag{20}\]

i.e., the negative log-likelihood of the groundtruth trajectory under the energy-based model induced by \(H\); the integral (normalization) term is approximated by sampling.
Table I reports the RMSE of the output trajectories compared against GT. Other than \(D_{1}\) and \(D_{4}\) with constraints enabled, for which Nelder-Mead or Powell generates parameters with slightly better rotation accuracy, our technique consistently converges to parameters \(\mathbf{\theta}^{*}\) that lead to better tracking accuracy on all remaining unseen test trajectories.
### _Real-world planar pushing_
We perform experiments on a real-world tactile pushing example where an end-effector pushes an object and the goal is to estimate the pose of both the end-effector and that of the object from sensor data. Groundtruth end-effector and object poses are obtained using a motion capture system. We follow the formulation in [21] where the graph includes relative pose factors \(\phi_{\text{fac}}\) (in which measurements predict the difference in the pose of the end-effector relative to the object between times \(t\) and \(t-n\) with \(n>1\)), quasi-static dynamics factors \(\phi_{\text{dyn}}\), geometric constraints \(\phi_{\text{geo}}\), and end-effector pose priors \(\phi_{\text{prior}}\). Each factor involves a different parameter \(\in\{\theta_{\text{fac}},\theta_{\text{dyn}},\theta_{\text{geo}},\theta_{\text{prior}}\}=\mathbf{\theta}\), which collectively form our optimization set. We perform two sets of experiments where 1) \(\phi_{\text{fac}}\) is provided by a noisy oracle (i.e. simulating an accurate sensor with confident measurements) and 2) \(\phi_{\text{fac}}\) is computed from a stream of real tactile measurements. Here, we pre-train a fully connected network on a small training set to output the relative pose measurements from tactile input images. These measurements are designed to
[Tables I and II: translation and rotation RMSE of the output trajectories against GT, under loose and tight eigenvalue bounds, comparing the initial parameters, LEO, Nelder-Mead, Powell (including their constrained variants, marked (C)), and our method on the navigation datasets \(D_{1}\)–\(D_{4}\) and on the ellipsoidal (ELLIP) and rectangular (RECT) pushing objects; best values in bold.]
be relatively noisy and inaccurate. Both experiments are performed on 2 objects of different shapes: ellipsoidal and rectangular, each featuring different contact patches. Finally, we initialize all \(\theta_{i}\in\mathbf{\theta}^{0}\) such that 1) they are far from the underlying unknown latents and 2) the spread \(\bar{s}\) is high. We use a 5/10 train/test split. The experimental setup is further illustrated in Fig. 2. Table II shows the RMSE of the output trajectories when compared against GT for the pushing task: LEO can converge to poor local minima which lead at times to worse tracking error on the testing set compared to our initial estimate. Similarly, for Nelder-Mead, the method can fail to decrease the function value. In fact, it has practically been observed to stagnate at non-optimal points [40]. While Powell's method generates reasonable parameter estimates, our technique exploits the structure of the gradient and converges to solutions \(\mathbf{\theta}^{*}\) that lead to better or comparable tracking accuracy on all unseen test trajectories across the different experiments.
Table III shows the eigenvalue spread \(\bar{s}\) of the optimized set \(\mathbf{\theta}^{*}\) estimated by our method under both tight and loose eigenvalue bounds. Note that the generated solution indeed satisfies the hard bound constraints (\(\bar{s}^{\text{max}}<100\)) when specified. Conversely, in the absence of upper-bound constraints, the eigenvalue spread can effectively grow unbounded.
## V Commentary
### _Invariance to the Specified Constraint Bounds_
We note from Tables I and II that the performance of the Nelder-Mead and Powell algorithms, in terms of tracking accuracy and variance of the output, is influenced by the specified bound constraints. In contrast, our algorithm converges to minima that lead to similar tracking performance (albeit with different eigenvalue spread) regardless of the specified bounds. Note that the parameter \(M\) in Alg. 2 requires tuning in order to damp the step size if the bound interval is large. In addition, as is typical in optimization problems, a solution needs to exist in the feasible region.
### _Runtime_
Fig. 4 shows the translation and rotation accuracy on the training set as a function of runtime for the navigation dataset \(D_{3}\). Nelder-Mead and Powell's method, being zero-order methods, exhibit slower convergence rates compared to gradient-based optimizers. LEO does leverage the gradient of the energy-based loss (eq. 20). However, it requires samples from a high-dimensional probability distribution to approximate the integral term at each training iteration, which is a time-consuming process. Our technique generates gradients by directly comparing deviations from the training trajectories, leading to faster convergence. Note, however, that our method's running time is expected to increase proportionally to the dimension of \(\mathbf{\theta}\) since, although parallelizable, the perturbations in eq. 15 need to be performed per parameter and per dimension.
### _Varying the Initialization_
We show in Fig. 5 the training error curves for different initializations of the parameter vector \(\mathbf{\theta}^{0}\) for dataset \(D_{3}\). We observe that across all initializations, our method converges to model estimates that minimize the translation and rotation error (deviation from groundtruth pose) on the training set.
### _Varying the number of training trajectories_
We used 5 training samples across experiments and found that increasing the number of training trajectories does not lead to improved generalization and testing accuracy. As noted in [41], given the relatively small parameter set \(\mathbf{\theta}\) and the fact that both train and test trajectories are sampled from the same distribution, the learning process only requires a few samples.
## VI Conclusion and Future Work
We introduce a gradient-based algorithm to learn error covariance matrices for robotic state estimation. Our technique formulates the problem as a bilevel optimization procedure and generates the required gradients through numerical differentiation. Our method results in parameters that generalize better compared to baselines, with the added benefit of incorporating hard condition number constraints. In future work, we want to extend our algorithm to learn parameters \(\{\theta_{i}\}\) that are themselves functions of observations, i.e., \(\theta_{i}(z_{i},\Theta_{i})\), where \(\Theta_{i}\) can, for example, be the weights of a jointly trained neural network. Indeed, the outputs of the network can be perturbed to approximate \(\frac{\partial\mathbf{\hat{x}}}{\partial\theta_{i}}\) (where \(\mathbf{\hat{x}}\) is the solution returned by the graph optimizer) as proposed in this work and then chained with \(\frac{\partial\theta_{i}}{\partial\Theta}\) (obtainable from existing auto-differentiation packages such as PyTorch [42]) to get the Jacobian of the optimized output trajectory with respect to network weights. Finally, enforcing constraints on the output of a neural network offers interesting related avenues for future research.
Fig. 4: Training translation and rotation error vs runtime for all methods on the navigation \(D_{3}\) dataset.
Fig. 5: Convergence with varying parameter initialization \(\mathbf{\theta}^{0}\)
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline & Ellip Ora & Ellip Net & Rect Ora & Rect Net \\ \hline Ours (Loose bounds) & 610 & 3615 & 460 & 420 \\ Ours (Tight bounds) & 70 & 62 & 67 & 75 \\ \hline \hline \end{tabular}
\end{table} TABLE III: The spread \(\bar{s}\) (to the nearest integer) of the final output values \(\mathbf{\theta}^{*}\) learned by our method. |
2309.12476 | Differentially Private Reward Functions in Policy Synthesis for Markov
Decision Processes | Markov decision processes often seek to maximize a reward function, but
onlookers may infer reward functions by observing the states and actions of
such systems, revealing sensitive information. Therefore, in this paper we
introduce and compare two methods for privatizing reward functions in policy
synthesis for multi-agent Markov decision processes, which generalize Markov
decision processes. Reward functions are privatized using differential privacy,
a statistical framework for protecting sensitive data. The methods we develop
perturb either (1) each agent's individual reward function or (2) the joint
reward function shared by all agents. We show that approach (1) provides better
performance. We then develop a polynomial-time algorithm for the numerical
computation of the performance loss due to privacy on a case-by-case basis.
Next, using approach (1), we develop guidelines for selecting reward function
values to preserve ``goal" and ``avoid" states while still remaining private,
and we quantify the increase in computational complexity needed to compute
policies from privatized rewards. Numerical simulations are performed on three
classes of systems and they reveal a surprising compatibility with privacy:
using reasonably strong privacy ($\epsilon =1.3$) on average induces as little
as a~$5\%$ decrease in total accumulated reward and a $0.016\%$ increase in
computation time. | Alexander Benvenuti, Calvin Hawkins, Brandon Fallin, Bo Chen, Brendan Bialy, Miriam Dennis, Matthew Hale | 2023-09-21T20:48:17Z | http://arxiv.org/abs/2309.12476v3 | # Differentially Private Reward Functions
###### Abstract
Reward functions encode desired behavior in multi-agent Markov decision processes, but onlookers may learn reward functions by observing agents, which can reveal sensitive information. Therefore, in this paper we introduce and compare two methods for privatizing reward functions in policy synthesis for multi-agent Markov decision processes. Reward functions are privatized using differential privacy, a statistical framework for protecting sensitive data. Both methods we develop rely on the Gaussian mechanism, which is a method of randomization we use to perturb (i) each agent's individual reward function or (ii) the joint reward function shared by all agents. We prove that both of these methods are differentially private and compare the abilities of each to provide accurate reward values for policy synthesis. We then develop an algorithm for the numerical computation of the performance loss due to privacy on a case-by-case basis. We also exactly compute the computational complexity of this algorithm in terms of system parameters and show that it is inherently tractable. Numerical simulations are performed on a gridworld example and in waypoint guidance of an autonomous vehicle, and both examples show that privacy induces only negligible performance losses in practice.
## I Introduction
Multi-agent systems typically operate by sharing information among agents. In some cases, the information being shared is sensitive, such as an agent's location, and it requires protections when it is communicated. However, these protections can be difficult to provide in systems in which agents are inherently observable, such as in a traffic system in which agents are visible to other vehicles [1] or a power system in which power usage is visible to a utility company [2]. These systems do not offer the opportunity to modify communications to protect information precisely because agents are observed directly.
A fundamental challenge of such systems is that information about agents is readily visible to other agents and observers, though we would still like to limit the inferences that can be drawn from that information, such as an agent's intentions. To this end, we model individual agents performing sequential decision making tasks as Markov Decision Processes (MDPs), and we model collections of agents as Multi-Agent Markov Decision Processes (MMDPs). In environments modeled as MMDPs, the goal of the agents is to synthesize a reward-maximizing policy. In this work, we aim to develop a method for synthesizing policies that preserve the privacy of the agents' reward function while still approximately maximizing it.
The reward function in an MMDP is used to determine the optimal behavior, or policy, for each agent given the state all agents are in. We consider agents that each have their own reward function, and, in a way we make precise, they combine these rewards into a single joint reward function that they all maximize. This leads to a fully-cooperative MMDP [3]. Since these agents can be observed, the actions they take could reveal their reward function and thus reveal sensitive information, such as the agents' intentions. This can happen, for example, if an observer uses inverse reinforcement learning [4]. As a result, we propose using differential privacy to conceal the agents' reward functions from outside observers of agents' states and actions.
In particular, we use differential privacy to safeguard individual agents' reward functions in MMDPs. Differential privacy is a statistical notion of privacy from the computer science literature originally used to preserve the privacy of databases [5]. Differential privacy has been used recently in control systems and filtering [6, 7, 8], including LQ control [9] and formation control [10].
Differential privacy is appealing because of its strong protections for data and its immunity to post-processing [11]. That is, arbitrary computations on differentially private data are still protected by differential privacy. We propose first privatizing reward functions, then using dynamic programming to synthesize a policy with the privatized reward function. Since this dynamic programming stage is post-processing on privatized reward functions, the resulting policy preserves the privacy of the reward functions as well.
Of course, we expect perturbations to reward functions to affect performance. To assess the impact of privacy on the agents' performance we use the cost of privacy metric introduced in [12]. This metric quantifies the sub-optimality of the policy synthesized with the privatized reward function. In particular, it relates the total accumulated reward with privacy to that without privacy. We develop an algorithm to compute the cost of privacy, and we compute the exact computational complexity of this algorithm in terms of system parameters. These calculations show the tractability of this algorithm, and simulations bear out this tractability.
To summarize, we make the following contributions:
* We develop two differentially private mechanisms for privatizing reward functions in MMDPs.
* We provide an analytical bound on the accuracy of privatized reward functions and use that bound to trade-off privacy and accuracy.
* We provide an algorithm to compute the trade-off between privacy and performance, then quantify its computational complexity.
* We validate the impact of privacy upon performance in two classes of simulations.
### Related Work
Privacy has previously been considered in the context of Markov decision processes, from both a planning and policy synthesis perspective [12, 13, 14] and a reinforcement learning perspective [15, 16]. Privacy has also been considered for Markov chains [17] and for general symbolic systems [18]. The closest work to ours is in [12] and [19]. In [12], the authors consider a similar problem to this work, that is, differentially private policy synthesis. However, they consider the application of privacy to transition probabilities, while we consider applying privacy to reward functions. Additionally, they only consider a single MDP, while in this work we consider a collection of MDPs. In [19], the authors also consider using differential privacy to protect rewards. However, they are concerned with privacy in communications between agents and a teacher, as well as protecting rewards that are learned. This differs from our work as we consider planning problems where the reward is already known, which changes how privacy is implemented.
The rest of this paper is organized as follows: Section II provides background and problem statements, and Section III presents two methods for privatizing reward functions. Then, Section IV bounds the accuracy of privatized rewards and provides accuracy-privacy trade-offs, and Section V presents a method of computing the cost of privacy. Section VI provides simulations, and Section VII concludes.
### Notation
For \(N\in\mathbb{N}\), we use \([N]\) to define the set \(\{1,2,\ldots,N\}\), we use \(\Delta(B)\) to be the set of probability distributions over a finite set \(B\), and we use \(|\cdot|\) for the cardinality of a set. Additionally, we use \(\lceil\cdot\rceil\) as the ceiling function. We use \(\pi\) both as the usual mathematical constant and as a policy for an MDP since both uses are standard. The intended meaning will be clear from context.
## II Preliminaries and Problem Formulation
In this section, we briefly review Markov decision processes and differential privacy, then we provide formal problem statements.
### Markov Decision Processes
Consider a collection of \(N\) agents indexed over \(i\in[N]\). We model agent \(i\) as a Markov decision process.
**Definition 1** (Markov Decision Process).: Agent \(i\)'s Markov Decision Process (MDP) is the tuple \(\mathcal{M}^{i}=(\mathcal{S}^{i},\mathcal{A}^{i},r^{i},\mathcal{T}^{i})\), where \(\mathcal{S}^{i}\) and \(\mathcal{A}^{i}\) are agent \(i\)'s finite sets of local states and actions, respectively. Additionally, let \(\mathcal{S}=\mathcal{S}^{1}\times\cdots\times\mathcal{S}^{N}\) be the joint state space of all agents. Then \(r^{i}:\mathcal{S}\times\mathcal{A}^{i}\rightarrow\mathbb{R}\) is agent \(i\)'s reward function, and \(\mathcal{T}^{i}:\mathcal{S}^{i}\times\mathcal{A}^{i}\rightarrow\Delta( \mathcal{S}^{i})\) is agent \(i\)'s transition probability function. \(\lozenge\)
With \(\mathcal{T}^{i}:\mathcal{S}^{i}\times\mathcal{A}^{i}\rightarrow\Delta( \mathcal{S}^{i})\), we see that \(\mathcal{T}^{i}(s^{i},a^{i})\in\Delta(\mathcal{S}^{i})\) is a probability distribution over the possible next states when taking action \(a^{i}\in\mathcal{A}^{i}\) in state \(s^{i}\in\mathcal{S}^{i}\). We abuse notation and let \(\mathcal{T}^{i}(s^{i},a^{i},y^{i})\) be the probability of transitioning from state \(s^{i}\) to state \(y^{i}\in\mathcal{S}^{i}\) when agent \(i\) takes action \(a^{i}\in\mathcal{A}^{i}\). We now model the collection of agents as a Multi-Agent Markov Decision Process (MMDP).
**Definition 2** (Multi-Agent Markov Decision Process [3]).: A Multi-Agent Markov Decision Process (MMDP) is the tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},r,\gamma,\mathcal{T})\), where \(\mathcal{S}=\mathcal{S}^{1}\times\cdots\times\mathcal{S}^{N}\) is the joint state space, \(\mathcal{A}=\mathcal{A}^{1}\times\cdots\times\mathcal{A}^{N}\) is the joint action space, \(r(s,a)=\frac{1}{N}\sum_{i\in[N]}r^{i}(s,a^{i})\) is the joint reward function given joint state \(s=(s^{1},\ldots,s^{N})\in\mathcal{S}\) and joint action \(a=(a^{1},\ldots,a^{N})\in\mathcal{A}\), the constant \(\gamma\in(0,1]\) is the discount factor on future rewards, and \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) is the joint transition probability distribution. That is, \(\mathcal{T}(s,a,y)\) denotes the probability of transitioning from joint state \(s\) to joint state \(y=(y^{1},\ldots,y^{N})\in\mathcal{S}\) given joint action \(a\), which is defined as \(\mathcal{T}(s,a,y)=\prod_{i=1}^{N}\mathcal{T}^{i}(s^{i},a^{i},y^{i})\) for all \(s,\ y\in\mathcal{S}\) and all \(a\in\mathcal{A}\). \(\lozenge\)
Given a joint action \(a_{j}\in\mathcal{A}\), agent \(i\) takes the local action \(a^{i}_{I_{j}(i)}\in\mathcal{A}^{i}\), where we use \(I_{j}(i)\) to denote the index of agent \(i\)'s local action corresponding to joint action \(j\). That is, for some action \(a_{j}\in\mathcal{A}\) we have \(a_{j}=\left(a^{1}_{I_{j}(1)},a^{2}_{I_{j}(2)},\ldots,a^{N}_{I_{j}(N)}\right)\). For
\[r(s_{j},a_{k})=\frac{1}{N}\sum_{i\in[N]}r^{i}(s_{j},a^{i}_{I_{k}(i)})\]
we define the mapping \(J\) such that
\[r(s_{j},a_{k})=J(\{r^{i}(s_{j},a^{i}_{I_{k}(i)})\}_{i\in[N]}). \tag{1}\]
**Remark 1**.: We emphasize that \(J\) is not simply an average of the vector forms of each agent's rewards. Indeed, because agents' sets of states and actions can have different cardinalities, such an average cannot even be computed in general. Instead, \(J\) iterates over all possible joint states and actions to produce a tabular representation of the joint reward for the MMDP in a way that includes each agent's rewards when taking each of their possible actions in each joint state.
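A minimal sketch of the mapping \(J\) in (1), tabulating the joint reward over every joint state and joint action as described in Remark 1 (the dictionary-based representation is our choice, not the paper's):

```python
from itertools import product

def joint_reward(local_rewards, joint_states, local_action_sets):
    # local_rewards[i][(s, a_i)] is r^i(s, a^i); the result tabulates
    # r(s, a) = (1/N) * sum_i r^i(s, a^i) over all joint states s and all
    # joint actions a = (a^1, ..., a^N).
    N = len(local_rewards)
    return {(s, a): sum(local_rewards[i][(s, a[i])] for i in range(N)) / N
            for s in joint_states
            for a in product(*local_action_sets)}
```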
A joint policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), represented as \(\pi=(\pi^{1},\ldots,\pi^{N})\), where \(\pi^{i}:\mathcal{S}\rightarrow\mathcal{A}^{i}\) is agent \(i\)'s policy for all \(i\in[N]\), is a set of policies which, given joint state \(s\), commands agent \(i\) to take action \(\pi^{i}(s)\). The control objective for an MMDP then is: given a joint reward function \(r\), develop a joint policy that solves
\[\max_{\pi}V_{\pi}(s)=\max_{\pi}E\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},\pi(s _{t}))\right], \tag{2}\]
where we call \(V_{\pi}\) the "value function".
Often, it is necessary to evaluate how well a non-optimal policy performs on a given MMDP. Accordingly, we state the following proposition that the Bellman operator is a contraction mapping, which we will use in Section V to evaluate any policy on a given MMDP.
**Proposition 1** (Policy Evaluation [20]).: Fix an MMDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},r,\gamma,\mathcal{T})\). Let \(V(s^{\prime})\) be the value function at state \(s^{\prime}\), and let \(\pi\) be a joint policy. Define the Bellman operator \(\mathcal{L}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) as
\[\mathcal{L}V\coloneqq\sum_{a\in\mathcal{A}}\pi(s)\left(r(s,a)+\sum_{s^{\prime }\in\mathcal{S}}\gamma\mathcal{T}(s,a,s^{\prime})V(s^{\prime})\right).\]
Then \(\mathcal{L}\) is a \(\gamma\)-contraction mapping with respect to the \(\infty\)-norm. That is, \(\left\lVert\mathcal{L}V_{1}-\mathcal{L}V_{2}\right\rVert_{\infty}\leq\gamma \left\lVert V_{1}-V_{2}\right\rVert_{\infty}\) for all \(V_{1},V_{2}\in\mathbb{R}^{n}\). \(\triangle\)
Solving an MDP has been shown to be \(P\)-Complete and is done efficiently via dynamic programming [20]. Thus, we will treat MMDPs as MDPs with larger state and action spaces, and we will perform policy synthesis for these larger MDPs.
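Since the Bellman operator of Proposition 1 is a \(\gamma\)-contraction, iterating it from any initial value function converges to \(V_{\pi}\); a minimal sketch for a deterministic joint policy (the array layout is our convention):

```python
import numpy as np

def evaluate_policy(T, R, pi, gamma, tol=1e-8):
    # T: (n, m, n) joint transition tensor, R: (n, m) joint rewards,
    # pi: length-n array of action indices (a deterministic joint policy).
    # Iterates the Bellman operator of Proposition 1; convergence follows
    # from the gamma-contraction property.
    n = R.shape[0]
    V = np.zeros(n)
    while True:
        V_new = np.array([R[s, pi[s]] + gamma * T[s, pi[s]] @ V
                          for s in range(n)])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```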
### _Differential Privacy_
We now describe the application of differential privacy to vector-valued data; this will later be applied to agents' rewards represented as vectors. First, we formulate an adjacency relation to define what differential privacy must conceal. The goal of differential privacy is to make "similar" pieces of data appear approximately indistinguishable. The notion of "similar" is defined by an adjacency relation. Many adjacency relations are possible, and we present the one we will use in the remainder of the paper; see [11] for additional background.
**Definition 3** (Adjacency).: Fix an adjacency parameter \(b>0\) and two vectors \(v,w\in\mathbb{R}^{n}\). Then \(v\) and \(w\) are said to be adjacent if the following conditions hold:
1. There exists some \(j\in[n]\) such that \(v_{j}\neq w_{j}\) and \(v_{k}=w_{k}\) for all \(k\in[n]\setminus\{j\}\)
2. \(\left\lVert v-w\right\rVert_{1}\leq b\),
where \(\left\lVert\cdot\right\rVert_{1}\) denotes vector 1-norm. We write \(\mathrm{Adj}_{b}(v,w)=1\) to say \(v\) and \(w\) are adjacent. \(\lozenge\)
Thus, \(v\) and \(w\) are adjacent if they differ in at most one element by at most \(b\).
Differential privacy is enforced by a mechanism which is a randomized map. For a given function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\), a mechanism \(\mathscr{M}\) takes a sensitive input \(x\in\mathbb{R}^{n}\) and returns a differentially private output that approximates \(f(x)\). The private output \(\tilde{y}=\mathscr{M}(x)\) can be generated by adding noise to \(f(x)\), i.e., \(\tilde{y}=f(x)+w\), where \(w\) is a random variable that takes values in \(\mathbb{R}^{n}\). To define the strength of privacy, privacy parameters \(\epsilon>0\) and \(\delta\in[0,\frac{1}{2})\) are used. Differential privacy is formally defined as follows.
**Definition 4** (Differential Privacy [11]).: Fix a probability space \((\Omega,\mathcal{F},\mathbb{P})\). Let \(b>0\), \(\epsilon>0\), and \(\delta\in[0,\frac{1}{2})\) be given. A mechanism \(\mathscr{M}:\mathbb{R}^{n}\times\Omega\rightarrow\mathbb{R}^{m}\) is \((\epsilon,\delta)\)-differentially private if for all \(x\in\mathbb{R}^{n}\) and \(x^{\prime}\in\mathbb{R}^{n}\) adjacent in the sense of Definition 3, and for all measurable sets \(S\subseteq\mathbb{R}^{m}\), we have
\[\mathbb{P}[\mathscr{M}(x)\in S]\leq e^{\epsilon}\mathbb{P}[\mathscr{M}(x^{ \prime})\in S]+\delta.\]
\(\lozenge\)
In Definition 4, the strength of privacy is controlled by the privacy parameters \(\epsilon\) and \(\delta\). In general, smaller values of \(\epsilon\) and \(\delta\) imply stronger privacy guarantees. Here, \(\epsilon\) can be interpreted as quantifying the leakage of private information and \(\delta\) can be interpreted as the probability that differential privacy leaks more information than allowed by \(\epsilon\). Typical values of \(\epsilon\) are \(0.1\) to \(3\)[21] and typical values of \(\delta\) are \(0\) to \(0.05\)[22]. Differential privacy is calibrated using the "sensitivity" of the function being privatized, which we define next.
**Definition 5** (Sensitivity).: The \(\ell_{2}\)-sensitivity of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is
\[\Delta_{2}f=\sup_{\begin{subarray}{c}x,x^{\prime}:\\ \mathrm{Adj}_{b}(x,x^{\prime})=1\end{subarray}}\left\lVert f(x)-f(x^{\prime}) \right\rVert_{2}.\]
\(\lozenge\)
In words, the sensitivity encodes how much \(f\) can differ on two adjacent inputs. A larger sensitivity implies that higher variance noise is needed to mask differences in adjacent data when generating private outputs. We now define a mechanism for enforcing differential privacy, namely the Gaussian Mechanism. The Gaussian Mechanism adds zero-mean noise drawn from a Gaussian distribution to functions of sensitive data.
**Lemma 1** (Gaussian Mechanism; [5]).: Let \(b>0\), \(\epsilon>0\), and \(\delta\in[0,1/2)\) be given, and fix the adjacency relation from Definition 3. The Gaussian mechanism takes sensitive data \(f(x)\in\mathbb{R}^{m}\) as an input and outputs private data
\[\mathcal{G}(x)=f(x)+z,\]
where \(z\sim\mathcal{N}(0,\sigma^{2}I)\). The Gaussian mechanism is \((\epsilon,\delta)\)-differentially private if
\[\sigma\geq\frac{\Delta_{2}f}{2\epsilon}\kappa(\epsilon,\delta),\]
where \(\kappa(\epsilon,\delta)=\left(\mathcal{Q}^{-1}(\delta)+\sqrt{\mathcal{Q}^{-1}( \delta)^{2}+2\epsilon}\right)\), with \(\mathcal{Q}(y)=\frac{1}{\sqrt{2\pi}}\int_{y}^{\infty}e^{-\frac{\theta^{2}}{2}}d\theta\). \(\triangle\)
In this work, we often consider the identity query, namely \(f(x)=x\), which results in \(\Delta_{2}f=b\), where \(b\) is the adjacency parameter from Definition 3. Post-hoc mappings of data privatized using Lemma 1 or any differentially private mechanism are also differentially private, which we formally state next.
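A minimal sketch of the Gaussian mechanism of Lemma 1 for the identity query (so \(\Delta_{2}f=b\)); \(\mathcal{Q}^{-1}\) is the inverse Gaussian survival function, available as `scipy.stats.norm.isf`:

```python
import numpy as np
from scipy.stats import norm

def gaussian_sigma(epsilon, delta, sensitivity):
    # kappa(eps, delta) from Lemma 1; Q^{-1} is the inverse Gaussian
    # survival (Q-) function, i.e. scipy.stats.norm.isf.
    q_inv = norm.isf(delta)
    kappa = q_inv + np.sqrt(q_inv**2 + 2.0 * epsilon)
    return sensitivity * kappa / (2.0 * epsilon)

def gaussian_mechanism(x, epsilon, delta, b, rng=None):
    # Identity query, so Delta_2 f = b (the adjacency parameter).
    if rng is None:
        rng = np.random.default_rng()
    sigma = gaussian_sigma(epsilon, delta, b)
    return np.asarray(x) + rng.normal(0.0, sigma, size=np.shape(x))
```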
**Lemma 2** (Proposition 2.1, [11]).: Let \(\mathscr{M}:\mathbb{R}^{n}\times\Omega\rightarrow\mathbb{R}^{m}\) be an \((\epsilon,\delta)\)-differentially private mechanism. Let \(h:\mathbb{R}^{m}\rightarrow\mathbb{R}^{p}\) be an arbitrary mapping. Then \(h\circ\mathscr{M}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) is \((\epsilon,\delta)\)-differentially private.
### _Problem Statement_
Consider the policy synthesis problem (2). Computing \(\pi^{*}\) depends on the sensitive reward function \(r^{i}\) for all \(i\in[N]\). An adversary observing agents execute \(\pi^{*}\) may use techniques such as inverse reinforcement learning to determine \(r^{i}\). Therefore, we wish to develop a framework for multi-agent policy synthesis that preserves the privacy of \(r^{i}\) while still performing well. This will be done by solving the following problems.
**Problem 1**.: _Develop privacy mechanisms to privatize individual agents' reward functions in MMDPs._
**Problem 2**.: _Develop bounds on the accuracy of privatized rewards that will be used to trade-off privacy and accuracy._
**Problem 3**.: _Determine the loss in agents' total accumulated reward that results from using a policy computed on the privatized rewards._
## III Private Policy Synthesis
In this section, we solve Problem 1. First, we illustrate how we represent reward functions to apply differential privacy. Then, we present two mechanisms for applying privacy to our representation of reward functions. Let \(n_{i}=|\mathcal{S}^{i}|\) and \(m_{i}=|\mathcal{A}^{i}|\) be the numbers of local states and local actions, respectively, for agent \(i\). The joint state space and joint action space then have \(n=\prod_{i\in[N]}n_{i}\) states and \(m=\prod_{i\in[N]}m_{i}\) actions, respectively.
### _Privacy Setup_
To use Lemma 1 to enforce differential privacy, we first express the reward function as a vector. We represent the mapping \(r^{i}\) as a vector \(R^{i}\in\mathbb{R}^{nm_{i}}\), where the entries correspond to \(r^{i}\) being evaluated on all of its inputs. To elaborate, since \(r^{i}:\mathcal{S}\times\mathcal{A}^{i}\rightarrow\mathbb{R}\), there is a scalar reward associated with every state-action pair \((s,a^{i})\in\mathcal{S}\times\mathcal{A}^{i}\) comprised by a joint state \(s\) and local action \(a^{i}\). Furthermore, agent \(i\) has \(nm_{i}\) distinct state-action pairs of this kind. We therefore define \(R^{i}\) to be the vector with entries \(r^{i}(s,a^{i})\) for all \(s\in\mathcal{S}\) and \(a^{i}\in\mathcal{A}^{i}\).
We use the following convention for representing \(R^{i}\). Denote the joint states by \(s_{1},s_{2},\ldots,s_{n}\) and denote the local actions by \(a^{i}_{1},a^{i}_{2},\ldots,a^{i}_{m_{i}}\). Then we set
\[R^{i}=\Big{[}r^{i}(s_{1},a^{i}_{1}),r^{i}(s_{1},a^{i}_{2}), \ldots,r^{i}(s_{1},a^{i}_{m_{i}}),r^{i}(s_{2},a^{i}_{1}),\\ r^{i}(s_{2},a^{i}_{2}),\ldots,r^{i}(s_{2},a^{i}_{m_{i}}),\ldots \\ r^{i}(s_{n},a^{i}_{1}),r^{i}(s_{n},a^{i}_{2}),\ldots,r^{i}(s_{n}, a^{i}_{m_{i}})\Big{]}^{T}, \tag{3}\]
where \(R^{i}_{j}\) denotes the \(j^{th}\) entry of the vector \(R^{i}\). This vector fixes the joint state \(s_{1}\) and computes the reward for this state and each local action. Then it proceeds to \(s_{2}\) and does the same for \(s_{2}\) through \(s_{n}\). This same process can be repeated to represent the agents' joint reward \(R\in\mathbb{R}^{nm}\) by fixing the joint state \(s_{1}\) and computing the reward for this state and each joint action, then doing the same for \(s_{2}\) through \(s_{n}\).
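A one-line sketch of this ordering convention (our illustration):

```python
def vectorize_reward(r_i, joint_states, local_actions):
    # R^i per eq. (3): fix joint state s_1 and sweep agent i's local actions
    # a^i_1, ..., a^i_{m_i}, then repeat for s_2 through s_n.
    return [r_i(s, a) for s in joint_states for a in local_actions]
```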
Using Definition 3, we say two reward functions belonging to agent \(i\), denoted \(R^{i}\) and \(\hat{R}^{i}\), are adjacent if they differ in one entry, with their absolute difference bounded above by \(b\). Let the vectors \(R^{i}\) and \(\hat{R}^{i}\) correspond to the reward functions \(r^{i}\) and \(\hat{r}^{i}\), respectively. Adjacency of \(R^{i}\) and \(\hat{R}^{i}\) is equivalent to the existence of some indices \(k\in[n]\) and \(\ell\in[m_{i}]\) such that \(r^{i}(s_{k},a^{i}_{\ell})\neq\hat{r}^{i}(s_{k},a^{i}_{\ell})\) and \(r^{i}(s_{c},a^{i}_{d})=\hat{r}^{i}(s_{c},a^{i}_{d})\) for all \(c\in[n]\backslash\{k\}\) and \(d\in[m_{i}]\backslash\{\ell\}\).
We note that the solution to Problem 1 is not unique, and we consider two means of enforcing privacy. In both setups, agents share their reward functions with an aggregator, the aggregator computes a joint decision policy for the agents, and then the aggregator sends each agent its constituent policy from the joint policy. The two setups we consider differ in where privacy is implemented in them.
First, we apply the Gaussian mechanism to agent \(i\)'s list of rewards, \(R^{i}\), referred to as "input perturbation". In input perturbation, the aggregator receives privatized rewards, denoted \(\tilde{R}^{i}\) from agent \(i\) for all \(i\in[N]\), and the aggregator uses those to generate a policy \(\tilde{\pi}^{*}\) for the MMDP. In the second setup, we instead apply the Gaussian mechanism to the list of joint rewards, \(R\), referred to as "output perturbation". In output perturbation, the aggregator receives sensitive rewards \(r^{i}\) from agent \(i\) for all \(i\in[N]\), computes the vector of joint rewards \(R\), and applies privacy to it to generate \(\tilde{R}\). Then it uses \(\tilde{R}\) to generate a policy \(\tilde{\pi}^{*}\) for the MMDP [11]. We detail each approach next.
### _Input Perturbation_
In the case of input perturbation, each agent privatizes the identity map of its own reward. Formally, for all \(i\in[N]\) agent \(i\)'s reward vector is privatized by taking \(\tilde{R}^{i}=R^{i}+w^{i}\), where \(w^{i}\sim\mathcal{N}(0,\sigma^{2}I)\) for all \(i\in[N]\). The vector \(\tilde{R}^{i}\) can be put into one-to-one correspondence with the private reward function \(\tilde{r}^{i}\) in the obvious way. Then, we use \(\tilde{r}(s_{j},a_{k})=J(\{\tilde{r}^{i}(s_{j},a^{i}_{I_{k}(i)})\}_{i\in[N]})\), where \(J\) is from (1), for all \(j\in[n]\) and \(k\in[m]\) to compute \(\tilde{r}\), which is the private form of the joint reward \(r\) from Definition 2. The private reward \(\tilde{r}\) is then used in place of \(r\) to synthesize the agents' joint policy. After privatization, policy synthesis is post-processing of differentially private data, which implies that the policy also keeps each agent's reward function differentially private due to Lemma 2. Algorithm 1 presents this method of determining policies from private agent reward functions using input perturbation.
**Remark 2**.: For input perturbation, adjacency in the sense of Definition 3 implies that every agent's reward may differ in up to one entry by an amount up to \(b\), and these are the differences that must be masked by privacy.
We next state a theorem proving that input perturbation, i.e., Algorithm 1, is \((\epsilon,\delta)\)-differentially private.
**Theorem 1** (Solution to Problem 1).: _Given privacy parameters \(\epsilon>0\), \(\delta\in[0,0.5)\), and adjacency parameter \(b>0\), the mapping from \(\{r^{i}\}_{i\in[N]}\) to \(\{\pi^{*,i}\}_{i\in[N]}\) defined by Algorithm 1 keeps each \(r^{i}\)\((\epsilon,\delta)\)-differentially private with respect to the adjacency relation in Definition 3._
Proof.: Setting \(\sigma=\frac{b}{2\epsilon}\kappa(\epsilon,\delta)\), the result immediately follows
from Lemma 1 and Lemma 2 because the computation of \(\{\tilde{\pi}^{*,i}\}_{i\in[N]}\) is simply post-processing of differentially private data.
Using Algorithm 1, each agent enforces the privacy of its own reward function before sending it to the aggregator. Performing input perturbation this way enforces differential privacy on a per-agent basis, which is referred to as "local differential privacy" [23]. The main advantage of input perturbation is that the aggregator does not need to be trusted since it is only sent privatized information. The flow of information for input and output perturbation is seen in Figure 1. Another advantage is that agents may select differing levels of privacy. However, to provide a fair comparison with output perturbation, the preceding implementation considers each agent using the same \(\epsilon\) and \(\delta\).
### _Output Perturbation_
In output perturbation, for all \(i\in[N]\) agent \(i\) sends its sensitive (non-private) vector rewards, \(R^{i}\), to the aggregator. Then the aggregator uses these rewards to form the joint reward \(R\). For privacy, noise is added to the joint reward vector, namely \(\tilde{R}=R+w\), where \(w\sim\mathcal{N}(0,\sigma^{2}I)\). Similar to the input perturbation setup, computing the joint policy using the privatized \(\tilde{R}\) is differentially private because computation of the policy is post-processing of private data; this \(\tilde{R}\) can be related to a function \(\tilde{r}\) in the obvious way. Algorithm 2 presents this method of computing policies when reward functions are privatized using output perturbation.
```
Inputs: \(\{r^{i}\}_{i\in[N]}\), \(\mathcal{S}\), \(\mathcal{A}\), \(\gamma\), \(\mathcal{T}\), \(\epsilon\), \(\delta\), \(b\), \(N\);
Outputs: Privacy-preserving policies \(\{\tilde{\pi}^{*,i}\}_{i\in[N]}\);
for all \(i\in[N]\) in parallel do
  Agent \(i\) generates its private reward function with the Gaussian mechanism, \(\tilde{R}^{i}=R^{i}+w^{i}\);
  Agent \(i\) sends \(\tilde{R}^{i}\) to the aggregator;
end for
Aggregator generates joint reward function: \(\tilde{r}(s_{j},a_{k})=J(\{\tilde{r}^{i}(s_{j},a_{I_{k}(i)}^{i})\}_{i\in[N]})\) for all \(j\in[n]\) and \(k\in[m]\);
Aggregator generates joint policy, \(\tilde{\pi}^{*}=\arg\max_{\pi}E\left[\sum_{t=0}^{\infty}\gamma^{t}\tilde{r}(s _{t},\pi(s_{t}))\right]\);
Aggregator sends \(\tilde{\pi}^{*,i}\) to agent \(i\) for all \(i\in[N]\)
```
**Algorithm 1** Private Policy Synthesis via Input Perturbation
**Remark 3**.: For output perturbation, adjacency in the sense of Definition 3 implies only a single agent's reward may change. If we regard the collection of vectorized rewards sent to the aggregator as a single vector, then adjacency in Definition 3 says that only a single entry in this vector can change. Thus, adjacency in the setting of output perturbation allows only a single agent's reward to change, and it can change by an amount of up to \(b\). However, given the mapping \(J\) in (1) between all agents' rewards and the joint reward, this difference will affect the joint reward in several places. We account for this fact next in calibrating the noise required for privacy.
We now state a theorem showing that output perturbation, i.e., Algorithm 2, is \((\epsilon,\delta)\)-differentially private.
**Theorem 2** (Alternative Solution to Problem 1).: _Fix privacy parameters \(\epsilon>0\), \(\delta\in[0,0.5)\), and adjacency parameter \(b>0\). Set \(\mu=\max_{j\in[N]}\prod_{\begin{subarray}{c}\ell=1\\ \ell\neq j\end{subarray}}^{N}m_{\ell}\). The mapping from \(\{r^{i}\}_{i\in[N]}\) to \(\{\pi^{*,i}\}_{i\in[N]}\) defined by Algorithm 2 keeps each \(r^{i}\)\((\epsilon,\delta)\)-differentially private with respect to the adjacency relation in Definition 3._

Fig. 1: The flow of information for (a) input perturbation and (b) output perturbation. In input perturbation, agent \(i\) sends the aggregator its privatized reward function \(\tilde{r}^{i}\), while in output perturbation, agent \(i\) sends the sensitive (non-privatized) reward function \(r^{i}\). By privatizing their reward before sending it, agents using input perturbation have greater control of the strength of privacy used to protect their rewards, and the variance of the noise needed to provide privacy is less than that of output perturbation.
Proof.: From Lemma 1, Algorithm 2 is differentially private if \(\sigma\) is chosen according to
\[\sigma\geq\frac{\Delta_{2}J}{2\epsilon}\kappa(\epsilon,\delta). \tag{4}\]
We now compute \(\Delta_{2}J\) and substitute into (4), which will complete the proof, since the computation of \(\tilde{\pi}^{*,i}\) is differentially private in accordance with Lemma 2 by virtue of being post-processing of differentially private data.
For each \(i\in[N]\), let \(r^{i}\) and \(\hat{r}^{i}\) denote two adjacent reward functions and let \(R^{i}\) and \(\hat{R}^{i}\) denote their vectorized forms as defined in (3). Then
\[R =\frac{1}{N}\Big{[}\sum_{i\in[N]}r^{i}\left(s_{1},a^{i}_{I_{1}(i)} \right),\sum_{i\in[N]}r^{i}\left(s_{1},a^{i}_{I_{2}(i)}\right),\cdots\] \[\sum_{i\in[N]}r^{i}\left(s_{1},a^{i}_{I_{m}(i)}\right),\sum_{i\in [N]}r^{i}\left(s_{2},a^{i}_{I_{1}(i)}\right),\cdots\] \[\sum_{i\in[N]}r^{i}\left(s_{n},a^{i}_{I_{m}(i)}\right)\Big{]}^{T }\in\mathbb{R}^{nm},\]
with \(\hat{R}\) defined analogously. As noted in Remark 3, \(R\) and \(\hat{R}\) can differ for only one agent. Suppose the index of that agent is \(j\). We then determine the sensitivity from Definition 5 of \(J\) as follows:
\[\Delta_{2}J=\max_{\begin{subarray}{c}\mathrm{Adj}_{b}(R^{j},\hat{R}^{j})=1\\ R^{i}=\hat{R}^{i},\,i\neq j\end{subarray}}\left\|R-\hat{R}\right\|_{2}.\]
Since \(R^{j}\) and \(\hat{R}^{j}\) are \(b\)-adjacent, there exist indices \(k\in[n]\) and \(\ell\in[m_{j}]\) such that \(r^{j}(s_{k},a^{j}_{\ell})\neq\hat{r}^{j}(s_{k},a^{j}_{\ell})\) and \(r^{j}(s_{c},a^{j}_{d})=\hat{r}^{j}(s_{c},a^{j}_{d})\) for all \(c\in[n]\backslash\{k\}\) and \(d\in[m_{j}]\backslash\{\ell\}\). Using the fact that \(R^{j}\) and \(\hat{R}^{j}\) are \(b\)-adjacent, we obtain
\[\Delta_{2}J=\max_{\begin{subarray}{c}\mathrm{Adj}_{b}(R^{j},\hat{R}^{j})=1\\ R^{i}=\hat{R}^{i},\,i\neq j\end{subarray}}\frac{1}{N}\Big{\|}\Big{[}0,\cdots,r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_{\ell}),\\ \cdots,r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_{\ell}),\cdots,0,\cdots,0\Big{]}^{T}\Big{\|}_{2}, \tag{5}\]
where \(r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_{\ell})\) appears \(\prod_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{N}m_{i}\) times, since the local action \(a^{j}_{\ell}\) appears in \(\prod_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{N}m_{i}\) joint actions. Note that the indices of entries in \(R\) that contain the term \(r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_{\ell})\) depend on how the user orders the joint actions, but any convention for doing so yields the same sensitivity.
Since \(\|R-\hat{R}\|_{2}\leq\|R-\hat{R}\|_{1}\), we upper bound (5) as
\[\Delta_{2}J\leq\frac{1}{N}|r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_ {\ell})|\prod_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{N}m_{i}. \tag{6}\]
Additionally, from our definition of adjacency, we have
\[|r^{j}(s_{k},a^{j}_{\ell})-\hat{r}^{j}(s_{k},a^{j}_{\ell})|\leq b. \tag{7}\]
We then combine (6) and (7) to find
\[\Delta_{2}J\leq\frac{b}{N}\max_{j\in[N]}\prod_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{N}m_{i}=\frac{b}{N}\mu,\]
which we then substitute into (4) to find
\[\sigma\geq\frac{b}{2\epsilon N}\kappa(\epsilon,\delta)\mu,\]
which, combined with Lemma 2, completes the proof.
Unlike input perturbation, output perturbation requires that agents trust the aggregator with their sensitive information. Additionally, all agents will have the same level of privacy.
Contrary to some other privacy literature, we expect significantly better performance using input perturbation over output perturbation for an increasing number of agents at a given \(\epsilon\). For output perturbation, the standard deviation \(\sigma\) used to calibrate the noise added for privacy essentially grows exponentially with the number of agents, which can be seen in the term \(\mu\). This is because we consider the joint state \(s\) in evaluating \(r^{i}(s,a^{i})\) and each joint state-local action pair will appear many times in \(\Delta_{2}J\). In the case of input perturbation, since the joint reward is computed from privatized rewards, the standard deviation of privacy noise does not depend on the number of agents. The standard deviations for varying \(N\) and \(\epsilon\) are shown in Table I and Table II, respectively. Table I shows that the variance of privacy noise is constant for input perturbation for all \(N\), while for output perturbation it grows rapidly with \(N\). Table II shows that the variance of noise needed for input perturbation is less than that needed for output perturbation for the same level of privacy, quantified by \(\epsilon\), when \(N>1\).
**Remark 4**.: Given the significantly lower standard deviation for input perturbation versus output perturbation at a given \(\epsilon\) for \(N>1\), we focus on input perturbation going forward unless stated otherwise.
The difference in performance between input and output perturbation is explored further in Example 1 in Section VI. In the next section, we analyze the accuracy of the privatized rewards and quantify privacy-accuracy trade-offs.
\begin{table}
\begin{tabular}{c c c} \hline \(\epsilon\) & Input Perturbation & Output Perturbation \\ \hline \(0.1\) & 23.48 & 46.95 \\ \(1\) & 2.524 & 5.049 \\ \(5\) & 0.6251 & 1.250 \\ \(10\) & 0.3684 & 0.7367 \\ \hline \end{tabular}
\end{table} TABLE II: Standard Deviation required to provide differential privacy at \(N=2\) for varying \(\epsilon\)
\begin{table}
\begin{tabular}{c c c} \hline \(N\) & Input Perturbation & Output Perturbation \\ \hline \(1\) & \(2.524\) & \(2.524\) \\ \(2\) & \(2.524\) & \(5.049\) \\ \(5\) & \(2.524\) & \(129.3\) \\ \(10\) & \(2.524\) & \(66,174\) \\ \hline \end{tabular}
\end{table} TABLE I: Standard Deviation required to provide differential privacy at \(\epsilon=1\) for varying \(N\)
## IV Accuracy Analysis
In this section, we solve Problem 2. Specifically, we analyze the accuracy of the reward functions that are privatized using input perturbation. To do so, we compute an upper bound on the expected maximum absolute error between the sensitive, non-private joint reward \(r\) and the privatized reward \(\tilde{r}\), namely \(\mathbb{E}\left[\max_{k,j}|\tilde{r}(s_{k},a_{j})-r(s_{k},a_{j})|\right]\). Then we use this accuracy bound to develop guidelines for calibrating privacy, i.e., guidelines for choosing \(\epsilon\) given some bound on allowable maximum error, \(A\). The maximum absolute error is bounded as follows.
**Theorem 3** (Solution to Problem 2).: _Fix privacy parameters \(\epsilon>0\), \(\delta\in[0,0.5)\), adjacency parameter \(b>0\), and MMDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},r,\gamma,\mathcal{T})\) with \(N\) agents, \(n\) joint states, and \(m\) joint actions. Let \(\tilde{r}\) be defined as in Algorithm 1. Then_
\[\mathbb{E}\left[\max_{k,j}|\tilde{r}(s_{k},a_{j})-r(s_{k},a_{j})|\right]\leq \frac{Cb}{2\epsilon}\kappa(\epsilon,\delta)\]
_where \(C=\sqrt{\frac{2}{N\pi}}+\sqrt{\left(1-\frac{2}{\pi}\right)\frac{(nm-1)}{N}}\)._
Proof.: First we analyze the vectorized rewards \(R\) and \(\tilde{R}\). Since \(\tilde{r}^{i}\) is generated using Algorithm 1, we have \(\tilde{r}^{i}(s_{k},a^{i}_{I_{j}(i)})=r^{i}(s_{k},a^{i}_{I_{j}(i)})+w^{k}_{I_{j }(i)}\), where \(w^{k}_{I_{j}(i)}\sim\mathcal{N}(0,\sigma^{2})\). For any \(s_{k}\in\mathcal{S}\) and \(a_{j}\in\mathcal{A}\) the corresponding entry of \(\tilde{R}-R\) is
\[\frac{1}{N}\sum_{i\in[N]}\tilde{r}^{i}(s_{k},a^{i}_{I_{j}(i)})-r^{i}(s_{k},a^{ i}_{I_{j}(i)})=\frac{1}{N}\sum_{i\in[N]}w^{k}_{I_{j}(i)}.\]
Thus, the entirety of \(\tilde{R}-R\) is
\[\tilde{R}-R=\frac{1}{N}\Big{[}\sum_{i\in[N]}w^{1}_{I_{1}(i)},\sum _{i\in[N]}w^{1}_{I_{2}(i)},\cdots,\\ \sum_{i\in[N]}w^{2}_{I_{1}(i)},\sum_{i\in[N]}w^{2}_{I_{2}(i)}, \cdots,\sum_{i\in[N]}w^{n}_{I_{m}(i)}\Big{]}^{T}.\]
Let \(y_{\ell}\) be the \(\ell^{th}\) entry of \(\tilde{R}-R\). The expected maximum absolute error over the entries is then
\[\mathbb{E}\left[\max_{\ell}|y_{\ell}|\right]. \tag{8}\]
For any \(\ell\), we have \(y_{\ell}\sim\mathcal{N}\left(0,\frac{\sigma^{2}}{N}\right),\) and that \(|y_{\ell}|\) has a folded-normal distribution [24] because it is equal to the absolute value of a normal random variable. As a result, \(\mathbb{E}\left[|y_{\ell}|\right]=\sigma\sqrt{\frac{2}{N\pi}}\) and \(\text{Var}\left[|y_{\ell}|\right]=\frac{\sigma^{2}}{N}\left(1-\frac{2}{\pi} \right).\)
To upper bound (8) we apply Equation (4) from [25] with \(k=nm\) which gives
\[\mathbb{E}\left[\max_{\ell}|y_{\ell}|\right]\leq\mathbb{E}\left[| y_{\ell}|\right]+\sqrt{\text{Var}\left[|y_{\ell}|\right](nm-1)}\\ =\sigma\sqrt{\frac{2}{N\pi}}+\sqrt{\frac{\sigma^{2}}{N}\left(1- \frac{2}{\pi}\right)(nm-1)}\\ =\left(\sqrt{\frac{2}{N\pi}}+\sqrt{\left(1-\frac{2}{\pi}\right) \frac{(nm-1)}{N}}\right)\sigma.\]
Substituting \(\sigma=\frac{b}{2\epsilon}\kappa(\epsilon,\delta)\) gives the expression of interest.
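As an empirical sanity check on this bound (the comparison plotted in Figure 2(a)), note that each entry of \(\tilde{R}-R\) is a mean of \(N\) i.i.d. \(\mathcal{N}(0,\sigma^{2})\) draws and can therefore be sampled directly. A minimal Python sketch with the Figure 2 parameters \(\delta=0.01\), \(nm=8\), \(b=1\), and an assumed \(N=2\):

```python
import numpy as np
from scipy.stats import norm

eps, delta, b, N, nm = 1.0, 0.01, 1.0, 2, 8   # Figure 2 parameters; N assumed
q = norm.isf(delta)
sigma = b / (2 * eps) * (q + np.sqrt(q**2 + 2 * eps))

# Each entry of R_tilde - R is a mean of N i.i.d. N(0, sigma^2) draws,
# i.e. N(0, sigma^2 / N); take the max absolute value over all nm entries.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, sigma / np.sqrt(N), size=(100_000, nm))
empirical = np.abs(samples).max(axis=1).mean()

C = np.sqrt(2 / (N * np.pi)) + np.sqrt((1 - 2 / np.pi) * (nm - 1) / N)
bound = C * sigma  # = (C b / 2 eps) * kappa(eps, delta), the Theorem 3 bound
print(empirical, bound)  # empirical expected max error vs. upper bound
```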
**Corollary 1**.: _Fix \(\delta\in[0,0.5)\) and let the conditions from Theorem 3 hold. A sufficient condition to achieve \(\mathbb{E}\left[\max_{k,j}|\tilde{r}(s_{k},a_{j})-r(s_{k},a_{j})|\right]\leq A\) is given by_
\[\epsilon\geq\frac{2C^{2}b^{2}}{4A^{2}}+\frac{Cb\mathcal{Q}^{-1}(\delta)}{A},\]
_where \(C=\sqrt{\frac{2}{N\pi}}+\sqrt{\left(1-\frac{2}{\pi}\right)\frac{(nm-1)}{N}}\)._
Proof.: Ensuring that the upper bound from Theorem 3 is less than or equal to \(A\) provides a sufficient condition for \(\mathbb{E}\left[\max_{k,j}|\tilde{r}(s_{k},a_{j})-r(s_{k},a_{j})|\right]\leq A\). We solve for a condition on \(\epsilon\) as follows,
\[\frac{Cb}{2\epsilon}\left(\mathcal{Q}^{-1}(\delta)+\sqrt{\mathcal{ Q}^{-1}(\delta)^{2}+2\epsilon}\right)\leq A\\ \sqrt{\mathcal{Q}^{-1}(\delta)^{2}+2\epsilon}\leq\frac{2A\epsilon }{Cb}-\mathcal{Q}^{-1}(\delta)\\ \mathcal{Q}^{-1}(\delta)^{2}+2\epsilon\leq\left(\frac{2A\epsilon }{Cb}-\mathcal{Q}^{-1}(\delta)\right)^{2}=\\ \frac{4A^{2}}{C^{2}b^{2}}\epsilon^{2}-\frac{4A\mathcal{Q}^{-1}( \delta)}{Cb}\epsilon+\mathcal{Q}^{-1}(\delta)^{2}. \tag{9}\]
Rearranging (9) gives
\[0\leq\frac{4A^{2}}{C^{2}b^{2}}\epsilon^{2}-\left(\frac{4A \mathcal{Q}^{-1}(\delta)}{Cb}+2\right)\epsilon=\\ \epsilon\left(\frac{4A^{2}}{C^{2}b^{2}}\epsilon-\frac{4A\mathcal{ Q}^{-1}(\delta)}{Cb}-2\right). \tag{10}\]
Since \(\epsilon>0\), (10) holds as long as
\[\frac{4A^{2}}{C^{2}b^{2}}\epsilon-\frac{4A\mathcal{Q}^{-1}(\delta)}{Cb}-2\geq 0. \tag{11}\]
Rearranging (11) we find
\[\epsilon\geq\frac{C^{2}b^{2}}{4A^{2}}\left(2+\frac{4A\mathcal{Q}^{-1}(\delta)}{ Cb}\right)=\frac{2C^{2}b^{2}}{4A^{2}}+\frac{Cb\mathcal{Q}^{-1}(\delta)}{A},\]
which completes the proof.
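In practice this condition can be evaluated directly to calibrate \(\epsilon\) for a target error \(A\); a minimal Python sketch (the parameter values below are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def sufficient_eps(A, b, delta, N, nm):
    # Corollary 1: eps >= 2 C^2 b^2 / (4 A^2) + C b Q^{-1}(delta) / A
    C = np.sqrt(2 / (N * np.pi)) + np.sqrt((1 - 2 / np.pi) * (nm - 1) / N)
    return 2 * C**2 * b**2 / (4 * A**2) + C * b * norm.isf(delta) / A

# e.g. a target expected maximum error of A = 1 in the setting of Figure 2:
print(sufficient_eps(A=1.0, b=1.0, delta=0.01, N=2, nm=8))
```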
Theorem 3 shows that the accuracy of the privatized rewards depends on the size of the MMDP \(nm\), the number of agents \(N\), and the privacy parameters \(\epsilon\), \(\delta\), and \(b\); this dependence is shown in Figure 2(a). Corollary 1 provides a trade-off between privacy and accuracy that allows users to design the strength of privacy \(\epsilon\) according to some expected maximum absolute error, \(A\). Figure 2(b) shows this bound for an example MMDP with \(\delta=0.01\), \(nm=8\), and \(b=1\). In the next section we discuss an algorithm for computing the long term performance loss due to privacy in terms of the value function.
## V Cost of Privacy
In this section, we solve Problem 3. To do so, we compare (i) an optimal policy generated on the original, non-private reward functions, denoted \(\pi^{*}\), and (ii) an optimal policy generated on the privatized reward functions, denoted \(\tilde{\pi}^{*}\). Beginning at some state \(s_{0}\in\mathcal{S}\), the function \(V_{\pi^{*}}(s_{0})\)
encodes the performance of the MMDP given the optimal policy and \(V_{\tilde{\pi}^{*}}(s_{0})\) encodes the performance of the MMDP given the policy generated on private rewards. We analyze the loss in performance due to privacy, and we refer to the "cost of privacy" metric introduced in [12], namely \(|V_{\tilde{\pi}^{*}}(s_{0})-V_{\pi^{*}}(s_{0})|\), for quantifying this performance loss. Thus, we must compute \(V_{\pi^{*}}(s_{0})\) and \(V_{\tilde{\pi}^{*}}(s_{0})\) to quantify the cost of privacy. Note that there is not a closed form for computing a value function \(V_{\pi}\) in general. Proposition 1 provides a method for empirically evaluating the value function for a policy on a given MMDP. We can then compute the cost of privacy numerically for any problem using an off-the-shelf policy-evaluation algorithm, and we compute the computational complexity of doing so.
Next, we state a theorem on the number of operations required to compute the cost of privacy:
**Theorem 4** (Solution to Problem 3).: _Fix privacy parameters \(\epsilon>0\), \(\delta\in[0,1/2)\). Let \(\tilde{r}\) be the output of Algorithm 1. Given an MMDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},r,\gamma,\mathcal{T})\), the number of computations required to compute the cost of privacy \(|V_{\tilde{\pi}^{*}}(s_{0})-V_{\pi^{*}}(s_{0})|\) to within \(\eta\) of the exact value is_
\[nm\left(\left\lceil\frac{\log\left[\frac{2R_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil+\left\lceil\frac{\log\left[\frac{2\tilde{R}_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil\right),\]
_where \(R_{\text{max}}=\max_{s,a}|r(s,a)|\) and \(\tilde{R}_{\text{max}}=\max_{s,a}|\tilde{r}(s,a)|\)._
Proof.: First, we define \(R_{\text{max}}=\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}|r(s,a)|\) and \(\tilde{R}_{\text{max}}=\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}|\tilde{r}(s,a)|\). These bound the non-private and private maximum value functions, denoted \(v_{\text{max}}\) and \(\tilde{v}_{\text{max}}\) respectively, via \(v_{\text{max}}\leq\frac{R_{\text{max}}}{1-\gamma}\) and \(\tilde{v}_{\text{max}}\leq\frac{\tilde{R}_{\text{max}}}{1-\gamma}\). From Proposition 1, we know that, given the value function at convergence, namely \(v_{\infty}\), and the value function at iteration \(k+1\), namely \(v_{k+1}\), we have
\[\left\lVert v_{k+1}-v_{\infty}\right\rVert_{\infty}=\left\lVert\mathcal{L}v_{k}-\mathcal{L}v_{\infty}\right\rVert_{\infty}\leq\gamma\left\lVert v_{k}-v_{\infty}\right\rVert_{\infty},\]
which implies
\[\left\lVert v_{k+1}-v_{\infty}\right\rVert_{\infty}\leq\frac{\gamma^{k}}{1- \gamma}\left\lVert v_{1}-v_{0}\right\rVert_{\infty}\leq\frac{2\gamma^{k}v_{ \text{max}}}{1-\gamma}, \tag{12}\]
where \(v_{0}\) is the initial guess of the value function and \(v_{1}\) is the value function after \(1\) iteration of policy evaluation. However, \(v_{\text{max}}\) may not be known exactly. As a result, we upper bound (12) using \(v_{\text{max}}\leq\frac{R_{\text{max}}}{1-\gamma}\) to find
\[\frac{2\gamma^{k}v_{\text{max}}}{1-\gamma}\leq\frac{2\gamma^{k}R_{\text{max}}} {(1-\gamma)^{2}}.\]
To compute the non-private value function \(V_{\pi^{*}}\) within \(\eta\) of the limiting value, we wish to find the number of iterations of policy evaluation such that
\[\frac{2\gamma^{k}R_{\text{max}}}{(1-\gamma)^{2}}\leq\eta. \tag{13}\]
We then rearrange (13) to find
\[K_{1}=\left\lceil\frac{\log\left[\frac{2R_{\text{max}}}{\eta(1-\gamma)^{2}} \right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil.\]
Following an identical construction, for computing the private value function \(V_{\tilde{\pi}^{*}}\) to within \(\eta\) of its exact value we have that
\[K_{2}=\left\lceil\frac{\log\left[\frac{2\tilde{R}_{\text{max}}}{\eta(1-\gamma)^ {2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil\]
iterations of policy evaluation are required.
Fig. 2: Simulation of (a) Theorem 3 and (b) Corollary 1 with \(\delta=0.01\), \(nm=8\), and \(b=1\). In (a), we see that Theorem 3 captures the qualitative behavior of the empirically computed expected maximal error, while also providing accurate numerical estimates of the true maximal error. In (b), increasing privacy strength (decreasing \(\epsilon\)) from \(\epsilon=3\) to \(\epsilon=2\) leads to negligible increases in maximum average error. Furthermore, the change in maximum average error is small until \(\epsilon\leq 2\), indicating marginal performance losses until privacy is strong.
Therefore, the total number of iterations of policy evaluation required to determine the cost of privacy is
\[K=K_{1}+K_{2}=\left\lceil\frac{\log\left[\frac{2R_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil+\left\lceil\frac{\log\left[\frac{2\tilde{R}_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil.\]
Additionally, each iteration of policy evaluation requires looping through all \(nm\) values of the reward function. As a result, the total number of computations required to compute the cost of privacy within \(\eta\) of its exact value is
\[nm\left(\left\lceil\frac{\log\left[\frac{2R_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil+\left\lceil\frac{\log\left[\frac{2\tilde{R}_{\text{max}}}{\eta(1-\gamma)^{2}}\right]}{\log\left(\frac{1}{\gamma}\right)}\right\rceil\right).\]
In words, \(nm\) is the number of computations required within each iteration of policy evaluation, while the term in the parentheses is the number of iterations of policy evaluation required. If \(\pi^{*}\) is generated using value iteration, the user already has access to \(V_{\pi^{*}}\) and thus does not need to compute it again for the cost of privacy. In this case, only \(V_{\tilde{\pi}^{*}}\) needs to be computed. However, we state the total number of computations required assuming the user only has access to \(\pi^{*}\) and \(\tilde{\pi}^{*}\).
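The iteration counts above are cheap to evaluate in practice; a minimal Python sketch (the parameter values below are illustrative only):

```python
import math

def sweeps_needed(R_max, eta, gamma):
    # ceil( log(2 R_max / (eta (1 - gamma)^2)) / log(1 / gamma) ), cf. Theorem 4
    return math.ceil(math.log(2 * R_max / (eta * (1 - gamma) ** 2))
                     / math.log(1 / gamma))

# total operations to evaluate the cost of privacy within eta:
nm, R_max, R_max_tilde, eta, gamma = 8, 1.0, 1.5, 1e-3, 0.9
ops = nm * (sweeps_needed(R_max, eta, gamma) + sweeps_needed(R_max_tilde, eta, gamma))
print(ops)
```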
## VI Numerical Simulations
In this section, we consider two MDPs to numerically simulate the results discussed in this work. First, we consider a small \(2\times 2\) gridworld example to discuss the cost of privacy with increasing number of agents, and then we consider a larger MDP with a single agent to assess how differential privacy changes a state trajectory over time. To compare the change in value between input perturbation and output perturbation, consider the following example comparing the cost of privacy with increasing \(\epsilon\) for the case of \(N=2\) agents.
**Example 1** (Gridworld).: Consider the \(2\times 2\) gridworld environment from Figure 3(a) where the goal is for both agents to occupy state \(0\). The set of local agent actions is \(\mathcal{A}^{i}=\{\texttt{left},\texttt{right},\texttt{up},\texttt{down},\texttt{stay}\}\) with \(r^{i}(s,a^{i})=-1\) for all \(s\in\mathcal{S}\) and \(a^{i}\in\mathcal{A}^{i}\setminus\{\texttt{stay}\}\) and \(r^{i}((0,0),\texttt{stay})=1\). For any action \(a^{i}\in\mathcal{A}^{i}\), the probability of agent \(i\) slipping into a state not commanded by the action is \(p=0.1\). We consider the adjacency parameter \(b=3\) and privacy parameters \(\epsilon\in[0.1,15]\) and \(\delta=0.01\). Figure 3(b) shows a sample environment with \(\epsilon=1\). Simulating this environment with \(N=2\) agents, Figure 4 shows the change in utility with decreasing strength of privacy (quantified by growing \(\epsilon\)).
Output perturbation has a greater cost of privacy due to the essentially exponential dependence of its noise variance on the number of agents. For both input and output perturbation, the cost of privacy decreases as \(\epsilon\) grows, i.e. performance improves as the strength of privacy is relaxed, which agrees with how privacy should affect the performance of the MMDP. Output perturbation has a higher cost of privacy at \(\epsilon=15\) than input perturbation does at \(\epsilon=0.1\), indicating that even when the strength of privacy for output perturbation is relaxed, input perturbation performs better.
**Example 2** (Waypoint Guidance).: While this work has been concerned with MMDPs up to this point, we highlight that a single agent may also benefit from the application of privacy to reward functions. Specifically, a single agent can still be observed, which can reveal its sensitive reward function; thus there exists an inherent need for privacy in the single-agent case as well. To highlight the applicability to a single agent, consider a high-speed vehicle using waypoints to navigate towards a target. We consider an MDP whose states are a \(20\times 20\) grid of waypoints, as seen in Figure 5, with actions \(\mathcal{A}=\{\texttt{north},\texttt{south},\texttt{east},\texttt{west},\texttt{to target}\}\). We consider a 3 degree-of-freedom vehicle model, with waypoints that exist in the plane of the initial condition. The initial position of the vehicle is considered the first waypoint and thus the initial state of the MDP. The action for that state yields a new waypoint for the vehicle to navigate towards. Once the vehicle is closer to the new waypoint than the original, we say the new waypoint is now the current state of the vehicle, and the policy again commands the vehicle to navigate to a new waypoint. This continues until the vehicle is less than \(80\) km from the target, at which point the vehicle is commanded to navigate directly to the target as opposed to another waypoint. The optimal policy gives the optimal set of waypoints given an initial starting state. However, this optimal policy may reveal the agent's target state, which we consider to be sensitive. We then implement differential privacy for the reward function of this MDP using Algorithm 1. Note that since there is only one agent, the outputs of Algorithm 1 and Algorithm 2 are identical.
Trajectories generated using various \(\epsilon\) values are presented in Figure 6.
Fig. 3: Multi-agent Gridworld from Example 1 with (a) non-private rewards and (b) privatized rewards. The local state number is in black, and each agent’s reward for both agents taking the action \(a^{i}=\texttt{stay}\) at a given local state is colored green for the state with the max reward and red for all others. Even under privacy, the goal state-action pair remains unchanged in this example. Thus, even when perturbing rewards for privacy, the agents’ policy will drive them to the same terminal state in this example.
We simulate \(600\) privatized policies for each \(\epsilon\in[0.1,10]\). Treating the trajectory generated from the sensitive policy as the nominal trajectory, we assess the performance of the trajectories using the change in the cumulative acceleration commands, which we refer to as "control effort" and denote by \(\Delta c\), between the trajectories generated using the sensitive, non-private policy and the privatized policy. That is,
\[\Delta c=\int_{0}^{t_{f}}\|u\|_{2}^{2}dt-\int_{0}^{\tilde{t}_{f}}\|\tilde{u}\|_{2}^{2}dt,\]
where \(u\) is the commanded acceleration of the vehicle operating with a policy developed on the non-private rewards, and \(\tilde{u}\) is defined analogously for the vehicle operating with a policy developed on the private rewards. Here \(t_{f}\) is the flight duration of the vehicle when using a policy generated on the sensitive, non-private reward \(r\), and \(\tilde{t}_{f}\) is the flight duration when using a policy generated on the privatized rewards \(\tilde{r}\). Note that under privacy, trajectories may differ in length, which means that comparing the accelerations at each time step is infeasible and does not yield a meaningful comparison. As a result, we compare the total commanded accelerations over the entire trajectory, shown in Figure 7. With increased privacy, the policy commands the vehicle to off-nominal trajectory waypoints requiring larger overall control efforts. We see that \(\Delta c\) steadily declines up to \(\epsilon=2\), indicating that there is minimal performance gain from decreasing the strength of privacy any further, which implies that \(\epsilon=2\) is a reasonable strength of privacy for this example.
## VII Conclusion
In this work we have developed two methods for protecting reward functions in MMDPs from observers by using differential privacy. We have proved that both of these methods are \((\epsilon,\delta)\)-differentially private and identified input perturbation as the more tractable method. We also examined the accuracy versus privacy trade-off and the computational complexity of computing the performance loss due to privacy, and provided numerical simulations to show implementations of these methods. Future work will consider policy synthesis without a central aggregator and when agents can only observe their own local state and the local state of their neighbors.
# Accurate dynamics from self-consistent memory in stochastic chemical reactions with small copy numbers

Moshir Harsh, Peter Sollich (arXiv:2303.00029, http://arxiv.org/abs/2303.00029v2)
###### Abstract
We present a method that captures the fluctuations beyond mean field in chemical reactions in the regime of small copy numbers and hence large fluctuations, using self-consistently determined _memory_: by integrating information from the past we can systematically improve our approximation for the dynamics of chemical reactions. This memory emerges from a perturbative treatment of the effective action of the Doi-Peliti field theory for chemical reactions. By dressing only the response functions and by the self-consistent replacement of bare responses by the dressed ones, we show how a very small class of diagrams contributes to this expansion, with clear physical interpretations. From these diagrams, a large sub-class can be further resummed to infinite order, resulting in a method that is stable even for large values of the expansion parameter or equivalently large reaction rates. We demonstrate this method and its accuracy on single and multi-species binary reactions across a range of reaction constant values.
## 1 Introduction
An important problem with wide ranging applications, especially in biology and synthetic chemistry, is that of treating strong stochasticity in chemical reaction networks. This is most pronounced when the copy numbers of participating molecules are small, typically of the order of a few molecules [1, 2, 3, 4]. Indeed, thinking of copy number fluctuations as Poissonian, the mean \(\bar{n}\) and the standard deviation \(\sqrt{\bar{n}}\) are of the same order when \(\bar{n}=O(1)\), so that fluctuations can never be neglected. Equivalently, the time evolution of lower order moments of the copy number distributions is hierarchically coupled to higher order moments, and this hierarchy cannot be truncated by neglecting relative copy number fluctuations, or by treating them as small enough to be Gaussian.
Recent experiments in living cells, in _gene_ and _protein_ regulation networks [5, 6, 7, 8, 9] have shown the importance of intrinsic stochasticity originating from the operation of these biochemical networks in the limit of small copy number of participating molecules. Indeed, this limit is the natural regime of operation for many of these networks. For example there are only a few copies of each gene coding for a protein in each cell, which transcribe a few copies of mRNA that are later translated into proteins. Sometimes even a few copies of a signalling molecule or transcription factor are enough to trigger signalling pathways in a cell [10]. This regime of small copy numbers is dominated by large fluctuations in the time courses of the molecule numbers, where mean field or mass action kinetics, which is the standard description of chemical reaction dynamics, becomes inaccurate [11, 12], and very few methods [13] exist that are able to accurately calculate the time-dependent moments of the stochastic process. Experiments have even
shown that random fluctuations in gene expression can lead to very different behaviours of otherwise identical cells [5, 7]. Understanding the dynamics in the regime of strong fluctuations is thus essential if we want to be able to infer the relevant biology from experimental measurements of such systems.
The challenges arising from strong fluctuations become even more acute for spatially resolved dynamics, i.e. reaction-diffusion systems. A common approach [14] to theoretically treat and make inferences from such systems is to divide space into small compartments, and model diffusion as molecules being destroyed in one compartment and created in a neighbouring one. In dilute mixtures, or generally for small enough compartments the number of molecules in each compartment is then always small enough and fluctuations again large. In the context of population dynamics the intrinsic stochasticity from such small populations can lead to features such as noise-induced Turing patterns that are absent in the deterministic limit [15]. Large fluctuations in the dynamics can also lead to extinction or fixation of stochastic populations [16, 17]. Similar rare events also play a role in epidemic models, affecting e.g. the distribution of outbreak sizes [18].
To track large fluctuations, the Chemical Master Equation (CME) is the widely accepted theoretical description for stochastic chemical reactions [19]. It gives the time evolution of the probability of the system to be in a certain state specified by the copy numbers of all species, starting from some initial distribution across states. However, analytical solutions to the CME are available only for very specific cases. Instead one usually has to rely on stochastic simulations such as the rejection-free Gillespie algorithm [20], which can exactly simulate and sample the underlying distribution. In the regime of large fluctuations, a very large number of such simulations have to be carried out to get accurate and reliable statistics, which becomes computationally extremely expensive, especially in the case of multiple reacting species where high-dimensional distributions need to be sampled. In addition, stochastic simulations do not allow one to extract a likelihood function for the probability of a given time course of copy numbers, given a set of reaction constants. Hence they cannot be used to infer such dynamical parameters, which is often an important step towards understanding e.g. the biological function of a reaction network.
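For later reference, the Gillespie algorithm is short to state; the following single-species Python sketch (our own illustration, using mass-action propensities of the form \(k\,n!/(n-r)!\) that are formalized in section 2) samples exact trajectories:

```python
import numpy as np

def gillespie(n0, reactions, t_end, rng):
    """Exact stochastic simulation for a single species.

    reactions: list of (k, r, s) -- propensity k * n!/(n-r)!, copy-number jump s - r.
    Returns jump times and the piecewise-constant copy-number trajectory.
    """
    t, n = 0.0, n0
    ts, ns = [0.0], [n0]
    while True:
        props = np.array([k * np.prod(np.arange(n, n - r, -1)) if n >= r else 0.0
                          for k, r, s in reactions])
        total = props.sum()
        if total == 0.0:                     # absorbing state reached
            break
        t += rng.exponential(1.0 / total)    # exponential waiting time to next event
        if t > t_end:
            break
        j = rng.choice(len(reactions), p=props / total)  # which reaction fires
        n += reactions[j][2] - reactions[j][1]
        ts.append(t); ns.append(n)
    return ts, ns

# Example: coagulation A + A -> A (k = 0.1, r = 2, s = 1) from n0 = 10 molecules
ts, ns = gillespie(10, [(0.1, 2, 1)], t_end=5.0, rng=np.random.default_rng(0))
```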
Even though analytical solutions of the CME are rare, a few landmark results exist. In particular Renyi [12] gave the time dependent solution of the \(A+B\to C\) binary reaction starting from deterministic initial conditions, and showed that the mass action kinetics description is only approximately valid and breaks down for small copy numbers. McQuarrie et al. [21] fully solved some other simple binary reactions starting from deterministic initial conditions using the technique of generating functions; we refer the reader to the review by McQuarrie [11] for a detailed historical overview of the development of chemical reaction stochastics. The stochastic solution of the Michaelis-Menten enzyme dynamics with a single enzyme was given by Aranyi and Toth [22]. As regards the simpler question of steady state distributions, a straightforwardly solvable case is that of a single species where only one molecule can be created or destroyed in a reaction [23]. Steady state solutions are also available for a number of multi-species systems with binary reactions such as gene regulation or multi-enzyme Michaelis-Menten reactions; we refer the reader to [13] for a full list.
Jahnke and Huisinga [24] solved the CME for a reaction network with an arbitrary number of species with time dependent rates but undergoing only birth, death or conversion reactions with only one reactant and one product molecule. These results were based on guessing the form of the solution, using carefully crafted transformations or exploiting special properties of unary reaction networks, which preserve the Poisson character of copy number distributions. These results are thus not generalizable to include other reactions.
Given the scarcity of exact solutions, approximate approaches to the CME have been widely explored [13]. One popular approximation scheme consists of approximating the CME, a continuous-time Markov jump process on a discrete state space of copy numbers, by a diffusion process for concentrations that can take any non-negative real value. Such a description is obtained by a second order expansion of the CME, originally developed by Kramers [25] and Moyal [26], and yields the so-called Chemical Fokker-Planck equation, with an associated Chemical Langevin equation. The simulation of these equations can be easier than the CME but many challenges remain in the low copy number regime, including the lack of a natural boundary condition at zero copy number and the consequent appearance of imaginary noise terms in the Chemical Langevin Equation. Schnoerr et al [27] showed that some of these problems can be circumvented by formally extending the state space of the Chemical Langevin Equation to complex-valued concentrations.
Another popular approximation method called the system size expansion is due to van Kampen [28] and based on a perturbative expansion of the CME in inverse system volume. This leads to mean concentrations that are given by the solutions of the macroscopic rate equations, with fluctuations scaling as the inverse square root of the system volume. To the leading order in this small parameter the fluctuations are Gaussian, and only retaining these yields the so-called Linear Noise Approximation (LNA) [29, 30]; in the limit of small copy numbers this can have severe deviations from the solution of the underlying CME [32]. Keeping the next order in the expansion of the CME results in Effective Mesoscopic Rate Equations (EMRE) [31] that, unlike the LNA, contain corrections to the mean copy numbers. Higher order corrections in inverse system size can be obtained but are quite cumbersome and computationally expensive to calculate; diagrammatic perturbation theory can be used to make the expansion easier to use [33]. A related technique is the WKB approximation [17], which again studies large systems but instead of the typical Gaussian fluctuations concentrates on large deviations that are exponentially rare in system size.
Moment closure approximations, widely deployed for various stochastic systems, remain one of the most commonly used techniques to deal with stochastic reaction systems [13]. The starting point for these is the hierarchically coupled system of equations for the time evolution of the moments (mean concentrations, mean square concentrations etc.) that can be derived from the CME. In these equations the time evolution of lower order moments depends on higher order moments so they have to be _closed_ by hand at some order, e.g. by assuming some form for the \(n^{\text{th}}\) moments or by setting cumulants beyond some order to zero. The most common approach is the normal moment closure [34], which assumes higher than second order cumulants to be zero, thus effectively imposing a Gaussian form of the distribution of the concentrations, for all times. These approximations, however, have well-documented problems [13] such as concentrations that diverge in time or become negative, negative variances etc.
Vastola [35] has recently used the Doi-Peliti path integral approach to re-derive the results of Jahnke and Huisinga [24], including also arbitrary unary reactions in his analysis. We will also use the Doi-Peliti path integral technique in this work, but will then deploy the tools of statistical field theory and show that under certain approximation, we can obtain very accurate and general results that allows us to treat generic reaction networks made up of binary reactions in addition to arbitrary unary reactions.
In this paper we develop novel approximation methods for chemical reaction networks in the challenging regime of small copy numbers or, equivalently, large fluctuations. The starting point is the Doi-Peliti [36, 37] path integral, which exploits the correspondence between classical statistical systems and quantum systems by using ladder operators to represent the creation and annihilation of molecules. We then apply diagrammatic perturbation theory around the Gaussian part of the path integral, which can capture all unary reactions and corresponds exactly to Poissonian copy number distributions. The perturbation expansion is formally set up using the rates of binary and higher order [38] reactions as small parameters. Nonetheless the applicability of our approach is not limited to this parameter regime because we capture many non perturbative effects by resummation and self-consistency; we will justify this theoretically and also demonstrate it numerically.
Our method relies on identifying a series of key diagrams in the perturbation expansion that can be efficiently resummed. In addition, we self-consistently replace the "bare" response functions that appear by their perturbatively corrected or "dressed" versions. The resulting approximation, which we call self-consistent bubble resummation (SBR), allows us to capture many non-perturbative effects, giving the method a broad scope of applicability. We focus throughout on the first and the second copy number moments, namely the means and the two-time correlation and response functions of the process and show how these can be very accurately calculated at a computational cost that is small relative to that of stochastic simulations. We benchmark all of our results against numerically exact solutions of the underlying chemical master equation.
This manuscript is organized as follows: in section 2 we start from the chemical master equation and construct the Doi-Peliti path integral for a generic chemical reaction network. We demonstrate also some properties of the path integral that will be relevant later in our analysis. We then move on to identify a _baseline_ set of reactions around which we will set up the perturbation theory. In section 3 we introduce the effective action and vertex functions corresponding to the path integral description, and derive a general equation for calculating the time evolution of the mean copy numbers. In section 4 we consider as a paradigmatic example a binary single species reaction \(A+A\to A\). We demonstrate the derivation of our approximation method for this case, and show how it performs in numerical tests against the
exact CME benchmark. We follow this by explaining how our approach extends to the case where other single species reactions are included. In section 5 we extend the method to the multi-species binary reaction \(A+B\to C\) and again demonstrate its numerical performance. We conclude with a summary and outlook in section 6.
## 2 Coherent state path integral for reaction networks
We consider a system of \(N\) molecular species \(X_{i}\) indexed by \(i=1,2,\ldots,N\) with \(n_{i}\) denoting the number of molecules of species \(i\) in the system. The state of the system is given by specifying the number of molecules of all species, \(\boldsymbol{n}=(n_{1},n_{2},\ldots n_{N})\). We allow a general system of reactions where reaction \(\beta\) converts \(r_{1}^{\beta}\) copies of species \(X_{1}\) together with \(r_{2}^{\beta}\) copies of \(X_{2}\) etc. into \(s_{1}^{\beta}\) copies of \(X_{1}\), \(s_{2}^{\beta}\) copies of \(X_{2}\) etc., or in shorthand
\[\sum_{i=1}^{N}r_{i}^{\beta}X_{i}\stackrel{{ k_{\beta}}}{{ \longrightarrow}}\sum_{i=1}^{N}s_{i}^{\beta}X_{i} \tag{2.1}\]
Here \(k_{\beta}\) is the rate for the reaction, with units of inverse time. The order of the \(\beta^{\rm th}\) reaction [38] is defined to be \(\sum_{i}r_{i}^{\beta}\). We collect the \(r_{i}^{\beta}\) and \(s_{i}^{\beta}\) into vectors \(\boldsymbol{r}^{\beta}\) and \(\boldsymbol{s}^{\beta}\) for the reactant and product stoichiometry, respectively. The probability that this reaction will take place in a time interval \((t,t+dt)\) is given by \(f_{\beta}(\boldsymbol{n})\,dt\), where \(f_{\beta}\) is the microscopic _propensity function_, which depends on the state of the system \(\boldsymbol{n}\) as
\[f_{\beta}(\boldsymbol{n})=k_{\beta}\prod_{i}\frac{n_{i}!}{(n_{i}-r_{i}^{\beta })!} \tag{2.2}\]
The ratio of factorials takes care of the appropriate combinatorics, whereby e.g. for the reaction \(2\,X_{1}\to X_{2}\) the reaction probability is proportional to the number \(n_{1}(n_{1}-1)/2=n_{1}!/(n_{1}-2)!/2\) of pairs of \(X_{1}\) molecules that can react. The factor \(1/2\) that appears here compared to eq. (2.2), which in the general case would be \(1/\prod_{i}r_{i}^{\beta}!\), has been included in the definition of \(k_{\beta}\) to make the following expressions shorter.
To illustrate the notation above, a single chemical reaction where a molecule of A reacts with a B molecule to form a C molecule would be represented by
\[A+B\stackrel{{ k}}{{\rightarrow}}C \tag{2.3}\]
where \(r_{A}=1,r_{B}=1,s_{C}=1\) and \(r_{C}=s_{A}=s_{B}=0\), with reaction constant \(k_{\beta}=k\) and propensity function \(f_{\beta}(n_{A},n_{B},n_{C})=kn_{A}n_{B}\).
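As a small illustration (a sketch, not part of the original derivation), eq. (2.2) translates directly into code; recall that the symmetry factor \(1/\prod_{i}r_{i}^{\beta}!\) is absorbed into \(k_{\beta}\):

```python
import math

def propensity(k_beta, r_beta, n):
    """f_beta(n) = k_beta * prod_i n_i!/(n_i - r_i)!, cf. eq. (2.2)."""
    f = k_beta
    for n_i, r_i in zip(n, r_beta):
        if n_i < r_i:
            return 0.0              # not enough molecules for this reaction
        f *= math.perm(n_i, r_i)    # n_i!/(n_i - r_i)!
    return f

# A + B -> C with r = (1, 1, 0) recovers f = k * n_A * n_B as stated above
assert propensity(0.5, (1, 1, 0), (4, 3, 2)) == 0.5 * 4 * 3
```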
### Chemical Master Equation
Given a set of reactions \(\beta\) defined as above, the probability of the state of the system \(P(\boldsymbol{n},\tau)\) evolves according to the _Chemical Master Equation_ (CME) [19] given by
\[\frac{\partial P(\boldsymbol{n},\tau)}{\partial\tau}=\sum_{\beta}f_{\beta}( \boldsymbol{n}-\boldsymbol{s}^{\beta}+\boldsymbol{r}^{\beta})P(\boldsymbol{n }-\boldsymbol{s}^{\beta}+\boldsymbol{r}^{\beta},\tau)-\sum_{\beta}f_{\beta}( \boldsymbol{n})P(\boldsymbol{n},\tau) \tag{2.4}\]
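Although analytical solutions of the CME are rare, for a single species eq. (2.4) is simply a linear ODE for the vector of state probabilities and can be integrated numerically on a truncated state space; this is the kind of "numerically exact" benchmark referred to later. A minimal Python sketch (the truncation level \(n_{\max}\) is an assumption, to be chosen so that the neglected probability mass is negligible):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

def cme_generator(n_max, reactions):
    """Generator A of dP/dt = A P on the truncated state space {0, ..., n_max}.

    reactions: list of (k, r, s) with propensity k * n!/(n-r)! and jump s - r,
    matching the gain and loss terms of eq. (2.4) for a single species.
    """
    A = np.zeros((n_max + 1, n_max + 1))
    for k, r, s in reactions:
        for n in range(r, n_max + 1):
            f = k * np.prod(np.arange(n, n - r, -1))  # propensity f(n) = k n!/(n-r)!
            m = n + s - r
            if 0 <= m <= n_max:
                A[m, n] += f   # gain: probability flows from state n into state m
            A[n, n] -= f       # loss: probability leaves state n
    return A

# A + A -> A with k = 0.1, Poisson initial state with mean 10, evolved to t = 1:
n_max = 60
n = np.arange(n_max + 1)
p0 = poisson.pmf(n, 10.0)
pt = expm(cme_generator(n_max, [(0.1, 2, 1)]) * 1.0) @ p0
print((n * pt).sum())  # numerically exact mean copy number (up to truncation)
```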
Following the seminal work by Doi and Peliti [36, 37], one can - as the first step towards a path integral formulation - cast the master equation in a quantum mechanical "second quantized" form [39] by introducing annihilation and creation operators for species \(i\), \(\hat{a}_{i}\) and \(\hat{a}_{i}^{\dagger}\), respectively, which obey the following commutation relations
\[\left[\hat{a}_{i}^{\dagger},\hat{a}_{j}^{\dagger}\right]=[\hat{a}_{i},\hat{a} _{j}]=0\qquad\left[\hat{a}_{i},\hat{a}_{j}^{\dagger}\right]=\delta_{ij} \tag{2.5}\]
We introduce a ket \(|\boldsymbol{n}\rangle=|n_{1},n_{2},\ldots,n_{i},\ldots,n_{N}\rangle\) that defines the state of the system. The creation and annihilation operators act on the state ket \(|\boldsymbol{n}\rangle\) as (notice that the normalization differs from the standard choice used in quantum mechanics)
\[\hat{a}_{i}|\boldsymbol{n}\rangle = n_{i}\ |n_{1},n_{2},\ldots,n_{i}-1,\ldots,n_{N}\rangle \tag{2.6}\] \[\hat{a}_{i}^{\dagger}|\boldsymbol{n}\rangle = \ \ |n_{1},n_{2},\ldots,n_{i}+1,\ldots,n_{N}\rangle \tag{2.7}\]
The state \(|\mathbf{n}\rangle\) can therefore be obtained by acting on the zero or "vacuum" state \(|\mathbf{0}\rangle\) with the appropriate product of creation operators \(\hat{a}_{i}^{\dagger}\),
\[|\mathbf{n}\rangle=\prod_{i}(\hat{a}_{i}^{\dagger})^{n_{i}}|\mathbf{0}\rangle \tag{2.8}\]
The operator \(\hat{n}_{i}\) that counts the number of molecules of species \(i\) is given by
\[\hat{n}_{i}=\hat{a}_{i}^{\dagger}\hat{a}_{i} \tag{2.9}\]
To obtain the quantum mechanical form of the CME, one identifies the probability distribution \(P(\mathbf{n},\tau)\) across states with the vector \(|P(\tau)\rangle=\sum_{\mathbf{n}}P(\mathbf{n},\tau)|\mathbf{n}\rangle\). As the CME is a linear equation for the \(P(\mathbf{n},\tau)\), it can then equivalently be written in the form of an imaginary time Schrodinger equation,
\[\partial_{\tau}|P(\tau)\rangle=\hat{H}|P(\tau)\rangle \tag{2.10}\]
with an effective Hamilton operator \(\hat{H}\). Comparing with the original CME, one can read off \(\hat{H}\) (see [39]) as
\[\hat{H}=\sum_{\beta}k_{\beta}\left[\prod_{i}(\hat{a}_{i}^{\dagger})^{s^{ \beta}_{i}}(\hat{a}_{i})^{r^{\beta}_{i}}-\prod_{i}(\hat{a}_{i}^{\dagger})^{r^{ \beta}_{i}}(\hat{a}_{i})^{r^{\beta}_{i}}\right] \tag{2.11}\]
which e.g. for the system defined in eq. (2.3) would reduce to
\[\hat{H}=k\left[\hat{a}_{C}^{\dagger}-\hat{a}_{A}^{\dagger}\hat{a}_{B}^{\dagger }\right]\hat{a}_{A}\hat{a}_{B} \tag{2.12}\]
The formal solution for the time evolution of the system to time \(t\) is then generally given by
\[|P(t)\rangle=e^{\hat{H}t}|P(0)\rangle \tag{2.13}\]
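As a numerical aside (our own sketch, not part of the derivation), the ladder operators of eqs. (2.6)-(2.7) can be represented as matrices in a truncated number basis; the Hamiltonian of eq. (2.11) then reproduces the CME generator, so that eq. (2.13) can be evaluated by matrix exponentiation:

```python
import numpy as np

n_max = 60
# Doi normalization (eqs. 2.6, 2.7): a|n> = n|n-1>,  a_dag|n> = |n+1>
a = np.diag(np.arange(1.0, n_max + 1), k=1)    # entry (n-1, n) = n
a_dag = np.diag(np.ones(n_max), k=-1)          # entry (n+1, n) = 1

def H_single_species(k, r, s):
    # H = k [ (a_dag)^s a^r - (a_dag)^r a^r ], the single-species case of eq. (2.11)
    ar = np.linalg.matrix_power(a, r)
    return k * (np.linalg.matrix_power(a_dag, s) @ ar
                - np.linalg.matrix_power(a_dag, r) @ ar)

# For A + A -> A this matrix coincides (away from the truncation boundary) with
# the CME generator: H[n-1, n] = k n (n-1) and H[n, n] = -k n (n-1).
H = H_single_species(0.1, 2, 1)
assert np.isclose(H[9, 10], 0.1 * 10 * 9) and np.isclose(H[10, 10], -0.1 * 10 * 9)
```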
We will consider throughout an initial state in which the copy number of each molecular species \(i\) has independent Poisson fluctuations around a mean \(\bar{n}_{0i}\). Such states can be written in the simple form
\[|P(0)\rangle=\sum_{\mathbf{n}}\prod_{i}\left(e^{-\bar{n}_{0i}}\frac{\bar{n}_{0i}^ {n}}{n_{i}!}\right)|\mathbf{n}\rangle=e^{\sum_{i}\bar{n}_{0i}(\hat{a}_{i}^{\dagger }-1)}|\mathbf{0}\rangle \tag{2.14}\]
where the second version follows from eq. (2.8).
### The path integral and the action
To construct the path integral, one can split the quantum mechanical time evolution by \(e^{\hat{H}t}\) into small discrete time intervals \(\Delta t\), and insert at each time step resolutions of the identity operator, expressed as integrals over an (overcomplete) set of coherent states [36, 37, 39]. We generalize this construction using two sets of generating fields \(\theta\) and \(\tilde{\theta}\) defined for each species at each time step, namely \(\theta_{i}(\tau),\tilde{\theta}_{i}(\tau)\). Inserting then factors of
\[e^{\sum_{i}\theta_{i}(\tau)\Delta t\,\hat{a}_{i}}\,\mathbb{I}e^{\sum_{i}\tilde {\theta}_{i}(\tau)\Delta t\,(\hat{a}_{i}^{\dagger}-1)} \tag{2.15}\]
at each discretized time \(\tau\) allows one to generate averages such as means, correlation functions etc. by taking derivatives w.r.t the generating fields. Defining here the \(\tilde{\theta}\) factor with \((\hat{a}^{\dagger}-1)\) rather than \(\hat{a}^{\dagger}\) will simplify a number of formulas below. The steps of the method are detailed in appendix A.4 and lead to the following path integral for the _generating function_ or _partition function_,
\[\mathcal{Z}(\tilde{\theta},\theta)=\lim_{\Delta t\to 0}\mathcal{N}^{-1}\int \prod_{i,\tau}d\phi_{i}^{*}(\tau)d\phi_{i}(\tau)e^{S(\phi^{*},\phi)} \tag{2.16}\]
The normalization constant \(\mathcal{N}\) ensures that \(\mathcal{Z}(0,0)=1\). The integrations are over the (time-discretized) paths of the complex fields \(\phi_{i}(\tau)\) defining the coherent states. The action \(S\) depends on these fields and their complex conjugates \(\phi_{i}^{*}(\tau)\) and reads
\[\begin{split} S\left(\phi^{*},\phi\right)&=\Delta t \sum_{\tau=\Delta t}^{t}H(\mathbf{\phi}^{*}(\tau),\mathbf{\phi}(\tau_{-}))+\sum_{i} \Big{\{}\phi_{i}(t)+\bar{n}_{0i}(\phi_{i}^{*}(0)-1)-\phi_{i}(0)\phi_{i}^{*}(0) \\ &-\Delta t\sum_{\tau=\Delta t}^{t}\phi_{i}^{*}(\tau)\Delta_{\tau }\phi_{i}(\tau)+\Delta t\sum_{\tau=0}^{t}\left[\tilde{\theta}_{i}(\tau)\left( \phi_{i}^{*}(\tau)-1\right)+\theta_{i}(\tau)\phi_{i}(\tau)\right]\Big{\}} \end{split} \tag{2.17}\]
where \(\tau_{-}\equiv\tau-\Delta t\), \(t\) is the total time of the dynamics considered, and we use \(\Delta_{\tau}\phi_{i}(\tau)=\frac{1}{\Delta t}(\phi_{i}(\tau)-\phi(\tau_{-}))\) as a shorthand for the discrete time derivative. \(H(\mathbf{\phi}^{*}(\tau),\mathbf{\phi}(\tau_{-}))\) is obtained from the Hamiltonian \(\hat{H}\) in eq. (2.11) by replacing \(\hat{a}^{\dagger}_{i}\) by \(\phi^{*}_{i}(\tau)\) and \(\hat{a}\) by \(\phi_{i}(\tau_{-})\).
One can now apply a _Doi-shift_[39, 40] by replacing \(\phi^{*}_{i}(\tau)=1+\tilde{\phi}_{i}(\tau)\). This turns out to make the average of \(\mathbf{\tilde{\phi}}=0\) and can be justified by appropriate rearrangements in the partition function before the coherent states are introduced [39]. If we continue to use the same label for the function \(H\) evaluated at \(\mathbf{\phi}^{*}=1+\mathbf{\tilde{\phi}}\), then in terms of the new variables the action reads
\[\begin{split} S(\tilde{\phi},\phi)&=\Delta t\sum_{ \tau=\Delta t}^{t}H(\mathbf{\tilde{\phi}}(\tau),\mathbf{\phi}(\tau_{-}))+\sum_{i} \Big{\{}\bar{n}_{0i}\tilde{\phi}_{i}(0)-\phi_{i}(0)\tilde{\phi}_{i}(0)\\ &-\Delta t\sum_{\tau=\Delta t}^{t}\tilde{\phi}_{i}(\tau)\Delta_{ \tau}\phi_{i}(\tau)+\Delta t\sum_{\tau=0}^{t}\Big{[}\tilde{\theta}_{i}(\tau) \tilde{\phi}_{i}(\tau)+\theta_{i}(\tau)\phi_{i}(\tau)\Big{]}\,\Big{\}}\end{split} \tag{2.18}\]
We will work in discrete time and only take the continuous time limit in the final equations of motion, but one can also take this limit in the expression for the action to obtain
\[\begin{split} S(\tilde{\phi},\phi)&=\int_{0}^{t}d \tau\,H(\mathbf{\tilde{\phi}}(\tau),\mathbf{\phi}(\tau_{-}))+\sum_{i}\Big{(}\bar{n}_{ 0i}\tilde{\phi}_{i}(0)-\phi_{i}(0)\tilde{\phi}_{i}(0)\\ &\qquad\qquad\qquad\qquad\qquad+\int_{0}^{t}d\tau\left[-\tilde{ \phi}_{i}(\tau)\partial_{\tau}\phi_{i}(\tau)+\tilde{\theta}_{i}(\tau)\tilde{ \phi}_{i}(\tau)+\theta_{i}(\tau)\phi_{i}(\tau)\right]\Big{)}\end{split} \tag{2.19}\]
### Response and correlation functions
Once the path integral is defined over the fields \(\mathbf{\phi}\) and \(\mathbf{\tilde{\phi}}\), we need to show how the average values of observables such as mean copy numbers of species \(i\), its variance or other two time quantities are related to the statistics of these fields. We start by taking derivatives w.r.t. \(\tilde{\theta}_{i}(\tau_{+})\) (with \(\tau_{+}\equiv\tau+\Delta t\)) and \(\theta_{i}(\tau)\) in eq. (A.18) such that the operators are _normal ordered_,
\[\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau)}\mathcal{Z}\bigg{|}_{\tilde{\theta},\theta=0}=\langle\mathbf{1}|\hat{a}^{\dagger}_{i}e^{\hat{H}\Delta t}\hat{a}_{i}e^{\hat{H}\tau}|P(0)\rangle=\langle n_{i}(\tau)\rangle \tag{2.20}\]
where \(n_{i}(\tau)\) is the copy numbers of species \(i\) at time \(\tau\). The last equality applies in the limit \(\Delta t\to 0\), where the short-time propagator \(e^{\hat{H}\Delta t}\) can be ignored. In the path integral one can take the same derivatives, which gives
\[\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau)}\mathcal{Z}\bigg{|}_{\tilde{\theta},\theta=0}=\langle[\tilde{\phi}_{i}(\tau_{+})+1]\phi_{i}(\tau)\rangle \tag{2.21}\]
This average simplifies to \(\langle\phi_{i}(\tau)\rangle\) because of a _causality property_ of the path integral: any average of a product of \(\phi\) or \(\tilde{\phi}\) fields vanishes when the last factor - the one associated with the latest of all times that occur - is a \(\tilde{\phi}\).
To see this causality property, consider a reaction where a molecule of species \(i\) is created spontaneously with rate \(k_{1i}\). From eq. (2.11), this corresponds to a term \(k_{1i}(a^{\dagger}_{i}-1)\) in \(\hat{H}\), and hence, generalizing to a time-dependent creation rate, to a contribution \(\Delta t\sum_{\tau}k_{1i}(\tau)\tilde{\phi}_{i}(\tau)\) in the action. Comparing this with the generating term \(\Delta t\sum_{\tau}\tilde{\theta}_{i}(\tau)\tilde{\phi}_{i}(\tau)\) from the \(\tilde{\theta}_{i}(\tau)\)-field shows that, whenever we take derivatives of the partition function, we have the identity
\[\frac{1}{\Delta t}\frac{\partial}{\partial k_{1i}(\tau)}=\frac{1}{\Delta t} \frac{\partial}{\partial\tilde{\theta}_{i}(\tau)} \tag{2.22}\]
Now any product \(f_{<\tau}\) of fields evaluated at times before \(\tau\) can be generated by a sequence of appropriate derivatives of \(\mathcal{Z}\). Taking then an additional derivative as in our last identity and setting the generating fields to zero afterwards shows that
\[\frac{1}{\Delta t}\frac{\partial}{\partial k_{1i}(\tau)}\langle f_{<\tau} \rangle=\langle\tilde{\phi}_{i}(\tau)f_{<\tau}\rangle \tag{2.23}\]
However, the l.h.s. vanishes by causality - the average of \(f_{<\tau}\) cannot depend on a creation rate at a later time - and therefore so does the r.h.s., as claimed.
Summarizing so far, then, and introducing the symbol \(\mu\) for the means of our fields, we have
\[\begin{split}\mu_{i}(\tau)&=\langle\phi_{i}(\tau) \rangle=\langle n_{i}(\tau)\rangle\\ \tilde{\mu}_{i}(\tau)&=\langle\tilde{\phi}_{i}(\tau )\rangle=0\end{split} \tag{2.24}\]
The second line follows again from the causality property. In fact, applying the property inductively, one sees that the average of any product of \(\tilde{\phi}\)-factors vanishes, e.g. \(\langle\tilde{\phi}(\tau)\tilde{\phi}(\tau^{\prime})\rangle=0\ \forall\ \tau,\tau^{\prime}\).
Similarly calculating the fourth order derivatives in terms of the number operator and from the path integral we have for \(\tau^{\prime}<\tau_{-}\),
\[\begin{split}\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau)}\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau^{\prime}_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau^{\prime})}\mathcal{Z}\Bigg{|}_{\tilde{\theta},\theta=0}&=\langle n_{i}(\tau)n_{i}(\tau^{\prime})\rangle\\ &=\langle[\tilde{\phi}_{i}(\tau_{+})+1]\phi_{i}(\tau)[\tilde{\phi}(\tau^{\prime}_{+})+1]\phi_{i}(\tau^{\prime})\rangle\end{split} \tag{2.25}\]
while the analogue for \(\tau^{\prime}=\tau\) is
\[\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau)}\left(\frac{1}{\Delta t}\frac{\partial}{\partial\tilde{\theta}_{i}(\tau_{+})}+1\right)\frac{1}{\Delta t}\frac{\partial}{\partial\theta_{i}(\tau)}\mathcal{Z}\Bigg{|}_{\tilde{\theta},\theta=0}=\langle n_{i}^{2}(\tau)\rangle-\langle n_{i}(\tau)\rangle=\langle[\tilde{\phi}_{i}(\tau_{+})+1]^{2}\phi_{i}^{2}(\tau)\rangle \tag{2.26}\]
These can now be linked to the connected response and correlation functions for each species \(i\),
\[\begin{split} R_{i}(\tau,\tau^{\prime})&=\langle \phi_{i}(\tau)\tilde{\phi}_{i}(\tau^{\prime})\rangle-\langle\phi_{i}(\tau) \rangle\langle\tilde{\phi}_{i}(\tau^{\prime})\rangle=\langle\delta\phi_{i}( \tau)\delta\tilde{\phi}_{i}(\tau^{\prime})\rangle\\ C_{i}(\tau,\tau^{\prime})&=\langle\phi_{i}(\tau) \phi_{i}(\tau^{\prime})\rangle-\langle\phi_{i}(\tau)\rangle\langle\phi_{i}( \tau^{\prime})\rangle=\langle\delta\phi_{i}(\tau)\delta\phi_{i}(\tau^{\prime}) \rangle\end{split} \tag{2.27}\]
The corresponding disconnected functions do not have the subtraction of the product of the averages; note though that since \(\langle\tilde{\phi}\rangle=0\), the connected and the disconnected response functions are identical. On the r.h.s. of eq. (2.26) we have, by causality again, the disconnected correlator \(\langle\phi_{i}^{2}(\tau)\rangle=C_{i}(\tau,\tau)+\mu_{i}(\tau)^{2}\). Subtracting the squared mean shows
\[\mathrm{Var}(n_{i}(\tau))-\langle n_{i}(\tau)\rangle=C_{i}(\tau,\tau) \tag{2.28}\]
The l.h.s. vanishes for Poissonian copy number fluctuations, so a non-zero equal-time correlator of the \(\phi\)-field indicates deviations from Poisson statistics.
For the \(\tau^{\prime}<\tau_{-}\) case in eq. (2.25) we have from causality
\[\langle n_{i}(\tau)n_{i}(\tau^{\prime})\rangle=\langle\phi_{i}(\tau)\phi_{i} (\tau^{\prime})\rangle+\langle\phi_{i}(\tau)\tilde{\phi}(\tau^{\prime}_{+}) \phi_{i}(\tau^{\prime})\rangle \tag{2.29}\]
or
\[\langle\delta n_{i}(\tau)\delta n_{i}(\tau^{\prime})\rangle=\langle n_{i}(\tau )n_{i}(\tau^{\prime})\rangle-\langle n_{i}(\tau)\rangle\langle n_{i}(\tau^{ \prime})\rangle=C_{i}(\tau,\tau^{\prime})+\langle\phi_{i}(\tau)\tilde{\phi}( \tau^{\prime}_{+})\phi_{i}(\tau^{\prime})\rangle \tag{2.30}\]
where the copy number fluctuations are defined as \(\delta n(\tau)=n(\tau)-\mu(\tau)\). The final three-point correlator cannot be simplified in general, but for Gaussian field statistics one can use Wick's theorem and causality to express it as \(\langle\phi_{i}(\tau)\tilde{\phi}(\tau^{\prime}_{+})\phi_{i}(\tau^{\prime}) \rangle=\langle\phi_{i}(\tau)\tilde{\phi}(\tau^{\prime}_{+})\rangle\langle\phi_ {i}(\tau^{\prime})\rangle\) so that
\[\langle\delta n_{i}(\tau)\delta n_{i}(\tau^{\prime})\rangle=R_{i}(\tau,\tau^{ \prime})\mu_{i}(\tau^{\prime})+C_{i}(\tau,\tau^{\prime})\quad\text{ for }\tau^{\prime}<\tau \tag{2.31}\]
### Baseline action and Gaussian path integral
Equipped with the path integral for describing chemical reactions, we now look at a concrete set of chemical reactions that we call the _baseline_ because the Hamiltonian of eq. (2.11) associated with these reactions will be quadratic in creation and annihilation operators. We will treat other reactions as
perturbations of this quadratic baseline Hamiltonian. If we consider the set of only creation and (unary) destruction reactions for each of the species with the following rate constants
\[\emptyset\xrightarrow{k_{1i}}X_{i}\qquad X_{i}\xrightarrow{k_{2i}}\emptyset \tag{2.32}\]
then from eq. (2.11) with the operators replaced by the fields, the Doi-shifted Hamiltonian in the action is
\[H_{0}(\vec{\phi},\mathbf{\phi})=\sum_{i}\left(k_{1i}\tilde{\phi}_{i}-k_{2i}\tilde{ \phi}_{i}\phi_{i}\right) \tag{2.33}\]
It is clearly decoupled across the different species and only has linear and quadratic terms in \(\phi,\tilde{\phi}\). We will later use it as a baseline for perturbation theory (with the linear terms omitted), hence the notation \(H_{0}\). The corresponding baseline action \(S_{0}\) from eq. (2.18) becomes
\[S_{0}(\tilde{\phi},\phi) = \sum_{i}\left\{\bar{n}_{0i}\tilde{\phi}_{i}(0)-\phi_{i}(0)\tilde{ \phi}_{i}(0)+\Delta t\sum_{\tau=\Delta t}^{t}\left[-\tilde{\phi}_{i}(\tau) \Delta_{\tau}\phi_{i}(\tau)+k_{1i}\tilde{\phi}_{i}(\tau)-k_{2i}\tilde{\phi}_{i }(\tau)\phi_{i}(\tau_{-})\right]\right. \tag{2.34}\] \[+\left.\Delta t\sum_{\tau=0}^{t}\left[\tilde{\theta}_{i}(\tau) \tilde{\phi}_{i}(\tau)+\theta_{i}(\tau)\phi_{i}(\tau)\right]\frac{}{}\right\}\]
Because the path integral is Gaussian, we can calculate the means of the fields by extremizing the action
\[\left.\frac{\partial S_{0}}{\partial\tilde{\phi}_{i}(\tau)}\right|_{\begin{subarray} {c}\phi_{i}(\tau)=\mu_{i}(\tau)\\ \phi_{i}(\tau)=\tilde{\mu}_{i}(\tau)\end{subarray}}=0\quad\text{and}\quad\left. \frac{\partial S_{0}}{\partial\phi_{i}(\tau)}\right|_{\begin{subarray}{c} \phi_{i}(\tau)=\mu_{i}(\tau)\\ \tilde{\phi}_{i}(\tau)=\tilde{\mu}_{i}(\tau)\end{subarray}}=0 \tag{2.35}\]
The derivatives with respect to \(\tilde{\phi}_{i}(\tau)\) for \(0<\tau\leq t\) give the equations of motion for the means as
\[\mu_{i}(\tau)-\mu_{i}(\tau-\Delta t)=\Delta t\left(k_{1i}-k_{2i}\mu_{i}(\tau- \Delta t)+\tilde{\theta}_{i}(\tau)\right) \tag{2.36}\]
while for \(\tau=0\) one obtains the initial condition
\[\mu_{i}(0)=\bar{n}_{0i}+\tilde{\theta}_{i}(0)\Delta t \tag{2.37}\]
Conversely, the \(\phi_{i}(\tau)\) derivatives yield for \(0\leq\tau<t\)
\[\tilde{\mu}_{i}(\tau)-\tilde{\mu}_{i}(\tau+\Delta t)=\Delta t\left(-k_{2i} \tilde{\mu}_{i}(\tau+\Delta t)+\theta_{i}(\tau)\right) \tag{2.38}\]
and for \(\tau=t\) one obtains a final condition
\[\tilde{\mu}_{i}(t)=\theta_{i}(t)\Delta t \tag{2.39}\]
The equations for the physical means thus have to be solved forwards in time as expected, while the ones for the conjugate means are solved backwards starting from the final condition.
The physical problem corresponds to zero values of the generating fields, for which we recover the initial condition that species \(i\) has initial mean \(\bar{n}_{0i}\) and the fact that \(\tilde{\mu}_{i}(\tau)=0\ \forall\ \tau\) as expected. In the continuous time limit one also obtains for the physical means the expected equation of motion
\[\partial_{\tau}\mu_{i}(\tau)=k_{1i}-k_{2i}\mu_{i}(\tau) \tag{2.40}\]
with two terms on the r.h.s. reflecting creation and destruction of particles of species \(i\). This equation is also valid if we generalize to time-dependent creation rates \(k_{1i}(\tau)\)
\[\partial_{\tau}\mu_{i}(\tau)=k_{1i}(\tau)-k_{2i}\mu_{i}(\tau) \tag{2.41}\]
and gives the solution
\[\mu_{i}(\tau)=\bar{n}_{0i}e^{-k_{2i}\tau}+\int_{0}^{\tau}d\tau^{\prime}e^{-k_{ 2i}(\tau-\tau^{\prime})}k_{1i}(\tau^{\prime}) \tag{2.42}\]
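For instance, for a constant creation rate \(k_{1i}\) the integral can be evaluated explicitly, giving the familiar exponential relaxation towards the steady state value \(k_{1i}/k_{2i}\):

\[\mu_{i}(\tau)=\bar{n}_{0i}\,e^{-k_{2i}\tau}+\frac{k_{1i}}{k_{2i}}\left(1-e^{-k_{2i}\tau}\right)\]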
We can calculate the response function using the relation demonstrated in eq. (2.23), which incidentally has a direct analogue for the Martin-Siggia-Rose-Jansen-de Dominicis (MSRDJ) path integral [41, 42, 43, 44]:
\[R_{0i}(\tau,\tau^{\prime})=\langle\phi(\tau)\tilde{\phi}(\tau^{\prime}) \rangle_{0}=\frac{1}{\Delta t}\frac{\partial\langle\phi_{i}(\tau)\rangle_{0}}{ \partial k_{1i}(\tau^{\prime})}=\frac{1}{\Delta t}\frac{\partial\mu_{i}(\tau)} {\partial k_{1i}(\tau^{\prime})} \tag{2.43}\]
where all averages are evaluated at zero generating fields, \(\tilde{\theta}=\theta=0\), and the zero subscripts indicate evaluation within the baseline path integral. In the continuous time limit the derivative becomes a functional derivative w.r.t. \(k_{1i}(\tau^{\prime})\) and one finds explicitly from eq. (2.42)
\[R_{0i}(\tau,\tau^{\prime})=e^{-k_{2i}(\tau-\tau^{\prime})}\Theta(\tau-\tau^{ \prime}) \tag{2.44}\]
with \(\Theta(\cdot)\) the Heaviside step function. This response function is causal as expected,
\[R_{0i}(\tau,\tau^{\prime})=0\ \mbox{if}\ \tau<\tau^{\prime} \tag{2.45}\]
i.e. the real field \(\phi\) needs to be ahead in time of the conjugate field \(\tilde{\phi}\), and its equal-time limit is unity, \(\lim_{\tau^{\prime}\to\tau^{-}}R_{0}(\tau,\tau^{\prime})=1\). We note for later also the operator inverse of the response function
\[R_{0i}^{-1}(\tau,\tau^{\prime})=(\partial_{\tau}+k_{2i})\delta(\tau-\tau^{ \prime}) \tag{2.46}\]
For a direct calculation of the response function in discrete time by inversion of the precision matrix of the Gaussian path integral, see appendix A.5.
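Incidentally, eqs. (2.44) and (2.46) can be checked against each other directly in continuous time: acting with \(R_{0i}^{-1}\) on \(R_{0i}\) gives

\[(\partial_{\tau}+k_{2i})\,e^{-k_{2i}(\tau-\tau^{\prime})}\Theta(\tau-\tau^{\prime})=e^{-k_{2i}(\tau-\tau^{\prime})}\,\delta(\tau-\tau^{\prime})=\delta(\tau-\tau^{\prime})\]

since the time derivative acting on the exponential cancels the \(k_{2i}\) term, and only the derivative of the step function survives.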
We consider finally the correlation functions calculated within our baseline path integral. These are zero because the baseline action only has quadratic terms of the form \(\tilde{\phi}\phi\) (see appendix A.5), i.e. the _bare correlation functions_ are
\[C_{0i}(\tau,\tau^{\prime})=\langle\phi_{i}(\tau)\phi_{i}(\tau^{\prime}) \rangle_{0}-\langle\phi_{i}(\tau)\rangle_{0}\langle\phi_{i}(\tau^{\prime}) \rangle_{0}=0 \tag{2.47}\]
In our subsequent analysis we will consider only the bare correlation function, that is, we will approximate \(C_{i}\) as generally defined in eq. (2.27) to be always equal to \(C_{0i}\). From eq. (2.28) this implies that \(\mbox{Var}(n_{i}(\tau))=\langle n_{i}(\tau)\rangle\), so we are then effectively approximating the variance of every copy number by its value for a Poisson distribution. The full distribution can in fact also be shown to be Poissonian for our baseline path integral (see appendix A.7 for a proof). In the general case \(C_{i}(\tau,\tau)\) quantifies the deviation from this behaviour and our approximation is valid if \(C_{i}(\tau,\tau)\ll\langle n_{i}(\tau)\rangle\); see section 4.7 and fig. 4 for a numerical test of this approximation.
## 3 Interactions and effective action
We now consider higher order reactions. The associated "interacting" Hamiltonian \(H_{\rm int}\) will no longer be quadratic and we will treat it as a perturbation to our baseline \(H_{0}\), introducing a parameter \(\alpha\) to set up a perturbation theory:
\[H_{\alpha}=H_{0}+\alpha H_{\rm int} \tag{3.1}\]
We will call the associated action \(S_{\alpha}\). To simplify the perturbation theory it is useful to have a baseline with zero means. We therefore define \(H_{0}\) and \(S_{0}\) from here on to contain only the _quadratic_ terms from the baseline Hamiltonian and action, respectively. Any linear terms such as \(\int d\tau\,k_{1i}\tilde{\phi}_{i}(\tau)\) from particle creation will be included in \(H_{\rm int}\) and treated perturbatively.
We now need a formalism to calculate means and response functions within the interacting, non-Gaussian path integral. A useful starting point is the generating function of the connected \(n\)-point functions \({\cal W}\), also called the free energy and given by
\[{\cal W}(\tilde{\theta},\theta)=\ln{\cal Z}(\tilde{\theta},\theta) \tag{3.2}\]
This is a function of the set of generating fields \(\tilde{\theta}\) and \(\theta\) for all species \(i\) and all times \(\tau\). The one- and two-point functions are just the by now familiar field means and response and correlation functions
\[\langle\phi_{i}(\tau)\rangle=\frac{\delta{\cal W}(\tilde{\theta},\theta)}{ \delta\theta_{i}(\tau)},\quad\langle\tilde{\phi}_{i}(\tau)\rangle=\frac{ \delta{\cal W}(\tilde{\theta},\theta)}{\delta\tilde{\theta}_{i}(\tau)} \tag{3.3}\]
\[R_{ij}(\tau,\tau^{\prime})=\langle\delta\phi_{i}(\tau)\delta\tilde{\phi}_{j}( \tau^{\prime})\rangle=\frac{\delta^{2}{\cal W}(\tilde{\theta},\theta)}{ \delta\theta_{i}(\tau)\delta\tilde{\theta}_{j}(\tau^{\prime})},\quad C_{ij}( \tau,\tau^{\prime})=\langle\delta\phi_{i}(\tau)\delta\phi_{j}(\tau^{\prime}) \rangle=\frac{\delta^{2}{\cal W}(\tilde{\theta},\theta)}{\delta\theta_{i}( \tau)\delta\theta_{j}(\tau^{\prime})} \tag{3.4}\]
and generally depend on \(\tilde{\theta}\), \(\theta\). We use continuous time notation in this section for brevity.
From the free energy one can define the _effective action_\(\Gamma\) as the Legendre transform of the free energy with respect to generic field means (also known as "background fields" in field theory) that we collectively denote \(\tilde{\mu}\) and \(\mu\), to represent the set of means for all species and all times,
\[\Gamma(\tilde{\mu},\mu)=\mathop{\mathrm{extr}}\limits_{\tilde{\theta},\theta}\left\{\ln\int d\tilde{\phi}\,d\phi\,\exp\left[S_{\alpha}(\tilde{\phi},\phi)-\sum_{i}\left(\int d\tau\,\tilde{\mu}_{i}(\tau)\tilde{\theta}_{i}(\tau)+\int d\tau\,\mu_{i}(\tau)\theta_{i}(\tau)\right)\right]\right\} \tag{3.5}\]
The extremization conditions yield
\[\mu_{i}(\tau)=\frac{\delta{\cal W}(\tilde{\theta},\theta)}{\delta\theta_{i}( \tau)}\qquad\tilde{\mu}_{i}(\tau)=\frac{\delta{\cal W}(\tilde{\theta},\theta)} {\delta\tilde{\theta}_{i}(\tau)} \tag{3.6}\]
which by comparison with eq. (3.3) shows that \(\mu\) and \(\tilde{\mu}\) are indeed the means of the field variables. The generating fields \(\tilde{\theta},\theta\) can be viewed as Lagrange multipliers whose value is determined by extremizing the free energy with the constraint that the fields \(\tilde{\phi}_{i}(\tau)\) and \(\phi_{i}(\tau)\) have mean values \(\tilde{\mu}_{i}(\tau)\) and \(\mu_{i}(\tau)\)\(\forall\ i,\tau\). From the Legendre transform property, the values of the generating fields are given by
\[\theta_{i}(\tau)=-\frac{\delta\Gamma(\tilde{\mu},\mu)}{\delta\mu_{i}(\tau)} \qquad\tilde{\theta}_{i}(\tau)=-\frac{\delta\Gamma(\tilde{\mu},\mu)}{\delta \tilde{\mu}_{i}(\tau)} \tag{3.7}\]
From eq. (3.7) one then obtains the _variational equations_ for the means,
\[\frac{\delta\Gamma(\tilde{\mu},\mu)}{\delta\mu_{i}(\tau)}=0\qquad\frac{\delta \Gamma(\tilde{\mu},\mu)}{\delta\tilde{\mu}_{i}(\tau)}=0 \tag{3.8}\]
by asserting that the physical problem has no generating fields, i.e. \(\tilde{\theta}_{i}(\tau)=\theta_{i}(\tau)=0\ \forall\ i,\tau\). These equations tell us that the physical means \(\tilde{\mu}\) and \(\mu\) are those that make the value of \(\Gamma(\tilde{\mu},\mu)\) an extremum, motivating the name effective action for this quantity. The variational equations define the equations of motion of \(\mu_{i}(\tau)\) and \(\tilde{\mu}_{i}(\tau)\). For the conjugate means we know that the solution will be \(\tilde{\mu}_{i}(\tau)=0\ \forall\ i,\tau\) from the arguments in section 2.3.
By considering more general derivatives of the effective action, one obtains the so-called vertex functions. We write the definition for the case of a single species but this can be easily extended to the multiple species case with extra indices in \(\Gamma^{l,m}\):
\[\Gamma^{l,m}(\tau_{1},\ldots,\tau_{l},\tau^{\prime}_{1},\ldots,\tau^{\prime}_ {m})=\frac{\delta^{(l+m)}\Gamma}{\delta\tilde{\mu}(\tau_{1})\ldots\delta \tilde{\mu}(\tau_{l})\delta\mu(\tau^{\prime}_{1})\ldots\delta\mu(\tau^{\prime }_{m})}\Bigg{|}_{\mu(\tau)=\tilde{\mu}(\tau)=0\ \forall\ \tau} \tag{3.9}\]
Unlike in typical quantum field theory settings, the physical means of the \(\phi\)-fields will be nonzero in our case. Writing these as \(\mu^{*}\), a second set of vertex functions can be defined via derivatives evaluated not at zero but at the physical means:
\[\Gamma^{l,m*}(\tau_{1},\ldots,\tau_{l},\tau^{\prime}_{1},\ldots,\tau^{\prime}_ {m})=\left.\frac{\delta^{(l+m)}\Gamma}{\delta\tilde{\mu}(\tau_{1})\ldots \delta\tilde{\mu}(\tau_{l})\delta\mu(\tau^{\prime}_{1})\ldots\delta\mu(\tau^{ \prime}_{m})}\right|_{\mu(\tau)=\mu^{*}(\tau),\tilde{\mu}(\tau)=0\ \forall\ \tau} \tag{3.10}\]
We will now describe how to calculate the physical means \(\mu^{*}(\tau)\) using the vertex functions. The effective action \(\Gamma\) can be reconstructed from the set of vertex functions by a Taylor expansion around \(\mu=\tilde{\mu}=0\):
\[\Gamma(\tilde{\mu},\mu)=\sum_{j,k}\frac{1}{k!j!}\Gamma^{k,j}:\tilde{\mu}^{k} \mu^{j} \tag{3.11}\]
where \(\Gamma^{k,j}\) is the vertex function at zero means as before. The ":" notation indicates contraction across all the temporal "indices" of the vertex function and is a shorthand for
\[\Gamma^{k,j}:\tilde{\mu}^{k}\mu^{j}\equiv\int d\tau_{1}\cdots d\tau_{k}d\tau^ {\prime}_{1}\cdots d\tau^{\prime}_{j}\Gamma^{k,j}(\tau_{1},\ldots,\tau_{k}, \tau^{\prime}_{1},\ldots,\tau^{\prime}_{j})\tilde{\mu}(\tau_{1})\cdots\tilde {\mu}(\tau_{k})\mu(\tau^{\prime}_{1})\cdots\mu(\tau^{\prime}_{j}) \tag{3.12}\]
Differentiating eq. (3.11) w.r.t \(\tilde{\mu}(\tau)\) and setting \(\tilde{\mu}=\tilde{\mu}^{*}\) and \(\mu=\mu^{*}\) we have for the first order vertex function at the physical means
\[\Gamma^{1,0*}(\tau)=\sum_{j,k}\frac{1}{(k-1)!j!}\Gamma^{k,j}(\tau,\ldots):( \tilde{\mu}^{*})^{k-1}(\mu^{*})^{j} \tag{3.13}\]
Since the physical means of the conjugate fields vanish, \(\tilde{\mu}^{*}=0\), we only need to keep the \(k=1\) term and obtain
\[\Gamma^{1,0*}(\tau)=\sum_{j}\frac{1}{j!}\Gamma^{1,j}(\tau,\dots):(\mu^{*})^{j} \tag{3.14}\]
Now from the variational equation of motion (3.8), \(\Gamma^{1,0*}(\tau)=0\ \forall\ \tau\). Writing the \(j=0\) and \(j=1\) terms on the right explicitly then gives as the equation for the physical means \(\mu^{*}\),
\[0=\Gamma^{1,0}(\tau)+\Gamma^{1,1}(\tau,\cdot):\mu^{*}+\sum_{j\geq 2}\frac{1}{j! }\Gamma^{1,j}(\tau,\dots):(\mu^{*})^{j} \tag{3.15}\]
The \(\Gamma^{1,1}\)-term can now be simplified using the generic Legendre transform relation that the second derivatives of \({\cal W}\) and \(\Gamma\) - which give the connected two-point correlation functions and the two-point vertex functions, respectively - are, up to a minus sign, inverses of each other [45, 46]. Writing this result in \(2\times 2\)-block form for the \(\tilde{\theta}\) and \(\theta\) components, respectively, we have for the physical (\(\tilde{\theta}=\theta=0\)) solution with response and correlation functions \(R^{*}\) and \(C^{*}\):
\[\begin{pmatrix}0&(R^{*})^{\rm T}\\ R^{*}&C^{*}\end{pmatrix}=-\begin{pmatrix}\Gamma^{2,0*}&\Gamma^{1,1*}\\ (\Gamma^{1,1*})^{\rm T}&\Gamma^{0,2*}\end{pmatrix}^{-1} \tag{3.16}\]
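The inverse here can be evaluated explicitly by block inversion; one verifies by direct multiplication that

\[\begin{pmatrix}0&(R^{*})^{\rm T}\\ R^{*}&C^{*}\end{pmatrix}^{-1}=\begin{pmatrix}-(R^{*})^{-1}C^{*}\left((R^{*})^{\rm T}\right)^{-1}&(R^{*})^{-1}\\ \left((R^{*})^{\rm T}\right)^{-1}&0\end{pmatrix}\]

so that, comparing blocks with the r.h.s. of eq. (3.16), one also reads off \(\Gamma^{2,0*}=(R^{*})^{-1}C^{*}((R^{*})^{\rm T})^{-1}\).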
This implies \(\Gamma^{1,1*}=-(R^{*})^{-1}\) and \(\Gamma^{0,2*}=0\). It turns out that the same relations also hold at zero means, i.e. \(\Gamma^{1,1}=-R^{-1}\) and \(\Gamma^{0,2}=0\). We thus have
\[R^{-1}(\tau,\cdot):\mu^{*}=\Gamma^{1,0}(\tau)+\sum_{j\geq 2}\frac{1}{j!}\Gamma^{ 1,j}(\tau,\dots):(\mu^{*})^{j} \tag{3.17}\]
We can simplify further on the l.h.s. by using the Feynman-Dyson equation discussed below (see eq. (4.20)), i.e. \(R^{-1}=R_{0}^{-1}-\Sigma\) with the response self-energy \(\Sigma\), like \(R\), evaluated at \(\mu=0\) and \(R_{0}\) the bare propagator,
\[R_{0}^{-1}(\tau,\cdot):\mu^{*}=\Gamma^{1,0}(\tau)+\Sigma(\tau,\cdot):\mu^{*}+ \sum_{j\geq 2}\frac{1}{j!}\Gamma^{1,j}(\tau,\dots):(\mu^{*})^{j} \tag{3.18}\]
We also write an equivalent form obtained by multiplying by the bare response function, which will have a direct diagrammatic analogue:
\[\mu^{*}(\tau)=R_{0}(\tau,\cdot):\Gamma^{1,0}(\cdot)+R_{0}(\tau,\cdot):\Sigma( \cdot,\cdot):\mu^{*}+\sum_{j\geq 2}\frac{1}{j!}R_{0}(\tau,\cdot):\Gamma^{1,j}( \cdot,\dots):(\mu^{*})^{j} \tag{3.19}\]
If we define a new function of a single time argument, \(\Omega^{1,j}(\tau)\), as the contraction of \(\Gamma^{1,j}\) with \(j\) factors of \(\mu^{*}\),
\[\Omega^{1,j}(\tau)=\frac{1}{j!}\Gamma^{1,j}(\tau,\dots):(\mu^{*})^{j} \tag{3.20}\]
then the equation of motion for the means can be written more explicitly as
\[\int_{0}^{\tau}d\tau^{\prime}\,R_{0}^{-1}(\tau,\tau^{\prime})\mu^{*}(\tau^{ \prime})=\Gamma^{1,0}(\tau)+\int_{0}^{\tau}d\tau^{\prime}\,\Sigma(\tau,\tau^{ \prime})\mu^{*}(\tau^{\prime})+\sum_{j\geq 2}\Omega^{1,j}(\tau) \tag{3.21}\]
Using the form of \(R_{0}^{-1}\) from eq. (2.46) and dropping the asterisks on the physical means again for notational simplicity, we have finally
\[(\partial_{\tau}+k_{2})\mu(\tau)=\Gamma^{1,0}(\tau)+\int_{0}^{\tau}d\tau^{ \prime}\,\Sigma(\tau,\tau^{\prime})\mu(\tau^{\prime})+\sum_{j\geq 2}\Omega^{1,j}(\tau) \tag{3.22}\]
We will deploy the above equation to determine the mean copy numbers of the dynamics, \(\mu(\tau)\), by constructing the vertex functions \(\Gamma^{1,m}\) (hence \(\Omega^{1,m}\)) and the self-energy \(\Sigma\) using diagrammatic perturbation theory. Independently of specific calculations, what is notable is that eq. (3.22) contains _memory corrections_ in the last two terms on the r.h.s., which depend on the entire history of the copy numbers up to time \(\tau\). Such memory terms thus arise naturally when applying the effective action formalism to Doi-Peliti field theory.
## 4 Single species interactions: \(A+A\to A\)
We will now consider a system with particles of only a single species, \(A\), with the baseline creation and destruction reactions \(\varnothing\xrightleftharpoons[k_{2}]{k_{1}}A\) and in addition the binary coagulation reaction
\[A+A\xrightarrow{k_{3}}A \tag{4.1}\]
with rate \(k_{3}\). As throughout we will assume that the initial distribution of copy numbers is Poissonian. With the binary reaction present, this system is not exactly solvable in our formalism because the path integral is no longer Gaussian. Instead it must be dealt with perturbatively, as we will illustrate in this section.
### 4.1 Internal vertices and Feynman rules
As explained above, we include the term \(k_{1}\tilde{\phi}\) from the creation reaction in the interaction Hamiltonian \(H_{\rm int}\) so that \(H_{0}\) is purely quadratic. The same applies to the initial condition term \(\bar{n}_{0}\tilde{\phi}(0)\). For the coagulation reaction the Hamiltonian operator eq. (2.11) is \(k_{3}[\hat{a}^{\dagger}\hat{a}^{2}-(\hat{a}^{\dagger})^{2}\hat{a}^{2}]\), which after replacing operators by fields and Doi-shift becomes \(-k_{3}(\tilde{\phi}+\tilde{\phi}^{2})\phi^{2}\), giving for the interaction part of the action
\[S_{\rm int}=\Delta t\sum_{\tau=\Delta t}^{t}\Bigl{[}k_{1}\tilde{\phi}(\tau)-k _{3}\tilde{\phi}(\tau)\phi(\tau_{-})\phi(\tau_{-})-k_{3}\tilde{\phi}(\tau) \tilde{\phi}(\tau)\phi(\tau_{-})\phi(\tau_{-})\Bigr{]}+\bar{n}_{0}\tilde{\phi }(0) \tag{4.2}\]
Diagrammatically this is represented by the following three internal vertices:
(4.3)
Here we use outgoing arrows to indicate \(\tilde{\phi}\) legs; legs without arrows are \(\phi\) legs. The \(\phi\) legs are always one time step behind the \(\tilde{\phi}\) legs at the same vertex. We have three types of vertices:
1. The one-legged vertex with just a \(\tilde{\phi}\) leg is a source term and indicates the creation of \(A\) with the rate \(k_{1}(\tau)\) at time \(\tau\). We have included the initial condition term here by defining a time-dependent creation rate \(k_{1}(\tau)=k_{1}+\delta_{\tau,0}\bar{n}_{0}/\Delta t\).
2. The three-legged vertex with one \(\tilde{\phi}\) leg and two \(\phi\) legs reflects the coagulation reaction we are actually considering, with two incoming A molecules at time \(\tau_{-}\) that react at time \(\tau\) with rate \(k_{3}\) to form one A molecule.
3. In analogy with the MSRJD path integral and the associated Langevin equation with multiplicative noise [40, 47], the four-legged vertex can be interpreted as describing the noise in the system whereby two \(A\) molecules meet at rate \(k_{3}\) but do not react, resulting in two outgoing \(A\) molecules.
The perturbative expansion is now set up by expressing averages in the full non-Gaussian path integral as \(\langle\ldots\rangle=\langle\ldots e^{\alpha S_{\rm int}}\rangle_{0}\) where the second average is taken across the Gaussian baseline defined by the quadratic action \(S_{0}\). One then expands in powers of \(\alpha\) or equivalently \(S_{\rm int}\) and evaluates the resulting Gaussian averages using Wick's theorem. Each factor in a Wick pairing corresponds to a bare second order correlation function or "propagator", in our case specifically a bare response function.
We then have the following **rules for constructing diagrams** in the perturbative expansion:
1. Feynman diagrams can be constructed using the internal vertices of the interacting action; a summation over internal time indices of the vertex functions is always implied.
2. Given the causal nature of the bare response function, we can only join a \(\tilde{\phi}\) leg at an earlier time, marked by an outgoing arrow, with a \(\phi\) leg (without arrow) at the same or later time. This forms a response function, \(R_{0}\), which connects two vertices.
3. Two \(\tilde{\phi}\) or two \(\phi\) legs cannot be joined to each other because they result in \(\langle\tilde{\phi}\tilde{\phi}\rangle_{0}\) or \(\langle\phi\phi\rangle_{0}\) correlations, both of which are zero because the baseline action only has \(\phi\tilde{\phi}\) terms.
4. A \(\tilde{\phi}\) leg cannot connect to a \(\phi\) leg at the same vertex because the \(\phi\) leg is a time step behind, resulting in a vanishing response function.
5. Again because of the causality of the response function, there must not be any closed time loops in the diagrams. This implies there must be a consistent flow of time along the response functions; we will draw this from right (early times) to left (later times) below.
6. The vertex functions \(\Gamma^{l,m}\) are drawn with \(l\) amputated \(\tilde{\phi}\) legs (with outgoing arrows) and \(m\) amputated \(\phi\) legs (without arrows) where propagators can connect, to form the appropriate diagrams that contribute to connected \(n\)-point correlation functions. When connecting propagators, the direction of the arrows must be respected.
7. For the vertex functions \(\Gamma^{l,m}\), it is known from field theory [46] that these are constructed using only one particle irreducible (1PI) diagrams, that is diagrams that cannot be split into two separate components when any propagator (i.e. response) line is cut. In particular such diagrams cannot contain tadpoles [48], i.e. sub-diagrams with only internal vertices that are connected to the rest of the diagram by a single propagator line. For \(\Gamma^{l,m*}\), on the other hand, tadpoles are included, i.e. here all diagrams contribute that are 1PI in the broader sense that they cannot be separated by cutting a single response line, into two components that each contain at least one external connection via an amputated leg.
### 4.2 Vertex functions and equation of motion for the mean
We begin by noting that the self-energy \(\Sigma\) that appears in eq. (3.19) vanishes for the reaction \(A+A\to A\) because there is no vertex with only one outgoing and only one incoming leg; we will see this more explicitly in section 4.3. The diagrammatic analogue of eq. (3.19) is then
(4.4)
The l.h.s. diagrammatically represents the mean or one-point function at time \(\tau\), i.e. \(\mu(\tau)\). In the diagrams on the r.h.s. the shaded circles represent the vertex functions \(\Gamma^{1,m}\) connected to the left external vertex \(\phi(\tau)\), by a line in the middle which is the response function with the arrow specifying the direction of time, from right to left. The sum over the internal time index where the response function connects to the vertex function is implied. The empty circles represent \(\mu\) as on the left, and we use dashed lines to connect them to the vertex functions, to indicate that there is no propagator there. These together make up the \(\Omega^{1,m}\) as in eq. (3.20), which are the vertex functions with the \(m\) amputated \(\phi(\tau_{1}),\dots,\phi(\tau_{m})\) legs replaced by \(m\) factors of \(\mu(\tau_{1}),\dots,\mu(\tau_{m})\) and summed over the internal times \(\tau_{1},\dots,\tau_{m}\).
The 1PI one-point vertex function with one \(\tilde{\phi}\) leg, i.e. \(\Gamma^{1,0}\), for our example system is represented diagrammatically as
\[\Gamma^{1,0}\quad=\quad\text{[generic one-point vertex function symbol]}\quad=\quad\text{[$k_{1}$ source vertex]} \tag{4.5}\]
The first diagram gives our generic diagrammatic notation for such a vertex function; the second equality asserts that for our system there is only a single 1PI diagram contributing to \(\Gamma^{1,0}\) with the value \(\alpha k_{1}(\tau)\), which is of \(O(\alpha)\) as it contains one internal vertex; we do not write these powers of \(\alpha\) explicitly in the
diagrams. The dashed lines indicate the amputated \(\tilde{\phi}\) leg where a propagator can connect. The arrow also indicates the direction of the flow of time in the vertex function.
Taking only the first term on the r.h.s. of eq. (4.4), the mean is given by
\[\mu(\tau)=\Delta t\sum_{\tau^{\prime}}R_{0}(\tau,\tau^{\prime})\alpha k_{1}(\tau ^{\prime}) \tag{4.6}\]
Multiplying by \(R_{0}^{-1}\) we get
\[\Delta t\sum_{\tau^{\prime}}R_{0}^{-1}(\tau,\tau^{\prime})\mu(\tau^{\prime})=\alpha k_{1}(\tau) \tag{4.7}\]
In continuous time this gives the equation of motion for the mean,
\[(\partial_{\tau}+k_{2})\mu(\tau)=\alpha k_{1}(\tau) \tag{4.8}\]
which is the same result as eq. (2.40) obtained in the baseline theory. In continuous time \(k_{1}(\tau)=k_{1}+\bar{n}_{0}\delta(\tau)\) and the \(\delta(\tau)\) term just fixes the initial condition \(\mu(\tau=0^{+})=\bar{n}_{0}\) as expected. We leave this initial condition term implicit throughout the rest of the paper and write \(k_{1}\) again instead of \(k_{1}(\tau)\).
To account for the effect of the interactions, we include the other terms on the r.h.s. of eq. (4.4). For this, we consider the vertex functions \(\Gamma^{1,m}\), i.e. with one amputated \(\tilde{\phi}\) leg and any number of amputated \(\phi\) legs, at zero mean. The simplest case is \(m=2\), for which the expansion up to \(O(\alpha^{5})\) is given by
\[\Gamma^{1,2}=\text{[series of bubble diagrams]} \tag{4.9}\]
So far we have only considered \(O(\alpha)\) diagrams, which therefore have a single internal vertex. Higher order diagrams involve loops built using the four-legged internal vertex, such as the diagrams in eq. (4.9). The \(O(\alpha^{2})\) contribution there is
\[\Gamma^{1,2}(\tau,\tau^{\prime},\tau^{\prime\prime})\Big{|}_{\alpha^{2}}=\text{[one-loop bubble diagram]}\]

[... intervening material, including eqs. (4.11)-(4.22) and the start of section 4.3 on the self-energy and the Feynman-Dyson equation, is not recoverable ...]
Using the inverse bare response \(R_{0}^{-1}(\tau,\tau^{\prime})=(\partial_{\tau}+k_{2})\delta(\tau-\tau^{\prime})\) we then have
\[\partial_{\tau}R(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime})-k_{2}R(\tau, \tau^{\prime})+\int d\tau^{\prime\prime}\,\Sigma(\tau,\tau^{\prime\prime})R( \tau^{\prime\prime},\tau^{\prime}) \tag{4.23}\]
An exactly analogous relation also holds between the _physical dressed propagator_\(R^{*}\) and the corresponding self-energy \(\Sigma^{*}\):
\[\partial_{\tau}R^{*}(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime})-k_{2}R^{*} (\tau,\tau^{\prime})+\int d\tau^{\prime\prime}\,\Sigma^{*}(\tau,\tau^{\prime \prime})R^{*}(\tau^{\prime\prime},\tau^{\prime}) \tag{4.24}\]
Both \(R^{*}\) and \(\Sigma^{*}\) are defined at the physical means \(\mu=\mu^{*}\), i.e. with tadpoles included in the diagrams as explained in section 4.1.
For the \(A+A\to A\) reaction as pointed out earlier we have \(\Sigma=0\) because there are no vertices with only one \(\phi\) leg. Diagrams for the physical self-energy \(\Sigma^{*}\) can also contain the \(k_{1}\)-vertex but as this has only one leg, at least one other internal vertex is required and we can include the three- and four-legged internal vertices from the interacting action to form diagrams for the self-energy. To organize these diagrams it is useful to recall the lowest order diagrams (to \(O(\alpha^{3})\)) for the vertex functions \(\Gamma^{1,m}\):
(4.25)
From these we can now construct diagrams for \(\Sigma^{*}\), that is the self-energy with tadpoles, by replacing \(m-1\) amputated \(\phi\) legs by \(m-1\) factors of \(\mu\): this leaves one amputated \(\tilde{\phi}\) leg and one amputated \(\phi\) leg as required for self-energy diagrams. We obtain in this way
(4.26)
The value of these self-energy diagrams to \(O(\alpha^{2})\) is
\[\Sigma^{*}(\tau,\tau^{\prime})\Big{|}_{\alpha^{2}}=2\frac{\delta_{\tau_{-},\tau^{\prime}}}{\Delta t}(-\alpha k_{3})\mu(\tau^{\prime})+4(-\alpha k_{3})^{2}R_{0}^{2}(\tau_{-},\tau_{+}^{\prime})\mu(\tau^{\prime}) \tag{4.27}\]
The equation for \(R^{*}\) to \(O(\alpha^{2})\) in continuous time thus becomes
\[\partial_{\tau}R^{*}(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime})-k_{2}R^{* }(\tau,\tau^{\prime})-2\alpha k_{3}\mu(\tau)R^{*}(\tau,\tau^{\prime})+4(- \alpha k_{3})^{2}\int d\tau^{\prime\prime}R_{0}^{2}(\tau,\tau^{\prime\prime}) \mu(\tau^{\prime\prime})R^{*}(\tau^{\prime\prime},\tau^{\prime}) \tag{4.28}\]
In the next section we will see how the bare response in our diagrams can be replaced by the physical dressed response discussed here; this sums an infinite hierarchy of diagrams and leads to the self-consistent response function approximation.
### 4.4 Self-consistent response function approximation
Up to this point, the equations for the mean copy number \(\mu\) we have derived involve the bare response \(R_{0}\). To implicitly include higher order diagrams, a standard approach is the Hartree-Fock (HF) approximation [44], where \(R_{0}\) is replaced by the dressed response at zero mean, \(R\). However, in the system we are considering, the two are identical as \(\Sigma=0\) so nothing would be gained.
We will therefore proceed differently and self-consistently replace the bare propagator \(R_{0}\) by the dressed _physical_ propagator \(R^{*}\), order by order in \(\alpha\). In contrast to the standard HF approach we also do this not just to one-loop order but in all the diagrams that we keep, irrespective of the number of loops they have. Tadpole diagrams in \(R^{*}\) then produce diagrams, in e.g. \(\Omega^{1,2}\), that are not part of this quantity as originally defined, and we are effectively including contributions from \(\Omega^{1,3}\) etc. in \(\Omega^{1,2}\).
For notational simplicity and as we will no longer need to refer to the propagator at zero mean, we will from now on use \(R\) to denote \(R^{*}\), the dressed propagator at the physical copy number means. In the diagrams we will similarly use double lines with an arrow as in eq. (4.20) to denote \(R^{*}\).
Every self-consistent replacement of \(R_{0}\) by \(R\) sums up an infinite series of diagrams, which means we only have to explicitly consider a subset of diagrams in the diagrammatic expansion of the vertex functions. We will consider only the following series of diagrams for \(\Gamma^{1,2}\) or equivalently \(\Omega^{1,2}\),
\[\Omega^{1,2}=\text{[series of bubble diagrams with dressed response lines]}+\ldots \tag{4.29}\]
where we have replaced the bare \(R_{0}\) by the dressed \(R\), denoted by the double lines. To see the effect of this replacement we can look at the lowest order diagram that is affected, namely the \(O(\alpha^{2})\) diagram here. Expanding the dressed propagator also to \(O(\alpha^{2})\) - by stringing together self-energy diagrams from eq. (4.27) - we get up to \(O(\alpha^{4})\)
(4.30)
Keeping the analogous diagrams in \(\Sigma^{*}\) and making the same self-consistent approximation gives
(4.31)
Again one can expand the dressed response in each of these diagrams to generate the infinite series of diagrams that we have summed up. For example, expanding the propagator to \(O(\alpha^{2})\) in the \(O(\alpha^{2})\) diagram above yields to \(O(\alpha^{4})\)
(4.32)
Both in the vertex functions and the self-energy, the self-consistent replacement procedure thus implicitly includes an infinite series of diagrams. To avoid double counting we must therefore exclude any diagrams
containing self-energy parts, because these are automatically generated by the self-consistent replacement procedure. As any self-energy part can be separated from a diagram by cutting two lines, namely its incoming and outgoing response line, this can be formalized in the following _additional Feynman rule_:
For vertex functions and the self-energy, we keep all diagrams that do not separate when two lines are cut. Diagrams that do separate in this way are also kept, unless one of the parts has one incoming and one outgoing line, as this would form a self-energy contribution.
For example, all the "bubble" diagrams in eq. (4.9) do separate if we cut two response lines in the same loop, but neither of the resulting pieces are self-energy components. On the other hand, if we cut the bottom two lines of the diagram in eq. (4.19) then we get a self-energy component and hence that diagram must be excluded.
### 4.5 Self-consistent bubble resummation (SBR)
We are now ready to state the approximation we will use to determine copy number means as well as (physical) response functions. We retain in the exact equation of motion eq. (3.22) only the \(\Omega^{1,2}\) term and discard \(\Omega^{1,m}\) with \(m\geq 3\). Bearing in mind the interpretation that \(\Omega^{1,2}\) gives all possible ways in which two \(A\) particles interact in a time delayed fashion to form one particle, we are thus neglecting effective interactions between more than two particles. This is physically reasonable as there are no direct reactions between more than two particles in the \(A+A\to A\) system.
In the diagrammatic series eq. (4.9) for \(\Gamma^{1,2}\) we have only _bubble diagrams_. Writing out the corresponding \(\Omega^{1,2}\) we have
\[\Omega^{1,2}(\tau) =-\alpha k_{3}\mu^{2}(\tau_{-})+2(-\alpha k_{3})^{2}\Delta t\sum_ {\tau^{\prime}}R_{0}^{2}(\tau_{-},\tau^{\prime})\mu^{2}(\tau_{-}^{\prime})+4( -\alpha k_{3})^{3}(\Delta t)^{2}\sum_{\tau^{\prime},\tau^{\prime\prime}}R_{0}^ {2}(\tau_{-},\tau^{\prime})R_{0}^{2}(\tau_{-}^{\prime},\tau^{\prime\prime})\mu ^{2}(\tau_{-}^{\prime\prime})\] \[+8(-\alpha k_{3})^{4}(\Delta t)^{3}\sum_{\begin{subarray}{c}\tau ^{\prime},\tau^{\prime\prime}\\ \tau^{\prime\prime\prime}\end{subarray}}R_{0}^{2}(\tau_{-},\tau^{\prime})R_{0}^{2 }(\tau_{-}^{\prime},\tau^{\prime\prime})R_{0}^{2}(\tau_{-}^{\prime\prime}, \tau^{\prime\prime\prime})\mu^{2}(\tau_{-}^{\prime\prime\prime})+\ldots \tag{4.33}\]
As every order here acquires one extra factor of \((-2\alpha k_{3})R_{0}^{2}\), this series of diagrams can be summed as a geometric series (see appendix A.6), giving in the continuous time limit
\[\Omega^{1,2}(\tau)=-\alpha k_{3}\int_{0}^{\tau}d\tau^{\prime}\left(\delta( \tau-\tau^{\prime})+2\alpha k_{3}R_{0}^{2}(\tau,\tau^{\prime})\right)^{-1}\mu ^{2}(\tau^{\prime}) \tag{4.34}\]
where the inverse is now in the operator sense, generalizing the matrix inverse one has in the discrete time case. A similar geometric series summation has also been considered by Cardy [40] in the context of asymptotic vertex renormalization.
The resulting equation of motion for \(\mu\), i.e. eq. (3.22), then simplifies to
\[(\partial_{\tau}+k_{2})\mu(\tau)=\alpha k_{1}+\Omega^{1,2}(\tau) \tag{4.35}\]
This gives all quadratic contributions in \(\mu\) on the r.h.s. We call this the _bubble resummation (BR)_ approximation.
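In discrete time the operator inverse in eq. (4.34) becomes the inverse of a lower triangular matrix, so by causality eqs. (4.34) and (4.35) can be integrated in a single forward pass. A minimal sketch follows (ours, not the authors' code; it ignores the one-step \(\tau_{-}\) offsets, an \(O(\Delta t)\) effect, and uses the standard parameter set of section 4.7):

```python
import numpy as np

# Bare bubble resummation (BR), eqs. (4.34)-(4.35), in discrete time (sketch, ours).
# The kernel K = I + 2*alpha*k3*dt*R0^2 is lower triangular by causality, so
# y = K^{-1} mu^2 can be built by forward substitution while mu is Euler-stepped.
alpha, k1, k2, k3 = 1.0, 1.0, 1.0, 1.0
dt, B, mu0 = 0.002, 1000, 4.0 / 3.0
taus = dt * np.arange(B)

lag = taus[:, None] - taus[None, :]
R0 = np.exp(-k2 * lag) * (lag >= 0)          # bare response, eq. (2.44)
K = np.eye(B) + 2 * alpha * k3 * dt * R0**2

mu = np.empty(B); mu[0] = mu0
y = np.empty(B)                              # y = K^{-1} mu^2; Omega^{1,2} = -alpha*k3*y
for j in range(B):
    y[j] = (mu[j]**2 - K[j, :j] @ y[:j]) / K[j, j]   # forward substitution
    if j + 1 < B:                            # Euler step of eq. (4.35)
        mu[j + 1] = mu[j] + dt * (alpha * k1 - k2 * mu[j] - alpha * k3 * y[j])
print("BR mean copy number at t = 2:", mu[-1])
```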
Finally we consider the same series of diagrams for \(\Omega^{1,2}\) but with the self-consistent replacement of the bare response \(R_{0}\) by the physical dressed response \(R\) as in eq. (4.29); the result is as in eq. (4.33) but with \(R_{0}\) replaced by \(R\), giving
\[\Omega^{1,2}(\tau)=-\alpha k_{3}\int_{0}^{\tau}d\tau^{\prime}\left(\delta(\tau -\tau^{\prime})+2\alpha k_{3}R^{2}(\tau,\tau^{\prime})\right)^{-1}\mu^{2}( \tau^{\prime}) \tag{4.36}\]
As explained in the previous section, this effectively includes an infinite series of further diagrams that correspond to contributions from \(\Omega^{1,m}\) with \(m\geq 3\).
The full response \(R\) then also needs to be determined via the self-energy \(\Sigma^{*}\) and we use the analogous series of self-consistent bubble diagrams for this, as shown in eq. (4.31). These are the only diagrams
containing only a single factor of \(\mu\). One could interpret these as self-energy contributions where one particle interacts with an "external" particle to coagulate, possibly with some time delay, into a single particle. The resulting self-energy is
\[\Sigma^{*}(\tau,\tau^{\prime}) =2\frac{\delta_{\tau_{-},\tau^{\prime}}}{\Delta t}(-\alpha k_{3})\mu(\tau^{\prime}) +4(-\alpha k_{3})^{2}R^{2}(\tau_{-},\tau^{\prime}_{+})\mu(\tau^{\prime})+8(- \alpha k_{3})^{3}\Delta t\sum_{\tau^{\prime\prime}}R^{2}(\tau_{-},\tau^{\prime \prime})R^{2}(\tau^{\prime\prime}_{-},\tau^{\prime}_{+})\mu(\tau^{\prime})\] \[+16(-\alpha k_{3})^{4}(\Delta t)^{2}\sum_{\tau^{\prime\prime},\tau ^{\prime\prime\prime}}R^{2}(\tau_{-},\tau^{\prime\prime})R^{2}(\tau^{\prime \prime}_{-},\tau^{\prime\prime\prime})R^{2}(\tau^{\prime\prime\prime}_{-}, \tau^{\prime}_{+})\mu(\tau^{\prime})+\ldots \tag{4.37}\]
Again this can be summed in the continuous time limit (see appendix A.6) to yield
\[\Sigma^{*}(\tau,\tau^{\prime})=\left(-2\alpha k_{3}\right)\left(\delta(\tau- \tau^{\prime})+2\alpha k_{3}R^{2}(\tau,\tau^{\prime})\right)^{-1}\mu(\tau^{ \prime}) \tag{4.38}\]
The physical response is then given by the Feynman-Dyson equation
\[(\partial_{\tau}+k_{2})R(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime})+\int_{0} ^{\tau}d\tau^{\prime\prime}\Sigma^{*}(\tau,\tau^{\prime\prime})R(\tau^{\prime \prime},\tau^{\prime}) \tag{4.39}\]
and the last two equations have to be solved self-consistently, i.e. simultaneously, with eq. (4.35) and eq. (4.36). We call this approach of summing infinitely many bubble diagrams and replacing the bare response by the physical dressed response, the _self-consistent bubble resummation (SBR)_ method. As explained, the self-consistency captures in the equations for both means and response some but not all (see section 4.6) corrections of higher than quadratic order in \(\mu\).
Eq. (4.39) for the response \(R\) is an integro-differential equation of Kadanoff-Baym type [49] as occurs in Quantum Field Theory, but in what would be imaginary time there. Within the scope of this paper we simply integrate it on a two-dimensional time grid with a fixed step size, but there are more efficient and adaptive methods available that could be used in future, see for example [50, 51]. Even with our straightforward numerical approach, the results in section 4.7 will show superior performance and stability of SBR over the mean field solution and other common approximation methods.
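To make the scheme concrete, the following sketch (ours, not the authors' code) implements one such fixed-grid solver. An operator kernel \(A(\tau,\tau^{\prime})\) is represented by the matrix \(A[j,i]\), so compositions carry a factor \(\Delta t\) and \(\delta(\tau-\tau^{\prime})\) becomes \(I/\Delta t\); the one-step \(\tau_{-}\) offsets are again ignored:

```python
import numpy as np

# SBR sketch (ours): solve eqs. (4.35), (4.36), (4.38) and (4.39) self-consistently
# on a fixed time grid (standard parameter set). Not optimized; O(B^3) per sweep.
alpha, k1, k2, k3 = 1.0, 1.0, 1.0, 1.0
dt, B, mu0 = 0.002, 1000, 4.0 / 3.0
lag = dt * (np.arange(B)[:, None] - np.arange(B)[None, :])
R = np.exp(-k2 * lag) * (lag >= 0)           # initialize the dressed R with R0
mu = np.full(B, mu0)

for sweep in range(30):                      # iterate mu <-> R to a fixed point
    Kinv = np.linalg.inv(np.eye(B) + 2 * alpha * k3 * dt * R**2)  # lower triangular
    om = -alpha * k3 * Kinv @ mu**2                               # eq. (4.36)
    Sigma = (-2 * alpha * k3 / dt) * Kinv * mu[None, :]           # eq. (4.38) kernel

    mu_new = np.empty(B); mu_new[0] = mu0
    for j in range(B - 1):                   # Euler step of eq. (4.35)
        mu_new[j + 1] = mu_new[j] + dt * (alpha * k1 - k2 * mu_new[j] + om[j])

    R_new = np.eye(B)                        # equal-time limit R(tau, tau) = 1
    for j in range(B - 1):                   # Euler step in tau of eq. (4.39)
        mem = dt * (Sigma[j, : j + 1] @ R_new[: j + 1, : j + 1])  # memory integral
        R_new[j + 1, : j + 1] = R_new[j, : j + 1] + dt * (-k2 * R_new[j, : j + 1] + mem)

    done = max(np.abs(mu_new - mu).max(), np.abs(R_new - R).max()) < 1e-8
    mu, R = mu_new, R_new
    if done:
        break
print("SBR mean copy number at t = 2:", mu[-1])
```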
### 4.6 Excluded diagrams
The series of bubble diagrams with the self-consistent physical dressed propagator replacement still leaves out some diagrams in the vertex functions \(\Gamma^{1,m}\) with \(m\geq 3\), starting at \(O(\alpha^{4})\). These diagrams do separate if we cut two propagator lines, but the resulting pieces do not form part of a self-energy, that is, they either have two incoming or two outgoing response lines. Up to \(O(\alpha^{5})\) the only omitted diagrams are contributions to \(\Gamma^{1,3}\):
\[\text{[two omitted diagrams contributing to }\Gamma^{1,3}\text{]} \tag{4.40}\]
Starting at \(O(\alpha^{6})\) there are omitted diagrams from even higher order vertex functions like \(\Gamma^{1,4}\), such as
\[\text{[an omitted diagram contributing to }\Gamma^{1,4}\text{]} \tag{4.41}\]
Similarly in the physical self-energy \(\Sigma^{*}\) there are diagrams we have omitted, with at least two factors of \(\mu\) and of \(O(\alpha^{3})\) or higher:
(4.42)
We have verified the diagrammatic calculations via a short-time expansion of the exact dynamics up to \(O(t^{5})\), using the hierarchy of coupled equations for the moments of the copy number distribution derived from the master equation. This can be compared to the corresponding expansion of the SBR eqs. (4.35), (4.36), (4.38) and (4.39). We find that the missing terms are accounted for precisely by the diagrams shown above. For brevity, these calculations are not shown in this paper.
### 4.7 Numerical results
In this paper we focus on the challenging regime of small copy numbers of molecules. To demonstrate the power of the SBR method (see section 4.5), we compare the mean copy number and the two-time number-number correlator for the system considered so far, i.e. a single molecular species \(A\) undergoing the reaction \(A+A\xrightarrow{k_{3}}A\) with the baseline reactions \(A\xrightarrow{k_{2}}\emptyset\) and \(\emptyset\xrightarrow{k_{1}}A\) and with an initial Poisson distribution for the number of molecules of \(A\). We compare our results and report the error relative to the numerical solution of the master equation. The latter is obtained by finite space projection [52], i.e. by truncating the state space at a sufficiently large copy number (see appendix A.8). For the SBR predictions, we use an Euler integration scheme to jointly integrate the discrete-time equations for the mean \(\mu(\tau)\) and the response \(R(\tau,\tau^{\prime})\), performing the operator inversions required in eqs. (4.36) and (4.38) as matrix inversions of the corresponding discretized quantities.
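For concreteness, here is a minimal sketch (ours) of such a finite state projection; the truncation level \(N\) is our assumption, chosen large enough that the neglected probability mass is negligible:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Master equation ground truth by finite state projection (sketch, ours): truncate
# the state space at n <= N and exponentiate the generator. Standard parameter set.
k1, k2, k3, t_end, N = 1.0, 1.0, 1.0, 2.0, 60

A = np.zeros((N + 1, N + 1))                 # generator of dP/dt = A @ P
for n in range(N + 1):
    A[n, n] -= k1                            # 0 -> A; at n = N the outflow leaks,
    if n < N:                                #   which bounds the truncation error
        A[n + 1, n] += k1
    if n >= 1:
        A[n - 1, n] += k2 * n; A[n, n] -= k2 * n       # A -> 0
    if n >= 2:
        r = k3 * n * (n - 1)                 # A + A -> A, propensity as in eq. (2.2)
        A[n - 1, n] += r; A[n, n] -= r

mu0, ns = 4.0 / 3.0, np.arange(N + 1)
P0 = np.array([np.exp(-mu0) * mu0**n / factorial(int(n)) for n in ns])  # Poisson law
Pt = expm(A * t_end) @ P0
print("mean:", ns @ Pt, " variance:", ns**2 @ Pt - (ns @ Pt)**2)        # mean near 0.8
```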
Unless otherwise stated, the simulations for this system are carried out with the following _standard parameter set_: \(k_{1}=1\), \(k_{2}=1\), \(k_{3}=1\), \(\alpha=1\), time step \(\Delta t=0.002\), total integration time \(t=2\) and \(\mu(0)=4/3\), so that the initial copy number is Poisson with this mean and variance as shown in fig. 1. As a consistency check on the numerical solution of the master equation, we also run \(10^{6}\) stochastic Gillespie simulations [20] and calculate the average and the standard deviation across these runs. As the dynamics proceeds from the initial time, the average copy number decreases to about \(0.8\) in the steady state, with the standard deviation remaining of the same order as the mean throughout. We are therefore in the regime of large stochastic fluctuations, as illustrated by the two sample trajectories in fig. 1. In the inset we show an enlarged section of the mean copy number time course from \(\tau=0.8\) to \(\tau=1\), which demonstrates that the Gillespie simulations and the master equation solution are consistent within their error bars, as they should be. Because we are in the large fluctuation regime, note that even after averaging over a million stochastic trajectories the Gillespie mean still shows discernible fluctuations. This highlights the inefficiency of stochastic simulations, which becomes even more pronounced in the case of multiple species. Since we are interested in small errors, we will therefore rely on comparisons with the numerical solution of the master equation rather than stochastic simulations. The effects of the integration time step \(\Delta t\) are discussed in appendix A.1.

Figure 1: Average copy number dynamics \(\mu(\tau)\) for the \(A+A\to A\) system with baseline creation and destruction. The system was simulated with the stochastic Gillespie algorithm, two trajectories of which are shown as orange and green dashed lines. The average and standard deviation over \(10^{6}\) such trajectories are plotted as the blue solid line and the shaded area, respectively. The mean from the numerical integration of the master equation is shown as the red dash-dotted line. Inset: enlarged part of the graph from \(\tau=0.8\) to \(\tau=1\) with the standard error as the shaded area, showing that master equation and Gillespie numerics are consistent. The simulation parameters are from the standard set given in section 4.7.
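A matching Gillespie sketch (ours) for this consistency check, with propensities following the convention of eq. (2.2):

```python
import numpy as np

# Gillespie simulation (sketch, ours) of {0 -> A, A -> 0, A+A -> A}, as used for
# the fig. 1 consistency check; A+A -> A fires at rate k3*n*(n-1) per eq. (2.2).
rng = np.random.default_rng(0)
k1, k2, k3, t_end, mu0 = 1.0, 1.0, 1.0, 2.0, 4.0 / 3.0

def final_count():
    n, tau = int(rng.poisson(mu0)), 0.0      # Poisson initial condition
    while True:
        a = np.array([k1, k2 * n, k3 * n * (n - 1)])   # reaction propensities
        tau += rng.exponential(1.0 / a.sum())          # waiting time to next event
        if tau > t_end:
            return n
        n += (+1, -1, -1)[rng.choice(3, p=a / a.sum())]

samples = [final_count() for _ in range(100_000)]
print("mean:", np.mean(samples), " std:", np.std(samples))  # mean near 0.8, cf. fig. 1
```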
With the master equation solution as ground truth in place, we compare the performance of different approximation methods in fig. 2. We plot the actual trajectories on the left and the relative deviation on the right. The latter is defined as \(\frac{\hat{\mu}(\tau)-\mu(\tau)}{\mu(\tau)}\) where \(\hat{\mu}\) is the estimator for the mean copy number as provided by the different methods and \(\mu\) is the true mean. The SBR method has the second best performance, followed by the (bare) bubble resummation (BR), which is obtained by simply solving eqs. (4.34) and (4.35) with the bare response given by eq. (2.44). The mean field approximation or mass action kinetics (MAK), obtained by integrating eq. (4.13), is stable but has an error an order of magnitude worse than SBR. The results of _normal moment closure_ are also plotted. This method takes all cumulants of the copy number distribution of higher than second order as zero and was implemented using the package [53]. This works well only for small times, after which its copy number dynamics diverges and can even enter the negative copy number regime, a problem well known for moment closure methods. We also compare the results to the Effective Mesoscopic Rate Equations (EMRE) [31] at unit volume, which for this system has the best performance. (The full power of SBR and its advantages over EMRE will become clear for reaction networks with multiple species, see section 5.3.) Finally, we also plot the results for a diagrammatic expansion to \(O(\alpha^{2})\) without resummation, thus keeping only the (bare) \(\alpha^{2}\) term in eq. (4.14). This gives poor performance because the bare response function used is not sensitive to relaxation effects from the \(A+A\to A\) reaction (see appendix A.2). Comparing with the BR method, we conclude for this example that the resummation of an infinite series of bubble diagrams is more important for accuracy than the self-consistent replacement of the response function.
Figure 2: Dynamics of the average copy number \(\hat{\mu}\) (left) as predicted by different approximation methods, and relative deviation (right) from the ground truth \(\mu\) (grey dashed line). The error of SBR is an order of magnitude lower than that of MAK. The simulation parameters are from the standard set given in section 4.7.

To understand the physical effects of \(k_{3}\) we plot in fig. 3 (left) the actual (ground truth) mean copy number dynamics at different values of \(k_{3}\) as a function of \(k_{3}\tau\). Note that as we increase \(k_{3}\), the dynamics becomes faster due to the higher reaction rate for the \(A+A\to A\) reaction. We therefore correspondingly decrease the total integration time \(t\) and the time step \(\Delta t\). The number of time steps (in our case \(B=1000\)) is then the same for each \(k_{3}\), as is \(k_{3}t\). In fig. 3 (right) we use this to compare the performance of the different methods as we change the binary interaction rate \(k_{3}\) relative to the baseline rates. This clearly demonstrates that SBR not only works for large values of \(k_{3}\) but outperforms other common methods over a large range of \(k_{3}\). This is because, even though we are starting from a perturbative expansion, the SBR method includes many non-perturbative effects, both via the resummation and via the self-consistent response function replacement. EMRE has a slightly better performance at large \(k_{3}\) values, while SBR is much better at smaller \(k_{3}\). The performance metric shown is the time-averaged relative deviation in mean copy number,
\[\epsilon=\frac{1}{B}\sum_{\tau}\left|\frac{\tilde{\mu}(\tau)-\mu(\tau)}{\mu( \tau)}\right| \tag{4.43}\]
where \(B\) is the total number of time steps. As in our previous comparison at fixed \(k_{3}\), the bare bubble resummation has the second lowest error behind SBR, and both outperform mass action kinetics and normal moment closure. For small \(k_{3}\) the \(A+A\to A\) reaction is the slowest process and the steady state is reached within our time window. For large \(k_{3}\), on the other hand, the decay of the mean copy number becomes largely independent of \(k_{3}\) when plotted against \(k_{3}\tau\): the fast \(A+A\to A\) reaction dominates here rather than being a small perturbation, so it is encouraging that SBR still performs well. Note that outside the window shown there is then a slow (in relative terms) approach to the steady state determined primarily by the baseline creation and destruction rates \(k_{1}\) and \(k_{2}\) (see appendix A.3).
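In code the metric is simply (sketch, ours):

```python
import numpy as np

def epsilon(mu_hat, mu):
    """Time-averaged relative deviation, eq. (4.43); arrays on the same time grid."""
    return np.mean(np.abs((mu_hat - mu) / mu))
```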
Figure 3: Average copy number dynamics from the master equation \(\mu\) (left) and time-averaged absolute relative deviation \(\epsilon\), eq. (4.43), of different approximation methods (right) for different binary interaction rates \(k_{3}\). To measure errors systematically as we increase \(k_{3}\), we decrease the total integration time \(t\) and time step \(\Delta t\) by the same factor, keeping the number of time steps and \(k_{3}t\) the same for each copy number dynamics time course. From the error measurement in the right plot we see that the SBR method outperforms other common methods over a range of five orders of magnitude of \(k_{3}\). It also outperforms EMRE at small \(k_{3}\) and has comparable performance at large \(k_{3}\) values.

Moving on to the second order statistics, in fig. 4 (left) we plot the copy number variance, or equivalently the equal-time connected correlator \(\langle\delta n(\tau)\delta n(\tau)\rangle\), as a function of time \(\tau\) for the ground truth (see appendix A.8), MAK, SBR and the Linear Noise Approximation (LNA) [28, 29] at unit volume. Both SBR and MAK assign to the correlation functions the bare value \(C_{0}\equiv 0\), so from eq. (2.28) predict a Poissonian variance \(\langle\delta n(\tau)\delta n(\tau)\rangle=\mu(\tau)\). In the ground truth one finds a smaller variance, i.e. anti-Poisson effects. This leads to MAK coincidentally predicting a variance closer to the ground truth (for \(\tau>0\)) than SBR, by a cancellation of two errors: on the one hand it underestimates the mean copy number, and on the other hand it ignores the anti-Poisson effects and so overestimates the variance. The LNA makes the best prediction for the equal-time connected correlator (i.e. the variance), but we will see in the comparison to SBR that this is no longer the case for the two-time correlator, see fig. 6.

Figure 4: The equal time connected number correlator, i.e. copy number variance (left), and the copy numbers plus standard deviation error bars at three time points in the dynamics (right; error bars are scaled by 0.5 for better visibility of the \(y\)-scale), for the master equation ground truth and the SBR and MAK approximations as shown in the legend. On the left we additionally plot the Linear Noise Approximation (LNA) result. The standard parameter set is used. Both the SBR and MAK approximations work with the bare correlation functions \(C_{0}\equiv 0\) and so predict a Poissonian variance \(\langle\delta n(\tau)\delta n(\tau)\rangle=\hat{\mu}(\tau)\).
The SBR estimates for the mean copy numbers are more accurate so its predictions for the copy number error bars (plotted as mean \(\pm\) half standard deviation) overlap well with the ground truth as shown in fig. 4(right). There are deviations, which can be traced back to the fact that the true correlation functions \(C(\tau,\tau)\) do not vanish, but we observe that these are quantitatively moderate. To avoid these deviations one could calculate the correlation functions in perturbation theory and include them in the variance calculation, as we briefly discuss in section 6.
Now we look at the two-time quantities. We start with the response function and plot \(R(\tau,\tau^{\prime})\) in fig. 5 for fixed values of the time \(\tau^{\prime}\) at which the perturbation is created, as a function of the lag time \(\tau-\tau^{\prime}\) between perturbation and response. Since we are running our dynamics until the final time \(t\), we can only calculate the response up to \(\tau-\tau^{\prime}=t-\tau^{\prime}\). From left to right, the response functions from the master equation (appendix A.8) and from the MAK and SBR methods are plotted. In the response from the master equation, the curves for different \(\tau^{\prime}\) do not overlap, consistent with the fact that we have transient dynamics that breaks time translation invariance: it matters when in absolute terms a perturbation is created, not just how long ago. SBR is able to capture this effect but MAK, which uses the bare response function and so is time translation invariant, does not.

Figure 5: Response functions \(R(\tau,\tau^{\prime})\) obtained from the master equation (left) and the MAK (centre) and SBR (right) approximations, for the standard parameter set. The response functions are plotted against time lag \(\tau-\tau^{\prime}\) at several fixed values of the time \(\tau^{\prime}\) where the perturbation is applied. The different curves overlap for MAK because it predicts a time translation invariant response. The true response (left) is not invariant under time translation and SBR (right) correctly captures this effect.

Figure 6: Response functions \(R(\tau,\tau^{\prime})\) and two-time connected number correlator \(\langle\delta n(\tau)\delta n(\tau^{\prime})\rangle\) (normalized by the equal-time connected correlator \(\langle\delta n(\tau^{\prime})\delta n(\tau^{\prime})\rangle\)) as obtained from the master equation, the MAK, SBR and LNA methods, for the standard parameter set. The response functions are plotted as a function of the lag time \(\tau-\tau^{\prime}\) between perturbation and response, at fixed values of the perturbation time \(\tau^{\prime}=1.6,1.2,0.8,0.4\) (left to right). MAK, SBR and LNA predict that the response and the normalized correlator are identical to each other, while in the true dynamics the two quantities are slightly different (solid and dashed grey lines). The SBR method very closely reproduces the true response function and correlator over the whole range of \(\tau\) and \(\tau^{\prime}\), while the exponential MAK response decays too slowly and the LNA deviates in the other direction.
From the response function we can calculate the two-time connected number correlator \(\langle\delta n(\tau)\delta n(\tau^{\prime})\rangle\) using eq. (2.31). Both SBR and MAK replace the field correlation function \(C\) by its bare counterpart \(C_{0}\). This is zero, giving \(\langle\delta n(\tau)\delta n(\tau^{\prime})\rangle/\langle\delta n(\tau^{\prime})\delta n(\tau^{\prime})\rangle=R(\tau,\tau^{\prime})\). In fig. 6 we plot the response and normalized correlator again as a function of the time lag, but now comparing the different methods at each fixed perturbation time. The procedure for calculating the correlator from the master equation is explained in appendix A.8. The resulting true copy number correlator and response are slightly different due to non-Poissonian effects. SBR ignores these but nonetheless captures the true response and correlation very closely, demonstrating that the approximation of keeping only the bare field correlation \(C_{0}\) is rather accurate. MAK, on the other hand, deviates substantially, giving decays in time that are too slow. The LNA also predicts identical response and correlation functions in the case of a single reacting chemical species, and underestimates these quantities when compared to their true values from the master equation.
### Inverse reaction volume expansion
The propensity function in eq. (2.2) uses rates \(k_{\beta}\) of physical dimension of inverse time, \(1/t\), as so far we have worked with absolute molecule numbers. In chemical reaction kinetics when molecule numbers are large one typically works with concentrations, i.e. number densities. We now translate our formalism into this framework and make contact with earlier work on expansions using \(1/V\) as a small parameter, where \(V\) is the reaction volume. One such work by Thomas et al. [33] also used diagrammatic perturbation theory, expanding in orders of \(1/V\) around the (time translation invariant) steady state predicted by mass action kinetics.
Starting with the binary reaction \(A+B\to C\) as an example, the mass action description in terms of densities \(\rho=n/V\) would be \(\partial_{\tau}\rho_{A}=-j_{\beta}\rho_{A}\rho_{B}\). Converting to an equation for the change in particle number \(n=\rho V\) we get \(\partial_{\tau}n_{A}=-\frac{j_{\beta}}{V}n_{A}n_{B}\). In the general case one would accordingly write the propensity function as originally defined in eq. (2.2) as
\[f_{\beta}(\mathbf{n})=j_{\beta}V\prod_{i}\frac{n_{i}!}{(n_{i}-r_{i}^{\beta})!\,V^{r_{i}^{\beta}}} \tag{4.44}\]
with the rate constant \(j_{\beta}\) having dimension \(V^{\sum_{i}r_{i}^{\beta}-1}/t\). This is again easiest to understand from examples:
1. For a creation reaction such as \(\emptyset\to A\), \(f_{\beta}(\mathbf{n})=j_{\beta}V\)
2. For a unary destruction reaction such as \(A\to\emptyset\), \(f_{\beta}(\mathbf{n})=j_{\beta}n_{A}\)
3. For a binary reaction such as \(A+B\to C\), \(f_{\beta}(\mathbf{n})=\frac{j_{\beta}}{V}n_{A}n_{B}\)
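These three cases can be checked numerically; a minimal Python sketch of eq. (4.44) (the species/stoichiometry encoding and the values of \(j_{\beta}\), \(V\) and the copy numbers are our own, purely illustrative choices):

```python
import math

def propensity(j_beta, V, n, r):
    # f_beta(n) = j_beta * V * prod_i n_i! / ((n_i - r_i)! * V^{r_i}), eq. (4.44)
    f = j_beta * V
    for species, ri in r.items():
        ni = n.get(species, 0)
        if ni < ri:
            return 0.0  # not enough reactant molecules present
        f *= math.factorial(ni) / (math.factorial(ni - ri) * V**ri)
    return f

n, V = {"A": 10, "B": 7}, 2.0
print(propensity(1.0, V, n, {}))                # 0 -> A:   j*V          =  2.0
print(propensity(1.0, V, n, {"A": 1}))          # A -> 0:   j*n_A        = 10.0
print(propensity(1.0, V, n, {"A": 1, "B": 1}))  # A+B -> C: j*n_A*n_B/V  = 35.0
```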
With this change in notation to standard rate constants \(j_{\beta}\), the MAK equation for \(\rho\) in the \(A+A\to A\) system together with the baseline creation and destruction reactions is
\[\partial_{\tau}\rho=j_{1}-j_{2}\rho-j_{3}\rho^{2} \tag{4.45}\]
with \(j_{1}=k_{1}/V\), \(j_{2}=k_{2}\) and \(j_{3}=k_{3}V\). On the r.h.s. we have here the terms up to \(O(\alpha)\) from our expansion but we do not write powers of \(\alpha\) explicitly. The next correction, of \(O(\alpha^{2})\), was of the form \((-\alpha k_{3})^{2}\int_{0}^{\tau}d\tau^{\prime}R_{0}^{2}(\tau,\tau^{\prime}) \mu^{2}(\tau^{\prime})\), giving in terms of density
\[\partial_{\tau}\rho=j_{1}-j_{2}\rho-j_{3}\rho^{2}+\frac{2j_{3}^{2}}{V}\int_{0 }^{\tau}d\tau^{\prime}R^{2}(\tau,\tau^{\prime})\rho^{2}(\tau^{\prime}) \tag{4.46}\]
Viewed as the beginning of an expansion in terms of \(1/V\), the correction term is \(O(1/V)\) because the response function is of order unity. All further corrections, like the two-bubble diagram in eq. (4.9), have additional \(V^{-1}\) factors and thus give smaller corrections for large \(V\). One could then be tempted to stop after the first correction in the equation for \(\mu\), or equivalently the density \(\rho\), which is the one-loop diagram in eq. (4.9). However, as we have seen, that correction only works for small values of \(k_{3}=j_{3}/V\), i.e. large volumes; the SBR approach can be viewed as "dressing" the leading order correction by including all delayed two particle interactions. The final form of the SBR, eqs. (4.36) and (4.38), is then stable even for large values of \(k_{3}\). Notice that the SBR method is not a straightforward expansion in \(1/V\) as there are diagrams that we have neglected that are of the same order as some that we have considered. For example, the two-loop diagram that is included in SBR, of the form \(k_{3}^{3}\mu^{2}\) in eq. (4.29), makes a contribution of order \(V^{-2}\) to the equation for the density. Diagrams that we have neglected, such as the first diagram of eq. (4.40), of the form \(k_{3}^{3}\mu^{3}\), are also of order \(V^{-2}\). Thus SBR does include corrections of all orders in \(V^{-1}\), but is not a systematic way to capture all corrections at each order.
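To make the structure of eq. (4.46) concrete, the sketch below integrates it with forward Euler, approximating the response by its bare exponential form \(R_{0}(\tau,\tau^{\prime})=e^{-j_{2}(\tau-\tau^{\prime})}\); this substitution, the initial condition, and the parameter values are assumptions made purely for illustration:

```python
import numpy as np

# Forward-Euler integration of eq. (4.46), with the response approximated by
# the bare form R0(t, t') = exp(-j2*(t - t')) (illustration only).
j1, j2, j3, V = 1.0, 1.0, 1.0, 10.0
dt, T = 0.01, 4.0
steps = int(T / dt)
rho = np.zeros(steps + 1)
rho[0] = 1.0  # assumed initial density

for s in range(steps):
    t = s * dt
    taus = np.arange(s + 1) * dt
    R0 = np.exp(-j2 * (t - taus))  # of order unity, as noted in the text
    memory = (2.0 * j3**2 / V) * np.trapz(R0**2 * rho[:s + 1]**2, dx=dt)
    rho[s + 1] = rho[s] + dt * (j1 - j2 * rho[s] - j3 * rho[s]**2 + memory)

print(rho[-1])  # density at t = T, including the O(1/V) memory correction
```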
We note finally that there are subtleties regarding the interpretation of the Doi-Peliti path integral in terms of density fluctuations [54]. The \(\phi\) field cannot be directly interpreted as a density. Nonetheless, performing a Cole-Hopf transformation on the \(\phi,\tilde{\phi}\) in the action and terminating the expansion of the transformed action at \(O(1/V)\) leads to a Langevin equation for the density [47] that matches the system size expansion result by van Kampen [28]. In our approach, on the other hand, we calculate \(\mu=\langle\phi\rangle\) directly and the mean density can be obtained afterwards simply by dividing by system volume \(V\).
### Adding a back reaction: \(A+A\rightleftharpoons A\)
So far we have illustrated our approach with a system with baseline particle creation and destruction as well as coagulation \(A+A\to A\). To generalize, we now add the back reaction of particle branching, \(A\stackrel{{ k_{3}^{\prime}}}{{\rightarrow}}A+A\). We then have the following additional vertices in the interacting action:
\[S_{\rm int}=\cdots+\Delta t\sum_{\tau}k_{3}^{\prime}\,\phi(\tau_{-})\left[\tilde{\phi}(\tau)+\tilde{\phi}^{2}(\tau)\right] \tag{4.47}\]

[The diagrams originally drawn in eqs. (4.47)–(4.52) are not recoverable from the source. Eq. (4.47), reconstructed algebraically above following the Doi-shift convention of eq. (5.1), contains the two additional vertices generated by the branching reaction: a two-legged \(\tilde{\phi}\phi\) vertex and a three-legged \(\tilde{\phi}\tilde{\phi}\phi\) vertex, both with rate \(k_{3}^{\prime}\). Eq. (4.48) collects the corresponding additional bubble diagrams for \(\Omega^{1,2}\), which are resummed in the intervening equations.]
For the physical self-energy \(\Sigma^{*}\) we use the same approximation, i.e. we add to eq. (4.38) the sum of the diagrams in eq. (4.48) with \(R_{0}\) replaced by \(R\), giving
\[\Sigma^{*}(\tau,\tau^{\prime})=(-2\alpha k_{3}\mu(\tau^{\prime})+\alpha k_{3}^{ \prime})\left(\delta(\tau-\tau^{\prime})+2\alpha k_{3}R^{2}(\tau,\tau^{\prime })\right)^{-1} \tag{4.53}\]
From this self-energy the physical dressed response is then determined by eq. (4.39).
Note that even with the self-consistency, there are some new \(\Omega^{1,m\geq 2}\) contributions, made from the three-legged vertex in eq. (4.47) and other vertices of eq. (4.3), that are not accounted for within SBR.
We now consider the above reaction system for the parameters \(k_{1}=1\), \(k_{2}=1\), \(k_{3}=1\), \(k_{3}^{\prime}=1\), with time step \(\Delta t=0.004\), total integration time \(t=4\) and \(\mu(0)=4/3\). As in section 4.7 we plot in fig. 7 the mean copy number dynamics as obtained from different approximation methods, and their respective relative deviations from the solution of the master equation. SBR again outperforms the other methods including MAK; normal moment closure diverges, and BR without self-consistency is second best to SBR. EMRE (at unit volume) performs similarly to SBR. These trends are confirmed in fig. 8 where we again change the perturbation parameter \(k_{3}\), keeping \(k_{3}=k_{3}^{\prime}\), across four orders of magnitude, and plot the time-averaged absolute relative deviation of the different approximation methods, as defined in eq. (4.43). Fig. 8 (left) shows the corresponding exact mean copy number dynamics as calculated from the master equation. Times and time ranges have been scaled as in fig. 3 and, as there, SBR and BR still perform well for large \(k_{3}\) and \(k_{3}^{\prime}\), where the coagulation and branching reactions are dominant rather than being small perturbations.
Figure 8: System with added branching reaction \(A\to A+A\) for a range of coagulation and branching rates \(k_{3}=k_{3}^{\prime}\). Mean copy number dynamics from the master equation (left) and time-averaged absolute relative deviation as defined in eq. (4.43) of different approximation methods (right). Times and time ranges are scaled as in fig. 3. The bubble resummed approaches (SBR and BR) perform similarly to EMRE and outperform all other methods over a large range of \(k_{3}\) and \(k_{3}^{\prime}\).
Figure 7: Dynamics of mean copy numbers \(\mu\), \(\hat{\mu}\) for the system with added branching reaction \(A\to A+A\) (left); relative deviation (right) of different approximation methods \(\hat{\mu}\) from the master equation solution \(\mu\) (grey dashed line). SBR has a significantly lower error in the transient compared to the other methods. The normal moment closure method very quickly becomes unstable and predicts negative copy numbers. System parameters are given in section 4.9.
## 5 Multiple species interactions: \(A+B\to C\)
Having illustrated our approach so far for systems with a single molecular species, we now extend our considerations to multiple species. Unary reactions with conversion of one molecule to another, like \(A\to B\), will only produce two-legged vertices with one \(\tilde{\phi}\) leg and one \(\phi\) leg, like the first vertex of eq. (4.47). These cannot appear in any vertex functions \(\Gamma^{1,m}\) with \(m\geq 2\) and just give rise to mean-field (\(O(\alpha)\)) corrections that are straightforward to include. Binary reactions where two different molecules react, like \(A+B\to A\), \(A+B\to C\) or \(A+B\to C+D\), require additional analysis that we will now present.
Focussing on the example \(A+B\to C\) from now on, the interacting action as obtained from eq. (2.11) after applying the Doi-Shift is
\[\begin{split} S_{\text{int}}=\Delta t\sum_{\tau=\Delta t}^{t} \Big{\{}k_{1A}\tilde{\phi}_{A}(\tau)+k_{1B}\tilde{\phi}_{B}(\tau)+k_{1C}\tilde {\phi}_{C}(\tau)+\\ \phi_{A}(\tau_{-})\phi_{B}(\tau_{-})\Big{[}k_{3}\tilde{\phi}_{C}( \tau)-k_{3}\tilde{\phi}_{A}(\tau)-k_{3}\tilde{\phi}_{B}(\tau)-k_{3}\tilde{\phi }_{A}(\tau)\tilde{\phi}_{B}(\tau)\Big{]}\Big{\}}\end{split} \tag{5.1}\]
where the first line contains the particle creation reactions with rates \(k_{1A}\), \(k_{1B}\) and \(k_{1C}\), respectively, and the second line the reaction \(A+B\to C\) with rate \(k_{3}\). Note that as in section 4 we treat the initial condition implicitly by considering a creation rate \(k_{1i}(\tau)=k_{1i}+\bar{n}_{0i}\delta(\tau)\). Diagrammatically we can represent the interacting action similarly to the previous single species case, but now using different colours to represent the field legs for different species, specifically \(A\to\) blue, \(B\to\) green, \(C\to\) red:
\[S_{\text{int}}=\Delta t\sum_{\tau}\Big(\text{[diagrammatic representation of the vertices in eq. (5.1); not recoverable from the source]}\Big) \tag{5.2}\]

[Eqs. (5.3)–(5.5), diagrammatic expansions of the vertex functions for species \(A\) and \(B\) (including the one-loop diagram referenced below), are not recoverable from the source.]

\[\Gamma^{1,2}_{C,AB}=\text{[diagrams not recoverable from the source]} \tag{5.6}\]
Using these we can construct the elements of \(\Omega^{1,2}\). For example \(\Omega^{1,2}_{A,AB}\) is to leading order
\[\Omega^{1,2}_{A,AB}(\tau)\Big{|}_{\alpha}=-\alpha k_{3}\mu_{A}(\tau_{-})\mu_{B}( \tau_{-}) \tag{5.7}\]
and \(\Omega^{1,2}_{B,AB}\), \(\Omega^{1,2}_{C,AB}\) are obtained analogously. Neglecting \(\Omega^{1,m}\) with \(m\geq 3\), we have the mean copy number equation of motion
\[(\partial_{\tau}+k_{2i})\mu_{i}(\tau)=\alpha k_{1i}+\Omega^{1,2}_{i,AB}(\tau) \tag{5.8}\]
for each species \(i=A,B,C\). To \(O(\alpha)\) this gives the MAK equations as expected:
\[\partial_{\tau}\mu_{A}(\tau) =\alpha k_{1A}-k_{2A}\mu_{A}(\tau)-\alpha k_{3}\mu_{A}(\tau)\mu_{ B}(\tau) \tag{5.9}\] \[\partial_{\tau}\mu_{B}(\tau) =\alpha k_{1B}-k_{2B}\mu_{B}(\tau)-\alpha k_{3}\mu_{A}(\tau)\mu_ {B}(\tau)\] \[\partial_{\tau}\mu_{C}(\tau) =\alpha k_{1C}-k_{2C}\mu_{C}(\tau)+\alpha k_{3}\mu_{A}(\tau)\mu_ {B}(\tau)\]
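As a baseline for the corrections discussed next, the MAK equations (5.9) can be integrated directly; a minimal forward-Euler sketch (parameter values anticipate the standard set of section 5.3, with \(\alpha=1\); the integration scheme is our choice):

```python
import numpy as np

k1 = np.array([4.0, 4.0, 3.0])  # creation rates for A, B, C
k2 = np.array([3.0, 2.0, 3.0])  # destruction rates
k3 = 1.0                        # rate of A+B -> C
dt, steps = 0.001, 1000
mu = k1 / k2                    # initial means mu_i(0) = k_{1i}/k_{2i}

for _ in range(steps):
    flux = k3 * mu[0] * mu[1]   # A+B -> C reaction flux
    mu = mu + dt * (k1 - k2 * mu + np.array([-flux, -flux, flux]))

print(mu)  # MAK means at t = 1 (the dash-dotted curves in fig. 9)
```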
Looking then at the first non-trivial corrections, of \(O(\alpha^{2})\), we have from the one loop diagram of eq. (5.4)
\[\Omega^{1,2}_{A,AB}(\tau)\Big{|}_{\alpha^{2}}=(-\alpha k_{3})^{2}\Delta t\sum _{\tau^{\prime}}R_{0A}(\tau_{-},\tau^{\prime})R_{0B}(\tau_{-},\tau^{\prime}) \mu_{A}(\tau^{\prime}_{-})\mu_{B}(\tau^{\prime}_{-}) \tag{5.10}\]
Together with the analogous results for other species this yields the continuous time equations for the means to \(O(\alpha^{2})\) as
\[\partial_{\tau}\mu_{A}(\tau) =\alpha k_{1A}-k_{2A}\mu_{A}(\tau)-\alpha k_{3}\mu_{A}(\tau)\mu_{ B}(\tau)+(-\alpha k_{3})^{2}\int_{0}^{\tau}d\tau^{\prime}R_{0A}(\tau,\tau^{ \prime})R_{0B}(\tau,\tau^{\prime})\mu_{A}(\tau^{\prime})\mu_{B}(\tau^{\prime}) \tag{5.11}\] \[\partial_{\tau}\mu_{B}(\tau) =\alpha k_{1B}-k_{2B}\mu_{B}(\tau)-\alpha k_{3}\mu_{A}(\tau)\mu_ {B}(\tau)+(-\alpha k_{3})^{2}\int_{0}^{\tau}d\tau^{\prime}R_{0A}(\tau,\tau^{ \prime})R_{0B}(\tau,\tau^{\prime})\mu_{A}(\tau^{\prime})\mu_{B}(\tau^{\prime})\] \[\partial_{\tau}\mu_{C}(\tau) =\alpha k_{1C}-k_{2C}\mu_{C}(\tau)+\alpha k_{3}\mu_{A}(\tau)\mu_ {B}(\tau)-(\alpha k_{3})^{2}\int_{0}^{\tau}d\tau^{\prime}R_{0A}(\tau,\tau^{ \prime})R_{0B}(\tau,\tau^{\prime})\mu_{A}(\tau^{\prime})\mu_{B}(\tau^{\prime})\]
Following our approach for the single species \(A+A\to A\) system, we will not explicitly include \(\Omega^{1,m}\) terms with \(m\geq 3\) and concentrate instead on the bubble series and its resummation.
### Self-consistent response function approximation
#### 5.2.1 Using single species response functions
The above expansion to \(O(\alpha^{2})\) can be improved following our previous strategy: we sum an infinite series of bubble diagrams, and we self-consistently replace the bare responses \(R_{0i}\) by the physical dressed responses \(R_{i}\). For \(\Omega^{1,2}_{A,AB}\) this means
\[\Omega^{1,2}_{A,AB}=\text{[geometric bubble series with dressed responses; diagrams not recoverable from the source]} \tag{5.12}\]
and summing the geometric series gives
\[\Omega^{1,2}_{A,AB}(\tau)=(-\alpha k_{3})\int_{0}^{\tau}d\tau^{\prime}\left( \delta(\tau-\tau^{\prime})+\alpha k_{3}R_{A}(\tau,\tau^{\prime})R_{B}(\tau, \tau^{\prime})\right)^{-1}\mu_{A}(\tau^{\prime})\mu_{B}(\tau^{\prime}) \tag{5.13}\]
The equation of motion for the mean copy number \(\mu_{A}\) of species \(A\) is then obtained by inserting this into eq. (5.8), with analogous expressions for the other species.
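Numerically, the kernel inverse in eq. (5.13) reduces to a lower-triangular linear solve on the time grid, since \(\delta(\tau-\tau^{\prime})\to\delta_{ss^{\prime}}/\Delta t\). The sketch below illustrates this; the discretization conventions, the exponential placeholder responses and the constant placeholder means are our assumptions, and in the full method the dressed responses and means are determined self-consistently as described next:

```python
import numpy as np

dt, steps, alpha_k3 = 0.001, 1000, 1.0
t = np.arange(steps) * dt
lag = t[:, None] - t[None, :]
causal = (lag >= 0).astype(float)
# Placeholder responses for illustration; the method supplies dressed ones.
RA = causal * np.exp(-3.0 * np.clip(lag, 0.0, None))
RB = causal * np.exp(-2.0 * np.clip(lag, 0.0, None))

muA = np.full(steps, 4.0 / 3.0)  # placeholder mean trajectories
muB = np.full(steps, 2.0)

# Discrete kernel: [delta(t-t') + alpha*k3*RA*RB]^{-1} acting on muA*muB
# becomes solving (I + dt*alpha_k3*RA*RB) x = muA*muB (lower triangular).
K = np.eye(steps) + dt * alpha_k3 * RA * RB
omega = -alpha_k3 * np.linalg.solve(K, muA * muB)  # Omega^{1,2}_{A,AB}(tau_s)
print(omega[:3])
```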
The dressed response functions at the nonzero, physical means are determined as before from the Feynman-Dyson equation using the self-energy \(\Sigma^{*}_{ij}\). This now also carries two species indices. Since we are considering only single species response functions, we only use \(\Sigma^{*}_{ij}\) with \(i=j\) and expand
\[R_{i}=R_{0i}+R_{0i}\Sigma^{*}_{ii}R_{0i}+R_{0i}\Sigma^{*}_{ii}R_{0i}\Sigma^{*}_{ ii}R_{0i}+\ldots \tag{5.14}\]
which yields in continuous time
\[\partial_{\tau}R_{i}(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime})-k_{2i}R_{i }(\tau,\tau^{\prime})+\int d\tau^{\prime\prime}\,\Sigma^{*}_{ii}(\tau,\tau^{ \prime\prime})R_{i}(\tau^{\prime\prime},\tau^{\prime}) \tag{5.15}\]
If we then express e.g. \(\Sigma^{*}_{AA}\) using again self-consistently replaced dressed responses \(R\) we obtain to \(O(\alpha^{3})\),
\[\Sigma^{*}_{AA}=\text{[diagrammatic series to }O(\alpha^{3})\text{; diagrams not recoverable from the source, term-by-term values in eq. (5.17)]} \tag{5.16}\]
Note that because in the expansion eq. (5.14) we have only included response function factors for the _same_ species \(i\), we have to include here diagrams like the second and fourth that contain single response function links for _other_ species. In other words, in the present approach a 1PI diagram for \(\Sigma^{*}_{ii}\) has to be defined as one that cannot be split in two by cutting a response line of the same species \(i\). The values of the first four diagrams of the above series term by term are
\[\Sigma^{*}_{AA}(\tau,\tau^{\prime}) =-\frac{\delta_{\tau-\tau^{\prime}}}{\Delta t}\alpha k_{3}\mu_{B} (\tau^{\prime})+(-\alpha k_{3})^{2}\mu_{A}(\tau_{-})R_{B}(\tau_{-},\tau^{ \prime}_{+})\mu_{B}(\tau^{\prime})+(-\alpha k_{3})^{2}R_{A}(\tau_{-},\tau^{ \prime}_{+})R_{B}(\tau_{-},\tau^{\prime}_{+})\mu_{B}(\tau^{\prime})\] \[+(-\alpha k_{3})^{3}\sum_{\tau^{\prime\prime}}\mu_{A}(\tau_{-})R_ {B}(\tau_{-},\tau^{\prime\prime})\mu_{A}(\tau^{\prime\prime}_{-})R_{B}(\tau^{ \prime\prime}_{-},\tau^{\prime}_{+})\mu_{B}(\tau^{\prime})+\ldots \tag{5.17}\]
Following the approach in section 4.5 this series can again be summed to all orders to obtain
\[\Sigma^{*}_{AA}(\tau,\tau^{\prime})=(-\alpha k_{3})\left[\delta(\tau-\tau^{ \prime})+\alpha k_{3}(\mu_{A}(\tau)R_{B}(\tau,\tau^{\prime})+R_{A}(\tau,\tau^{ \prime}))R_{B}(\tau,\tau^{\prime})\right]^{-1}\mu_{B}(\tau^{\prime}) \tag{5.18}\]
Together with the corresponding expressions for \(\Sigma^{*}_{BB}\) and \(\Sigma^{*}_{CC}\) we then have a closed system of self-consistent equations for the means \(\mu_{i}(\tau)\) and physical response functions \(R_{i}(\tau,\tau^{\prime})\) of all species \(i\). For e.g. species \(A\) one integrates eq. (5.8) and eq. (5.15) with \(\Omega\) and \(\Sigma\) given by eqs. (5.13) and (5.18). We call this approximation SBR-S where the additional S indicates that we are using single species responses only. Note that the \(\Omega^{1,2}\) contribution is again a memory term: it integrates past values of the product \(\mu_{A}\mu_{B}\), which is exactly the MAK term for the reaction \(A+B\to C\), weighted by a self-consistently determined memory function.
#### 5.2.2 Including mixed response functions
The SBR-S approximation can be improved further by including mixed response functions, \(R_{ij}\) for \(i\neq j\). The resulting diagrammatic series for \(\Omega^{1,2}_{A,AB}\) is, to \(O(\alpha^{3})\):
\[\Omega^{1,2}_{A,AB}=\text{[diagrammatic series including mixed-response bubbles; diagrams not recoverable from the source]} \tag{5.19}\]
Here the half blue and half green double lines represent the mixed dressed response functions \(R_{AB}\) and \(R_{BA}\), with the species indices being in the same order as the colours of the double lines. The first additional term compared to the treatment in the previous section occurs at \(O(\alpha^{2})\) and is
\[\Omega^{1,2}_{A,AB}(\tau)\Big{|}_{\alpha^{2}}=\dots+(-\alpha k_{3})^{2}\Delta t \sum_{\tau^{\prime}}R_{AB}(\tau_{-},\tau^{\prime})R_{BA}(\tau_{-},\tau^{\prime })\mu_{A}(\tau^{\prime}_{-})\mu_{B}(\tau^{\prime}_{-}) \tag{5.20}\]
As before we can resum all higher orders in \(\alpha\) to get, in continuous time,
\[\Omega^{1,2}_{A,AB}(\tau)=(-\alpha k_{3})\int_{0}^{\tau}d\tau^{\prime}\left\{ \delta(\tau-\tau^{\prime})+\alpha k_{3}\left[R_{AA}(\tau,\tau^{\prime})R_{BB} (\tau,\tau^{\prime})+R_{AB}(\tau,\tau^{\prime})R_{BA}(\tau,\tau^{\prime}) \right]\right\}^{-1}\mu_{A}(\tau^{\prime})\mu_{B}(\tau^{\prime}) \tag{5.21}\]
which has to be inserted into eq. (5.8) to get the final equation of motion for the mean copy number of species \(A\); the other species \(B,C\) can be treated in exactly the same manner.
To find the mixed response functions \(R_{AB}\) and \(R_{BA}\) that appear above, we now also need the general version of the Feynman-Dyson equation eq. (4.24). This is obtained by treating the species index analogously to the time indices, giving
\[(\partial_{\tau}+k_{2i})R_{ij}(\tau,\tau^{\prime})=\delta(\tau-\tau^{\prime}) \delta_{ij}+\int d\tau^{\prime\prime}\ \sum_{k}\Sigma^{*}_{ik}(\tau,\tau^{\prime\prime})R_{kj}(\tau^{\prime\prime}, \tau^{\prime}) \tag{5.22}\]
Looking at the diagrammatic expansion of the self-energies \(\Sigma^{*}_{ij}\) one realizes that, because no internal vertex has an incoming \(\phi_{C}\) leg (\(C\) is not consumed in any reaction, only produced), \(\Sigma_{AC}\), \(\Sigma_{BC}\) and \(\Sigma_{CC}\) are all zero. Of the remaining self-energy entries we only draw the diagrams for \(\Sigma^{*}_{AA}\) and \(\Sigma^{*}_{AB}\) below as the diagrams for \(\Sigma^{*}_{BA}\), \(\Sigma^{*}_{BB}\), \(\Sigma^{*}_{CA}\) and \(\Sigma^{*}_{CB}\) are analogous. To \(O(\alpha^{2})\) we have then
[The diagrammatic series for \(\Sigma^{*}_{AA}\) and \(\Sigma^{*}_{AB}\), eqs. (5.23)–(5.26), are not recoverable from the source (a page is missing at this point); their resummed closed forms are given below.]
These series can again be summed geometrically to all orders in \(\alpha\) as we did in section 4.5 to obtain
\[\Sigma^{*}_{AA}(\tau,\tau^{\prime}) =(-\alpha k_{3})\left\{\delta(\tau-\tau^{\prime})+\alpha k_{3}\left[ R_{AA}(\tau,\tau^{\prime})R_{BB}(\tau,\tau^{\prime})+R_{AB}(\tau,\tau^{\prime})R_{ BA}(\tau,\tau^{\prime})\right]\right\}^{-1}\mu_{B}(\tau^{\prime}) \tag{5.27}\] \[\Sigma^{*}_{AB}(\tau,\tau^{\prime}) =(-\alpha k_{3})\left\{\delta(\tau-\tau^{\prime})+\alpha k_{3} \left[R_{AA}(\tau,\tau^{\prime})R_{BB}(\tau,\tau^{\prime})+R_{AB}(\tau,\tau^{ \prime})R_{BA}(\tau,\tau^{\prime})\right]\right\}^{-1}\mu_{A}(\tau^{\prime}) \tag{5.28}\]
The resulting overall system of equations for copy number means \(\mu_{i}\) and responses \(R_{ij}\) defines the SBR-M approximation, where M indicates the inclusion of mixed responses. For e.g. species \(A\), one integrates eq. (5.8) and eq. (5.22) with \(\Omega\) and \(\Sigma\) given by eqs. (5.21), (5.27) and (5.28).
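One possible way to organize the resulting computation is an outer self-consistency loop; the scheme below is our own sketch (the derivation specifies the equations, not this particular iteration), with `step_mu` and `step_R` standing for time integrators of eqs. (5.8) and (5.22):

```python
import numpy as np

def sbr_m_solve(step_mu, step_R, mu0, R0, n_iter=50, tol=1e-8):
    """Iterate means and responses to self-consistency (illustrative scheme).
    step_R should build Sigma*_ij from (mu, R) via eqs. (5.27)-(5.28) and
    integrate eq. (5.22); step_mu should integrate eq. (5.8) with Omega
    from eq. (5.21)."""
    mu, R = mu0, R0
    for _ in range(n_iter):
        R_new = step_R(mu, R)
        mu_new = step_mu(mu, R_new)
        change = max(np.max(np.abs(mu_new - mu)), np.max(np.abs(R_new - R)))
        mu, R = mu_new, R_new
        if change < tol:
            break
    return mu, R
```

Because all kernels are causal (lower triangular in time), a single forward pass over the time grid is an alternative to global iteration; the loop above is simply the most transparent way to state the fixed-point structure.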
### Numerical results
For numerical tests of the above two approximation methods we again focus on the regime of small copy numbers. Recall that we are considering the \(A+B\to C\) reaction system, with baseline creation and destruction reactions with rates \(k_{1i}\) and \(k_{2i}\), respectively. Unless otherwise stated we use the standard parameter set \(k_{1A}=4\), \(k_{1B}=4\), \(k_{1C}=3\), \(k_{2A}=3\), \(k_{2B}=2\), \(k_{2C}=3\), \(k_{3}=1\), \(\alpha=1\), with time step \(\Delta t=0.001\) and total integration time \(t=1\). The initial condition is a product of independent Poisson distributions for each species, with means \(\mu_{i}(0)=k_{1i}/k_{2i}\) for \(i=A,B,C\).
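For reference, the master-equation means quoted below can also be estimated by stochastic simulation; a minimal Gillespie sketch for this system and parameter set (our own implementation, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
k1 = {"A": 4.0, "B": 4.0, "C": 3.0}  # creation rates
k2 = {"A": 3.0, "B": 2.0, "C": 3.0}  # destruction rates
k3 = 1.0                             # A+B -> C

def sample_path(t_max=1.0):
    n = {s: rng.poisson(k1[s] / k2[s]) for s in "ABC"}  # Poisson initial state
    t = 0.0
    while True:
        props = ([k1[s] for s in "ABC"] +           # creations
                 [k2[s] * n[s] for s in "ABC"] +    # destructions
                 [k3 * n["A"] * n["B"]])            # A+B -> C
        total = sum(props)
        t += rng.exponential(1.0 / total)
        if t > t_max:
            return n
        r = rng.choice(7, p=np.array(props) / total)
        if r < 3:
            n["ABC"[r]] += 1
        elif r < 6:
            n["ABC"[r - 3]] -= 1
        else:
            n["A"] -= 1; n["B"] -= 1; n["C"] += 1

paths = [sample_path() for _ in range(2000)]
for s in "ABC":
    print(s, np.mean([p[s] for p in paths]))  # compare fig. 9 (left) at t = 1
```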
In fig. 9 (left) we plot the time courses of the mean copy numbers for the three species. As we start the system in what would be the steady state without the \(A+B\to C\) reaction, it makes sense that the copy numbers of \(A\) and \(B\) decrease in time, while those of \(C\) increase. All mean copy numbers are below 2 so we clearly are in the strongly fluctuating regime. We plot the time courses as obtained by numerical integration of the master equation (ME) (with an appropriately truncated state space) and by the MAK and SBR-M methods. The deviations of MAK from the true dynamics are evident, while the SBR-M predictions are essentially identical with the ground truth on the scale of the plot. For a more quantitative assessment we plot in fig. 9 (right) the absolute value of the relative deviations from the ground truth, averaged over the \(N=3\) species, i.e.
\[\epsilon_{N}(\tau)=\frac{1}{N}\sum_{i}\left|\frac{\hat{\mu}_{i}(\tau)-\mu_{i}( \tau)}{\mu_{i}(\tau)}\right| \tag{5.29}\]
SBR-M is seen to perform best, followed closely by SBR-S, i.e. SBR with only single species response functions, and then EMRE. Normal moment closure is stable for this system and is the next most accurate method, followed by BR; the latter is obtained by integrating eq. (5.8) with \(\Omega\) given by eq. (5.13) but using the bare response functions. If we only keep the \(O(\alpha^{2})\) term in the dynamics with bare propagators (integrating eq. (5.11)), we get the next best approximation.
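For completeness, the error measure of eq. (5.29) in code, assuming trajectories stored as arrays of shape (species, time):

```python
import numpy as np

def eps_N(mu_hat, mu):
    # eq. (5.29): absolute relative deviation, averaged over species
    return np.mean(np.abs((mu_hat - mu) / mu), axis=0)
```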
We next consider two-time quantities. In fig. 10 we show the single species response function \(R_{ii}(\tau,\tau^{\prime})\) for several fixed values of \(\tau^{\prime}\), the time at which the perturbation is applied, against time lag \(\tau-\tau^{\prime}\) between
Figure 9: Dynamics of average copy numbers (left) for the three species \(A\) (blue), \(B\) (orange) and \(C\) (green). Results are shown for the numerically exact solution of the master equation (ME, solid translucent lines), MAK (dash-dotted lines) and SBR-M (dashed lines). Absolute relative deviation (right) from the master equation solution averaged over the \(N=3\) species, \(\epsilon_{N}(\tau)\) as defined in eq. (5.29). SBR-M and SBR-S have significantly lower errors than most other methods. Standard system parameters are used as given in section 5.3.
perturbation and response. The SBR-S method and the EMRE closely reproduce the true response functions calculated from the master equation, while MAK shows appreciable deviations. \(R_{CC}\) is the same for all methods because the concentration of species \(C\) does not appear in any propensity function; a temporary change in the creation rate of \(C\) therefore changes its concentration trivially, with the effect decaying exponentially due only to the baseline destruction reaction \(C\to\emptyset\).
With the more sophisticated SBR-M method we have access to all response functions, including the mixed responses. In fig. 11 we plot these in the same format as in fig. 10, comparing to the ground truth from the solution of the master equation. The single species responses (top row) are quite accurately predicted by SBR-M and LNA. These responses decay from an initial value of unity as expected. In the middle and bottom rows the mixed responses \(R_{ij},i\neq j\) are plotted. \(R_{BC}\) and \(R_{AC}\) are zero throughout the dynamics in SBR-M, LNA and the master equation solution because perturbations of species \(C\) affect neither \(A\) nor \(B\). In the middle row, \(R_{AB}\) and \(R_{BA}\) start from zero and become negative because increasing the creation rate of either \(A\) or \(B\) enables more \(A+B\to C\) reactions to take place, thus decreasing the concentration of the other species. In the bottom row \(R_{CA}\) and \(R_{CB}\) start from zero and then take positive values because increasing the creation rate of either \(A\) or \(B\)_increases_ the concentration of \(C\), again because more \(A+B\to C\) reactions take place. The non-trivial non-monotonic behaviour of these response functions is reproduced by SBR-M for the whole time range, while such cross-responses would be identically zero within e.g. MAK. Predictions for the mixed responses are possible only by including the corresponding mixed self-energies, and are stabilized by the bubble resummation procedure. This behaviour is also captured by the LNA, but deviates quantitatively from the true values at larger lag times.
Finally in fig. 12 we compare the performance of different methods as we change the perturbation parameter \(k_{3}\), the rate of the non-trivial reaction \(A+B\to C\), i.e. changing the interaction rate while keeping the
Figure 10: Single species response functions for the \(A+B\to C\) system with the standard parameter set: \(R_{AA}\) (top), \(R_{BB}\) (middle) and \(R_{CC}\) (bottom) as obtained from the master equation, and the MAK, LNA and SBR-S approximations. The response functions are plotted at fixed values of the second time argument \(\tau^{\prime}=0.8,0.6,0.4,0.2\) (left to right), as a function of the lag \(\tau-\tau^{\prime}\). SBR-S and LNA very closely reproduce the true response function as obtained from the master equation across the whole temporal range, while the exponentially decaying MAK response drops off too slowly. \(R_{CC}\) is the same for all methods because the dynamics of the mean copy number \(\langle n_{C}\rangle\) of \(C\) is not dependent on higher moments involving \(n_{C}\).
baseline rates constant. On the left we plot the numerically exact mean copy number time courses for the three species for different values of \(k_{3}\). For \(k_{3}=1\) we use total time \(t=2\) and time step \(\Delta t=0.002\); for other \(k_{3}\) we scale these as in fig. 3 keeping the number of time steps the same, and accordingly plot the results against \(k_{3}\tau\). On the right we plot the absolute relative error of the different methods from the master equation averaged over species and over time, i.e. \(\epsilon=\frac{1}{B}\sum_{\tau}\epsilon_{N}(\tau)\), as a function of \(k_{3}\); \(B=t/\Delta t=1000\) is the total number of time steps as before. We observe that the SBR-M and SBR-S methods outperform all others across four orders of magnitude in \(k_{3}\). In this multispecies case, it is worth noting that the SBR methods have a clear advantage over the EMRE, thus highlighting the power and accuracy of the SBR to capture the dynamics of multispecies reaction networks with binary reactions in the challenging regime of small copy numbers.
## 6 Conclusions and discussion
We have used Doi-Peliti field theory methods in this paper to construct accurate approximations for the dynamics of chemical reaction networks in the challenging regime of large fluctuations. This approach leads to equations of motion for the mean copy numbers that involve _memory_ to past (mean) copy numbers, and we determine this memory self-consistently via appropriate response functions.
Technically, we work with diagrammatic perturbation theory around a baseline that only has response (\(\phi\tilde{\phi}\)) lines in the bare propagator, while the bare field correlations (\(\phi\phi\)) are zero. By focussing on only means and response functions, we have managed to construct diagrams with a consistent flow of time. This significantly restricts the class of diagrams in the expansion for the \(n\)-point and vertex functions, and often the diagrams have simple physical interpretations. (One can of course also calculate the correlation
Figure 11: Analog of fig. 10 for single species and mixed response functions \(R\) as obtained from the numerically exact master equation (dashed lines), the LNA (dashed-dotted lines) and the SBR-M approximation (solid lines). Single species responses \(R_{AA}\), \(R_{BB}\) and \(R_{CC}\) are quite accurately reproduced by SBR-M and LNA (top). Mixed responses \(R_{AB}\), \(R_{BA}\) and \(R_{BC}\) (middle) and \(R_{CA}\), \(R_{CB}\) and \(R_{AC}\) (bottom) are all reproduced by SBR-M including their non-monotonic behaviour, while in simpler approximations like MAK all mixed responses would be predicted as identically zero. The LNA also reproduces the correct trends but has quantitative deviations at large time lags.
function in this formalism and it does not vanish in general, but it also does not change the dynamics of the mean nor of the response.) By self-consistently replacing the bare response functions with the physical dressed responses in the vertex functions, even fewer diagrams need to be considered. We finally approximate by summing up a carefully chosen infinite sub-series of the diagrams for vertex functions and self-energies, giving overall what we call the "self-consistent bubble resummation" method (SBR).
As an alternative we could have, in the diagrams, kept the bare (\(\phi\phi\)) correlator, which is zero, and then self-consistently replaced it by its non-zero dressed counterpart. But this approach, which would be more analogous to the traditional diagrammatic approach for MSRJD path integrals where the bare (\(\phi\phi\)) correlator is non-zero, would involve many more diagrams. These diagrams also would not have a consistent flow of time, would contain self-loops, and offer no obvious route to resummation.
For the binary reactions we consider, the SBR provides all \(O(\mu)\) and \(O(\mu^{2})\) terms in the equations of motion for the mean copy numbers, which can be interpreted as time-delayed analogues of the terms appearing in standard mass-action kinetics. Higher order corrections in \(\mu\) are included implicitly via the memory functions that weight these time-delayed terms.
We give explicitly the diagrammatic Feynman rules for a single-species example system but, as we demonstrate, these generalize to multi-species cases. In such cases one has a choice of whether or not to include (dressed, physical) mixed response functions between different species. If one does, then for \(N\) species one has to consider \(N^{2}\) response functions and their corresponding equations of motion, which for large \(N\) becomes computationally expensive. As an alternative we propose a method where only dressed single species response functions are considered; in our numerical examples this is slightly less accurate, but of course also computationally less expensive as only \(N\) response functions need to be tracked.
We numerically demonstrate the superior performance of our SBR method in calculating mean copy numbers compared to other methods, over a very large range of parameter values and over the whole time range we consider. We also calculate the two-time response and correlation functions. Within our SBR approximation these quantities can easily be estimated and in numerical tests also prove to be rather accurate. The fact that we neglect field-field correlators implicitly means that all copy number variances are taken as equal to the corresponding means, as would be the case for Poisson statistics. Our results suggest that this is a reasonable approximation to the actual underlying copy number distributions, but of course it remains an approximation nonetheless.
A key equation we derive and use extensively in this paper is eq. (3.22), which gives the corrections to the mean coming from the past, i.e. the memory effects. Of course the underlying microscopic dynamics
Figure 12: Time courses of mean copy numbers \(\mu_{A}\), \(\mu_{B}\) and \(\mu_{C}\) (left) for different values of the interaction rate \(k_{3}\) as calculated from the master equation, and absolute relative deviation of the predictions from different methods (right) averaged over the three species and over time plotted against \(k_{3}\). The total time and the time step have been rescaled analogously to fig. 3. Both our SBR-M and SBR-S methods outperform every other approach over the entire range of \(k_{3}\) shown.
we consider is Markovian, with the rate for each chemical reaction depending only on the current state of the system via the propensity function. Nonetheless the equations for copy number means and responses do contain memory terms, and this is a natural consequence of the fact that a representation in terms of means and responses is effectively a lower-dimensional _projection_ of the full microscopic dynamics of the system as specified by the joint distribution of the copy numbers for all species and times.
A number of interesting research perspectives arise from our work. Considering first numerical questions, the straightforwardly time-discretized implementation of the SBR method that we have used in this paper has a computational complexity (see appendix A.6) of \(O(B^{3})\) where \(B\) is the number of time steps. This can become expensive when long trajectories need to be simulated with small time steps, as could be the case when reaction rates in a system vary widely. More sophisticated techniques including adaptive time steps or integration order [51], or low rank compression [55] could be explored to address this.
We have also implicitly assumed that our reaction systems are well mixed rather than diffusion-limited, so that only overall copy numbers of each species affect the reaction rates. This assumption does not always hold, e.g. the binding and unbinding of transcription factors to DNA in gene regulation networks can be significantly influenced by the spatial arrangement of the DNA [56, 57]. It is, however, straightforward to include space in our analysis by considering a spatial grid of compartments, with molecules in different compartments treated effectively as different species. Diffusion is then just a unary conversion reaction from a species in one compartment to one in a neighbour compartment. This introduces terms quadratic in \(\tilde{\phi},\phi\) in the Hamiltonian and does not give rise to memory corrections. In the continuous time and space limit, the Hamiltonian simply acquires an additional term \(-D\tilde{\phi}\nabla^{2}\phi\), with \(D\) the diffusion coefficient and \(\phi,\tilde{\phi}\) now also dependent on position. Doi-Peliti field theory with diffusing particles preserves their discrete particle identities [58], i.e. the field theory maintains the particle nature of the degrees of freedom. This approach is therefore well suited to the study of chemical reactions especially at low copy numbers, because it does not require any prior spatial coarse graining.
Our formalism also allows us to consider time-dependent reaction rates such as \(k_{1}(t)\), \(k_{2}(t)\) or \(k_{3}(t)\). These modify the equations of motion in relatively simple ways, with corrections from integrals over the past then constructed with reaction rates at the corresponding times, see e.g. eqs. (A.40) and (A.45). This extension could be deployed in situations where the relaxation time scale of the system is comparable to that of any variation in the interactions or the time period of periodic driving, as in time-dependent branching processes [59]. It is also straightforward to treat non-Poisson initial conditions using our method. This will change the initial overlap from eq. (A.16) and introduce different \(t=0\) boundary terms in the action in eq. (2.19).
At the more technical level, it will be interesting to explore whether our approach can be used also for Martin-Siggia-Rose-Jansen-de Dominicis (MSRJD) [41, 42, 43] path integrals for classical interacting particle systems with Langevin dynamics. These share some similarities with the Doi-Peliti approach and a diagrammatic expansion can be constructed with only response functions if noise sources are treated as perturbations [60] so that approaches analogous to the ones developed here should be applicable. We also intend to investigate the connections between the formalism we have presented here and the two-particle irreducible (2PI) effective action known from field theory [61].
Finally, since our method predicts marginal Poisson distributions for all species, we have access to the entire time trajectory of the copy number distribution. It will be interesting to deploy this for _inference_, by extending it to approximations for the likelihood of a time series of observed copy numbers of molecular species, especially in biochemical reaction networks. This inference problem becomes more challenging in the not uncommon situation when not all species can be observed, and one then expects further memory effects from the projection onto the observable part of the reaction network [62, 63, 64, 65, 66, 67, 68]. |
2301.00650 | Hybrid Deep Reinforcement Learning and Planning for Safe and Comfortable
Automated Driving | We present a novel hybrid learning method, HyLEAR, for solving the
collision-free navigation problem for self-driving cars in POMDPs. HyLEAR
leverages interposed learning to embed knowledge of a hybrid planner into a
deep reinforcement learner to faster determine safe and comfortable driving
policies. In particular, the hybrid planner combines pedestrian path prediction
and risk-aware path planning with driving-behavior rule-based reasoning such
that the driving policies also take into account, whenever possible, the ride
comfort and a given set of driving-behavior rules. Our experimental performance
analysis over the CARLA-CTS1 benchmark of critical traffic scenarios revealed
that HyLEAR can significantly outperform the selected baselines in terms of
safety and ride comfort. | Dikshant Gupta, Mathias Klusch | 2022-12-30T15:19:01Z | http://arxiv.org/abs/2301.00650v1 | # Hybrid Deep Reinforcement Learning and Planning for Safe and Comfortable Automated Driving
###### Abstract
We present a novel hybrid learning method, HyLEAR, for solving the collision-free navigation problem for self-driving cars in POMDPs. HyLEAR leverages interposed learning to embed knowledge of a hybrid planner into a deep reinforcement learner to faster determine safe and comfortable driving policies. In particular, the hybrid planner combines pedestrian path prediction and risk-aware path planning with driving-behavior rule-based reasoning such that the driving policies also take into account, whenever possible, the ride comfort and a given set of driving-behavior rules. Our experimental performance analysis over the CARLA-CTS1 benchmark of critical traffic scenarios revealed that HyLEAR can significantly outperform the selected baselines in terms of safety and ride comfort.
## 1 Introduction
**Problem.** We consider the basic problem of collision-free navigation (CFN) of a self-driving car as, in short, to navigate on a driveable path to a given goal in minimal time and without collisions with objects such as other cars or pedestrians in a partially observable traffic environment. The CFN problem can be modelled as a partially observable Markov decision process (POMDP) to be solved by the car online and subject to the given car and pedestrian model. In this work, we adopt the POMDP definition and models for the CFN problem from [17] and address the additional challenge to compute driving policies that are not only safe but, whenever possible, also passenger comfortable.
**Related work.** Current CFN methods for self-driving cars can leverage either a deep reinforcement learner (DRL) [10, 6], or an approximate POMDP planner (APPL) [15, 1], or a hybrid combination thereof [17]. While a hybrid system of DRL-supported online planning for CFN can outperform its individual components for CFN in terms of safety [17], it may suffer from long training and online planning time. However, it is not known whether some alternative hybrid system of planning-supported DRL for CFN can be more efficient and safe. The rule-interposed learning framework in [21] was shown to enable faster training of vanilla DRL but has not been applied to hybrid DRL for CFN yet. On the other hand, many works and user studies investigated the effect of various road and load disturbance factors including human-perceived risk [18, 9, 11] on passenger or ride comfort [19, 3, 7]. In this work, we measure ride comfort based on the load disturbance factors smoothness and human-perceived risk of the ride by the car. However, the potential of hybrid CFN methods taking ride comfort into account without compromising safety in critical scenarios remains unclear. Besides, self-driving cars are expected to navigate whenever possible in compliance with a given set of default driving-behavior or traffic rules, such as not to drive on sidewalks, or to keep the lane. In some critical situations, however, safe driving may require the violation of certain rules, as experienced human drivers might do without making it a habit and still being able to explain their decision for the exceptional trajectory. In [5], a hierarchical rulebook is used for explainable (production rule) reasoning for filtering rule-compliant safe trajectories but without integrated hybrid CFN learning and ride comfort.
**Contributions.** To this end, we developed HyLEAR, a novel hybrid planning-assisted DRL method for the purpose of safe navigation of self-driving cars in POMDPs considering, whenever possible, risk-aware ride comfort
and driving-behavior rule compliance. For our experiments, we created the CARLA-CTS benchmark of critical traffic scenarios based on the GIDAS accident study [2] and simulated in the driving simulator CARLA.
## 2 Method HyLEAR
**Overview.** HyLEAR consists of a soft actor-critic deep reinforcement learner, named NavSAC, that is assisted by a hybrid planner. The hybrid planner leverages functional modules for (a) risk-sensitive planning of alternative safe and short paths, (b) driving behavior rule-based reasoning for the selection of one path with minimal risk and rule violations, and (c) online POMDP planning of an optimal speed action for the next step on this path. In particular, at each time step of driving simulation during training and testing of HyLEAR in CARLA, the risk-aware path planner uses an anytime weighted hybrid A* k-path planner and a pedestrian intention estimator M2P3 [16] to determine three alternative safe and short paths together with their estimated human-perceived risk values. These risk values are computed based on the driver risk fields for the paths as in [11] together with the respective cost-maps for planning. While the first path is planned with given problem related cost-map, the planning of the other two relies on modified cost-maps with lower costs of driving on free sidewalks, respectively, additional predicted pedestrian positions. The rule-based reasoner performs hierarchical rule reasoning on a given rulebook [5] of initially four priority-ordered default driving-behavior rules (avoid driving on sidewalks, minimize risk (blockage), keep the lane, take shortest path) to select the safe path with minimal human-perceived risk and rule violations. For the next full car control action, the steering angle is extracted from this selected path and the optimal speed action is determined during training by the POMDP planner IS-DESPOT [15, 1] and during testing by the trained learner NavSAC of HyLEAR.
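As an illustration of the hierarchical rulebook reasoning, selecting a path amounts to lexicographic filtering over the priority-ordered rules; a hypothetical sketch (function and field names are ours, not HyLEAR's actual code):

```python
def select_path(paths):
    """paths: candidate paths as dicts of per-rule violation/risk scores.
    Keep the paths minimizing each rule in priority order, then break ties."""
    rules = ["sidewalk_violation", "perceived_risk", "lane_violation", "length"]
    candidates = list(paths)
    for rule in rules:
        best = min(p[rule] for p in candidates)
        candidates = [p for p in candidates if p[rule] == best]
    return candidates[0]
```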
**Training.** The training of HyLEAR with the architecture shown in Figure 1 follows the general interposed learning framework [21]: At each time step \(t\), a full car control action \(a_{t}\) is generated by combining the steering angle from the selected path with a speed action sampled from a categorical distribution initialized with the hybrid planner speed policy. Action \(a_{t}\) gets executed in the CARLA driving simulation environment and the respective POMDP belief state transition tuple \((o_{t},a_{t},r_{t},o_{t+1})\) is stored in the DRL memory buffer \(D\). The deep reinforcement learner NavSAC takes as input (a) a small RGB segmentation image as snippet from the planning cost-map that includes observed environmental information of the actual scene together with past and future path of the car and pedestrian, as well as (b) the current reward, speed, and previous action to learn to output the optimal speed action policy. For this purpose, it processes
Figure 1: HyLEAR training architecture.
the image with a convolutional neural network, and samples a set of transitions from \(D\) for its off-policy soft actor-critic learning with the respective loss functions
\[J_{V}(\psi)=\mathbb{E}_{o_{t}\sim D}\left[\tfrac{1}{2}\left(V^{\psi}(o_{t})-\mathbb{E}_{a_{t}\sim\pi^{\phi}}\left[Q^{\theta}(o_{t},a_{t})-\log\pi^{\phi}(a_{t}|o_{t})\right]\right)^{2}\right];\]
\[J_{Q}(\theta)=\mathbb{E}_{(o_{t},a_{t})\sim D}\left[\tfrac{1}{2}\left(Q^{\theta}(o_{t},a_{t})-\hat{Q}(o_{t},a_{t})\right)^{2}\right];\]
\[J_{\pi}(\phi)=\mathbb{E}_{o_{t}\sim D}\left[\log\pi^{\phi}(a_{t}|o_{t})-Q^{\theta}(o_{t},a_{t})\right]\]
where observation \(o_{t}\) and action \(a_{t}\) are sampled from \(D\), \(V^{\psi}\) is the state value function, \(Q^{\theta}\) the Q-function, \(\pi^{\phi}\) the (speed) actor policy function, and \(\psi\), \(\theta\) and \(\phi\) are the NavSAC network weights to be trained. This hybrid planning-assisted informed state-action space sampling may reduce the training time of NavSAC by avoiding unnecessary explorations.
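A compact PyTorch sketch of these three losses for a discrete speed-action policy follows; all network and variable names are illustrative assumptions, not the authors' implementation:

```python
import torch

def sac_losses(V_net, Q_net, V_target, policy, batch, gamma=0.99):
    o, a, r, o_next = batch                    # transitions sampled from D
    logp = torch.log_softmax(policy(o), dim=-1)
    probs = logp.exp()
    q_all = Q_net(o)                           # Q(o, .) over discrete actions
    # J_V: fit V(o) to E_{a~pi}[Q(o,a) - log pi(a|o)]
    v_tgt = (probs * (q_all - logp)).sum(-1).detach()
    J_V = 0.5 * (V_net(o).squeeze(-1) - v_tgt).pow(2).mean()
    # J_Q: fit Q(o,a) to the soft Bellman target Q_hat = r + gamma * V(o')
    q_taken = q_all.gather(-1, a.long().unsqueeze(-1)).squeeze(-1)
    q_hat = (r + gamma * V_target(o_next).squeeze(-1)).detach()
    J_Q = 0.5 * (q_taken - q_hat).pow(2).mean()
    # J_pi: minimize E_{a~pi}[log pi(a|o) - Q(o,a)]
    J_pi = (probs * (logp - q_all.detach())).sum(-1).mean()
    return J_V, J_Q, J_pi
```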
**Testing.** The testing architecture of HyLEAR is the same as for training except that it does not include the computationally expensive planner IS-DESPOT to determine optimal speed actions for a given situation and path. This capability is now embedded in and performed much faster by the trained learner NavSAC. During testing, the hybrid planner of HyLEAR generates the shortest safe path with minimal human-perceived risk and rule violations as input for the trained NavSAC, such that the extracted steering angle together with the optimal speed action, determined now by the trained NavSAC instead of the planner IS-DESPOT, is then executed as a full control action by the car in the driving simulator CARLA.
## 3 Evaluation
**Experimental setting.** Our experimental comparative performance evaluation of HyLEAR has been conducted over the synthetic benchmark CARLA-CTS1, which consists of twelve parameterized scenarios with about thirty thousand scenes in total simulated in the driving simulator CARLA. Most of the traffic scenarios are taken from the GIDAS accident study [2] where the car is confronted with street crossing pedestrians, possibly occluded by some parking car, an incoming car and intersections. The scenes per scenario are generated with varying speed and crossing distance of pedestrians from the car. The selected baselines are (a) the individual CFN action planning and learning methods IS-DESPOT-p and NavSAC-p, each guided by a hybrid A* path planner, as well as the socially-aware DRL method A2C-CADRL [8] and the hybrid learning-assisted planning method HyLEAP [17] for CFN. The performance of each method is measured in terms of (a) overall safety index (SI) defined as the total number of scenarios in which the method is below given percentages of crashes and near-misses (5, 10); (b) crash and near-miss rates, and time to goal (TTG); (c) ride comfort defined as being inversely proportional to the equally weighted sum of jerks and human-perceived risk of the planned trajectory, risk normalized to [0,1] with risk threshold set to 0.1 [11]; and (d) training and execution time. HyLEAR is implemented in Python and the PyTorch framework; all methods were trained and tested on the DL supercomputer NVIDIA DGX-1 at DFKI Kaiserslautern.
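The comfort score can be illustrated as follows, with the equal weighting described above (the averaging and normalization details are our assumptions):

```python
import numpy as np

def ride_comfort(accel, dt, perceived_risk):
    # Inversely proportional to the equally weighted sum of mean jerk
    # magnitude and human-perceived risk normalized to [0, 1].
    jerk = np.abs(np.diff(accel)) / dt
    return 1.0 / (0.5 * jerk.mean() + 0.5 * np.clip(perceived_risk, 0.0, 1.0))
```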
**Results.** The overall results of our experiments, averaged across all scenarios, are shown in Table 1.
In general, HyLEAR provided a safer and more comfortable ride than the other selected baselines, following comparatively the fastest training and with a relatively acceptable time to goal and execution time. In some cases, taking the safe and more comfortable but not shortest route by HyLEAR may come at the expense of minimal time to goal compared to the fastest method A2C-CADRL. The latter, however, performed worse on safety due to local minima of always accelerating to reach the goal, which resulted in second best comfort due to zero jerks. On average, the n-step look-ahead action planning with IS-DESPOT-p was as safe as the learner NavSAC-p and with more ride comfort, in particular in scenarios with temporarily occluded pedestrians but required extremely more execution time due to online planning than both DRL methods. The interposed learning allowed HyLEAR to learn the optimal speed action for given situation and safe path faster than the other DRL methods, while due to its hybrid planning HyLEAR performed best in terms of safety and ride comfort, only driving on safe paths through free sidewalks if there are no alternatives with acceptable human-perceived risk. While both hybrid methods, HyLEAP and HyLEAR, are by far more safe than the tested individual planning and DRL methods, the hybrid planning-assisted learning of HyLEAR outperformed the DRL-assisted online planning of HyLEAP in safety, comfort, time to goal, training and execution time.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Method** & **Safety (SI)** & **Crash (\%)** & **Near-miss (\%)** & **TTG (s)** & **Comfort** & **Training (d)** & **Exec (ms)** \\ \hline
NavSAC-p & 1 & 21.44 & 9.24 & 17.57 & 0.262 & 10 & 60.29 \\ \hline
IS-DESPOT-p & 1 & 21.01 & 6.05 & 16.20 & 0.689 & N/A & 259.56 \\ \hline
A2C-CADRL-p & 0 & 25.14 & 11.47 & **14.26** & 1.010 & 4 & **58.57** \\ \hline
HyLEAP & 4 & **19.26** & **7.41** & 16.16 & 0.803 & 5 & 215.80 \\ \hline
HyLEAR & **5** & 19.88 & 8.49 & 15.86 & **1.064** & **3** & 71.50 \\ \hline
\end{tabular}
\end{table}
Table 1: Overview of results on the CARLA-CTS1 benchmark
## 4 Conclusion
We presented HyLEAR, the first hybrid planning-assisted deep reinforcement learning method for collision-free driving policies for self-driving cars that also take into account, whenever possible, the ride comfort and a given set of driving-behavior rules. The experimental results over the CARLA-CTS benchmark revealed that HyLEAR can outperform the selected baselines in safety and ride comfort with faster training and acceptable execution time.
**Acknowledgments.** This work has been funded by the German Ministry for Education and Research (BMB+F) in the project MOMENTUM.
|
2309.10261 | Visualizing quantum coherence and decoherence in nuclear reactions | Differential cross sections of nuclear reactions often exhibit characteristic
oscillations in the angular distribution originated from an interference of two
indistinguishable processes. Here we propose a novel method to visualize
origins of such oscillations. This is achieved by taking Fourier transform of
scattering amplitudes, following the idea in wave optics. We apply this method
to elastic scattering of $^{16}$O+$^{16}$O and $^{18}$O+$^{18}$O at energies
above the Coulomb barrier. The former system shows strong oscillations in the
angular distribution due to the nearside-farside interferences, while the
oscillations are largely suppressed in the latter system due to a stronger
absorption. We show that the image of the former and the latter systems
corresponds to a double-slit and a single-slit problems in quantum mechanics,
respectively. | K. Hagino, T. Yoda | 2023-09-19T02:34:39Z | http://arxiv.org/abs/2309.10261v1 | # Visualizing quantum coherence and decoherence in nuclear reactions
###### Abstract
Differential cross sections of nuclear reactions often exhibit characteristic oscillations in the angular distribution originating from an interference of two indistinguishable processes. Here we propose a novel method to visualize the origins of such oscillations. This is achieved by taking the Fourier transform of scattering amplitudes, following the idea in wave optics. We apply this method to elastic scattering of \({}^{16}\)O+\({}^{16}\)O and \({}^{18}\)O+\({}^{18}\)O at energies above the Coulomb barrier. The former system shows strong oscillations in the angular distribution due to the nearside-farside interferences, while the oscillations are largely suppressed in the latter system due to a stronger absorption. We show that the image of the former and the latter systems corresponds to a double-slit and a single-slit problem in quantum mechanics, respectively.
In quantum mechanics, when two or more indistinguishable processes are involved, the probability is computed by taking the absolute square of the total amplitude, which is given as a sum of the amplitude of each process. This leads to the interference of each process due to the cross terms. This is referred to as quantum coherence, and it is one of the most fundamental features of quantum mechanics. In addition to the famous double-slit problem, a textbook example for this is scattering of two identical particles, for which a detector cannot distinguish scattering at angle \(\theta\) from scattering at angle \(\pi-\theta\). In this case, the differential cross sections are given by \(d\sigma/d\Omega=|f(\theta)\pm f(\pi-\theta)|^{2}\), where \(f(\theta)\) and \(f(\pi-\theta)\) are scattering amplitudes for the angles \(\theta\) and \(\pi-\theta\), respectively, and the sign of the superposition depends on the statistics of the particles. Due to the interference between \(f(\theta)\) and \(f(\pi-\theta)\), the differential cross sections exhibit characteristic oscillations as a function of the scattering angle \(\theta\). Such oscillations have actually been observed, e.g., in elastic scattering of \({}^{16}\)O+\({}^{16}\)O at energies below the Coulomb barrier [1]. At such energies, the nuclear effect can be neglected, and the experimental data can be well accounted for by taking a superposition of the Rutherford scattering amplitudes at \(\theta\) and \(\pi-\theta\).
Besides the interference due to the exchange of two identical particles, there are many other interference phenomena known in low-energy nuclear reactions. These include the Coulomb-nuclear interference [2], the nearside-farside interference [3; 4; 5], and the barrier-wave-internal-wave interference [6]. In particular, an analogy between the nearside-farside interference and the double-slit problem has been discussed in Ref. [5]. Here, the nearside component corresponds to scattering at a positive scattering angle with a positive impact parameter, while the farside component corresponds to scattering with a negative impact parameter. Due to a strong absorption inside a nucleus, scattering takes place only at the edge of a nucleus, which corresponds to scattering through the two slits in a double-slit problem.
In this paper, we propose a novel way to visualize the origin of oscillations in the angular distribution of nuclear reactions. The idea of this method is to take the Fourier transform of a scattering amplitude, similar to what is done in wave optics. A similar method has been applied in particle physics, in which the image of string scattering [7] and that of black holes in the AdS/CFT correspondence [8; 9] have been discussed. In particular, it has been demonstrated that the image of string scattering corresponds to a double slit [7]. Here we apply a similar method to elastic scattering of \({}^{16}\)O+\({}^{16}\)O and \({}^{18}\)O+\({}^{18}\)O at energies above the Coulomb barrier, and show how the quantum coherence in \({}^{16}\)O+\({}^{16}\)O is decohered in \({}^{18}\)O+\({}^{18}\)O by nuclear absorption.
Following Ref. [7], we take an image of scattering using a lens located at the direction \((\theta_{0},\varphi_{0})\) from the scattering center. To this end, we take Fourier transform of scattering amplitude in a form of
\[\Phi(X,Y) = \frac{1}{S}\int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta \theta}d\theta e^{ik(\theta-\theta_{0})X}f(\theta) \tag{1}\] \[\times\int_{\varphi_{0}-\Delta\varphi}^{\varphi_{0}+\Delta \varphi}d\varphi\,e^{ik(\varphi-\varphi_{0})Y},\]
where \((X,Y)\) is the coordinate on the virtual screen behind the lens and \(k\) is the wave number, \(k=\sqrt{2\mu E/\hbar^{2}}\), \(\mu\) and \(E\) being the reduced mass and the energy in the center of mass frame, respectively. See Supplemental Material for a derivation of this formula. We have assumed that the scattering amplitude \(f\) is independent of the angle \(\varphi\). In Eq. (1), \(S\) is the angular area of the lens given by
\[S=\int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta\theta}d\theta\int_{\varphi_ {0}-\Delta\varphi}^{\varphi_{0}+\Delta\varphi}d\varphi\,=4(\Delta\theta)( \Delta\varphi). \tag{2}\]
The actual image is given by \(I(X,Y)=|\Phi(X,Y)|^{2}\). In this paper, following Ref. [7], we take \(\Delta\varphi=\Delta\theta/\sin\theta_{0}\), which corresponds to a square lens.
Since the scattering amplitude does not depend on the angle \(\varphi\), the integral for \(\varphi\) is trivial in Eq. (1) and is given by
\[\int_{\varphi_{0}-\Delta\varphi}^{\varphi_{0}+\Delta\varphi}d\varphi\,e^{ik( \varphi-\varphi_{0})Y}=2\Delta\varphi\,\frac{\sin(kY\Delta\varphi)}{kY\Delta \varphi}. \tag{3}\]
This function is peaked at \(Y=0\) and has a width of \(2\pi/(k\Delta\varphi)\)[7]. The resolution of the image in the \(Y\) direction is thus determined by the quantity \(k\Delta\varphi\). Notice
that Eq. (3) is independent of \(\varphi_{0}\). For a flat angular distribution with \(f(\theta)\)=const., the same argument holds for the position and the resolution of the peak in the \(X\) direction.
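For readers who want to reproduce the images shown here, Eq. (1) is simple to evaluate numerically once an amplitude \(f(\theta)\) is supplied. The sketch below is our own illustration (the function name, grids, and rectangular quadrature are not from the paper): it performs the \(\theta\) integral by direct summation and attaches the analytic \(\varphi\) profile of Eq. (3).

```python
import numpy as np

def image(f, k, theta0, dtheta, X, Y, n=2000):
    """Evaluate I(X, Y) = |Phi(X, Y)|^2 of Eq. (1) for an amplitude f(theta).

    The theta integral is done by direct summation; the phi integral is the
    analytic sinc profile of Eq. (3).  Angles in radians, k in fm^-1, X and Y
    are 1D grids on the virtual screen in fm.
    """
    th = np.linspace(theta0 - dtheta, theta0 + dtheta, n)
    # theta part of Eq. (1), normalized by its share of the lens area S
    phase = np.exp(1j * k * np.outer(X, th - theta0))
    phi_x = (phase * f(th)).sum(axis=1) * (th[1] - th[0]) / (2 * dtheta)
    # phi part, Eq. (3): sin(k Y dphi)/(k Y dphi) with dphi = dtheta/sin(theta0)
    dphi = dtheta / np.sin(theta0)
    sinc_y = np.sinc(k * Y * dphi / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
    return np.abs(np.outer(phi_x, sinc_y)) ** 2
```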
Let us first apply Eq. (1) to Rutherford scattering, that is, scattering with a pure Coulomb potential, \(V(r)=Z_{1}Z_{2}e^{2}/r\), where \(Z_{1}\) and \(Z_{2}\) are the atomic numbers of the two colliding nuclei, for which the scattering amplitude \(f(\theta)\) is known analytically, see e.g. Ref. [10]. Figure 1 shows the image of Rutherford scattering for \({}^{16}\)O+\({}^{16}\)O at \(E_{\rm c.m.}=8.8\) MeV. Even though this is a system with identical bosons, the symmetrization of the wave function is not taken into account here. For the image, we take \(\theta_{0}\)=90 degrees with \(\Delta\theta=\Delta\varphi\)=30 degrees. One can see that the image has a peak at \(X=5.65\) fm. This is actually close to the classical impact parameter for Rutherford scattering of this system at \(\theta_{0}\)=90 degrees, \(b_{\rm cl}=5.24\) fm. As we derive in the Supplemental Material, for Rutherford scattering the peak of the image indeed coincides with the classical impact parameter in the limit of \(\Delta\theta\to 0\). We expect that this holds in general for heavy-ion reactions, for which the Coulomb interaction plays an important role in determining the reaction dynamics.
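As a quick cross-check of the quoted numbers (our own back-of-the-envelope script with standard constants, not part of the paper), the kinematics of \({}^{16}\)O+\({}^{16}\)O at \(E_{\rm c.m.}=8.8\) MeV indeed give \(b_{\rm cl}=(\eta/k)\cot(\theta_{0}/2)\approx 5.24\) fm at \(\theta_{0}=90\) degrees:

```python
import numpy as np

hbarc = 197.327    # MeV fm
alpha = 1 / 137.036
amu   = 931.494    # MeV/c^2

E  = 8.8                          # c.m. energy in MeV
mu = 16 * 16 / (16 + 16) * amu    # reduced mass of 16O+16O
Z1 = Z2 = 8

k   = np.sqrt(2 * mu * E) / hbarc    # wave number, fm^-1
v_c = np.sqrt(2 * E / mu)            # non-relativistic v/c
eta = Z1 * Z2 * alpha / v_c          # Sommerfeld parameter

b_cl = eta / k / np.tan(np.pi / 4)   # Eq. (19) of the Supplement at 90 degrees
print(f"k = {k:.3f} fm^-1, eta = {eta:.2f}, b_cl = {b_cl:.2f} fm")
# -> b_cl ~ 5.24 fm, matching the classical impact parameter quoted above
```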
Notice that Eq. (1) for \(\theta_{0}=\pi/2\) has a property that \(\Phi(X,Y)\) with \(f(\pi-\theta)\) is identical to \(\Phi(-X,Y)\) with \(f(\theta)\). Therefore the image of \({}^{16}\)O+\({}^{16}\)O scattering with the symmetrized scattering amplitude \(f(\theta)+f(\pi-\theta)\) has two symmetric peaks at \(X_{\rm peak}\) and \(-X_{\rm peak}\), just as in the double-slit problem discussed in Ref. [7]. See the Supplemental Material for details.
Let us next discuss elastic scattering of \({}^{16}\)O+\({}^{16}\)O and \({}^{18}\)O+\({}^{18}\)O at energies above the Coulomb barrier, at which both the Coulomb and the nuclear interactions play a role. The upper panel of Fig. 2 shows the angular distribution of \({}^{16}\)O+\({}^{16}\)O elastic scattering at \(E_{\rm c.m.}=26.5\) MeV [11]. With a standard global nuclear potential, the height of the Coulomb barrier for this system is estimated to be around 10.3 MeV, and thus this energy is about 2.6 times the barrier height. The experimental angular distributions for this system show a strong oscillatory pattern. We fit this with a deep squared Woods-Saxon potential for the nuclear part of the internucleus potential [13; 14],
\[V_{N}(r)=-V_{0}\,g(R_{R},a_{R},r)^{2}-iW_{0}\,g(R_{W},a_{W},r)^{2}, \tag{4}\]
with
\[g(R,a,r)=1/(1+\exp[(r-R)/a]). \tag{5}\]
The solid line in the upper panel is obtained with the parameters \(V_{0}\)=421.28 MeV, \(R_{R}=4.12\) fm, \(a_{R}=1.52\) fm, \(W_{0}\)=157.1 MeV, \(R_{W}=4.39\) fm, and \(a_{W}=0.151\) fm, together with the radius of the uniform charge distribution of 5.54 fm. The observed oscillations are reasonably well accounted for with this parameter set. The lower panel of Fig. 2 shows the angular distribution for the \({}^{18}\)O+\({}^{18}\)O
Figure 2: (the upper panel) The angular distribution of the \({}^{16}\)O+\({}^{16}\)O elastic scattering at \(E_{\rm c.m.}=26.5\) MeV. The solid line shows a fit with a deep squared Woods-Saxon potential, while the dashed line shows the unsymmetrized cross sections obtained with the same potential. The experimental data are taken from Ref. [11]. (the lower panel) The same as the upper panel, but for the \({}^{18}\)O+\({}^{18}\)O elastic scattering at \(E_{\rm c.m.}=26\) MeV. The surface imaginary potential is also added to the optical potential. The experimental data are taken from Ref. [12].
at a similar energy to the one shown in the upper panel for \({}^{16}\)O+\({}^{16}\)O [12]. For this system, the oscillatory pattern is much less pronounced (see also Ref. [15]), and the same squared Woods-Saxon potential as that for the \({}^{16}\)O+\({}^{16}\)O system does not fit the experimental data well. This is most likely due to the two extra neutrons outside the doubly magic \({}^{16}\)O nucleus, with which the \({}^{18}\)O nuclei are excited more easily than the \({}^{16}\)O nuclei. A stronger absorption is necessary to fit the data [16; 17], and for this purpose we introduce a surface imaginary potential,
\[W_{S}(r)=-iW_{s}\,dg(R_{s},a_{s},r)/dr. \tag{6}\]
The solid line in the lower panel is obtained with the parameters \(W_{s}=94.01\) MeV, \(R_{s}=5.61\) fm, and \(a_{s}=0.734\) fm, together with the potential given by Eq. (4) with a scaling of \(R_{R}\) and \(R_{W}\) by a factor of 1.04 to account for the mass number dependence of the nuclear radii. One can see that this calculation accounts well for the data for the \({}^{18}\)O+\({}^{18}\)O system.
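To make the potential model concrete, the snippet below is our illustrative transcription of Eqs. (4)-(6) with the quoted fit parameters (not the authors' analysis code); it evaluates the complex optical potentials for the two systems:

```python
import numpy as np

def g(R, a, r):
    """Woods-Saxon form factor of Eq. (5)."""
    return 1.0 / (1.0 + np.exp((r - R) / a))

def V_16O16O(r):
    """Squared Woods-Saxon optical potential of Eq. (4), 16O+16O fit (MeV)."""
    return (-421.28 * g(4.12, 1.52, r) ** 2
            - 1j * 157.1 * g(4.39, 0.151, r) ** 2)

def V_18O18O(r, scale=1.04):
    """18O+18O: radii scaled by 1.04, plus the surface term of Eq. (6)."""
    vol = (-421.28 * g(4.12 * scale, 1.52, r) ** 2
           - 1j * 157.1 * g(4.39 * scale, 0.151, r) ** 2)
    Ws, Rs, a_s = 94.01, 5.61, 0.734
    x = np.exp((r - Rs) / a_s)
    dgdr = -x / (a_s * (1.0 + x) ** 2)   # dg/dr for the surface form factor
    return vol - 1j * Ws * dgdr          # sign convention as written in Eq. (6)

r = np.linspace(0.0, 10.0, 6)
print(np.round(V_16O16O(r), 3))
print(np.round(V_18O18O(r), 3))
```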
We notice that the unsymmetrized cross sections, obtained only with the scattering amplitude \(f(\theta)\), show strong oscillations for the \({}^{16}\)O+\({}^{16}\)O system (see the dashed line in the upper panel). This indicates that the symmetrization due to the identical bosons plays a minor role at this energy in the oscillations for the \({}^{16}\)O+\({}^{16}\)O system, even though the small oscillations around \(\theta=\pi/2\) for the \({}^{18}\)O+\({}^{18}\)O system are certainly due to the symmetrization of the wave function. To investigate the origin of the oscillations in the \({}^{16}\)O+\({}^{16}\)O system, we decompose the scattering amplitude into the nearside and the farside components by using the Legendre functions of the second kind [3]. The solid lines in Fig. 3 show the unsymmetrized cross sections for the \({}^{16}\)O+\({}^{16}\)O (the upper panel) and the \({}^{18}\)O+\({}^{18}\)O (the lower panel) systems, while the dashed and the dotted lines show their decompositions into the nearside and the farside components, respectively. The upper panel indicates that the nearside and the farside components cross each other at around \(\theta=51\) degrees, and the strong oscillations are indeed caused by an interference between the nearside and the farside components. On the other hand, the farside component is largely suppressed in the \({}^{18}\)O+\({}^{18}\)O system due to the strong absorption, and the scattering amplitude is almost solely given by the nearside component. In this way, the quantum coherence observed in the \({}^{16}\)O+\({}^{16}\)O system is decohered in the \({}^{18}\)O+\({}^{18}\)O system due to the couplings to the internal degrees of freedom, which may be regarded as an internal environment.
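For completeness, a minimal sketch of such a nearside-farside split is given below. It assumes the standard Fuller-type decomposition of the Legendre polynomials; the sign convention for which traveling-wave component is labeled nearside varies in the literature, so the assignment in the comments is an assumption, and the S-matrix elements are left as user input.

```python
import numpy as np
from scipy.special import lpn, lqn

def nearside_farside(Sl, k, theta):
    """Fuller-type split of f(theta) = (1/2ik) sum_l (2l+1)(S_l - 1) P_l(cos th).

    P_l(x) is decomposed as Qt_minus + Qt_plus with
    Qt_pm(x) = 0.5 * (P_l(x) -+ (2i/pi) Q_l(x)),
    where Q_l is the Legendre function of the second kind.
    """
    x = np.cos(theta)
    lmax = len(Sl) - 1
    P, _ = lpn(lmax, x)                  # P_l(x), l = 0..lmax
    Q, _ = lqn(lmax, x)                  # Q_l(x), second kind
    l = np.arange(lmax + 1)
    coef = (2 * l + 1) * (Sl - 1.0) / (2j * k)
    f_minus = np.sum(coef * 0.5 * (P - 2j / np.pi * Q))   # assumed nearside
    f_plus = np.sum(coef * 0.5 * (P + 2j / np.pi * Q))    # assumed farside
    return f_minus, f_plus               # their sum is the full nuclear amplitude

# toy check: sharp-cutoff S-matrix (strong absorption below l = 20)
Sl = np.where(np.arange(41) < 20, 0.0, 1.0).astype(complex)
fn, ff = nearside_farside(Sl, k=1.8, theta=np.radians(55.0))
print(abs(fn + ff) ** 2, abs(fn) ** 2, abs(ff) ** 2)
```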
The images of the unsymmetrized cross sections for the \({}^{16}\)O+\({}^{16}\)O and the \({}^{18}\)O+\({}^{18}\)O systems are shown in the upper and the lower panels of Fig. 4, respectively. These are obtained with \(\theta_{0}\)=55 degrees and \(\Delta\theta\)=15 degrees. Here we set \(\theta_{0}\) close to the crossing point of the nearside and the farside components in the \({}^{16}\)O+\({}^{16}\)O system so that both components contribute with similar magnitudes. For the \({}^{16}\)O+\({}^{16}\)O system, the image has two distinct peaks. The analysis with the nearside and the farside amplitudes indicates that the peak at a positive value of \(X\) corresponds to the nearside component while the peak at a negative \(X\) corresponds to the farside component. As we have discussed with Rutherford scattering, the peak of an image corresponds to the classical impact parameter of scattering. Fig. 4 therefore agrees with the physical picture of the nearside and the farside components, that is, the nearside and the farside components correspond to a positive and a negative impact parameter, respectively. This can also be seen in the \({}^{18}\)O+\({}^{18}\)O system, for which only the nearside component contributes significantly to the cross sections. The image for this system has a peak at a positive value of \(X\), reflecting a positive impact parameter for the nearside component.
In summary, we have proposed a novel way to image nuclear reactions. Based on an idea in wave optics, as had been advocated in the field of particle physics, the image can be obtained by performing Fourier transform of a scattering amplitude. For an angle-independent scattering amplitude, the image is peaked at the origin with the widths determined by parameters in the Fourier transform. For Rutherford scattering, the peak of the image is shifted to the position corresponding to the classical impact parameter of scattering. We have applied this method to elastic scattering of the \({}^{16}\)O+\({}^{16}\)O
Figure 3: The unsymmetrized cross sections for elastic scattering of the \({}^{16}\)O+\({}^{16}\)O (the upper panel) and the \({}^{18}\)O+\({}^{18}\)O (the lower panel) systems. The solid lines show the total cross sections, while the dashed and the dotted lines denote their decompositions into the nearside and the farside components, respectively.
and \({}^{18}\)O+\({}^{18}\)O systems at energies about 2.6 times the Coulomb barrier. The image for the \({}^{16}\)O+\({}^{16}\)O system has been found to have two peaks, corresponding to the nearside and the farside components of the reaction process. The quantum interference between the two components is largely decohered in the \({}^{18}\)O+\({}^{18}\)O system due to the strong absorption originating from the two extra neutrons outside \({}^{16}\)O. The image for this system has been found to have a single peak, corresponding solely to the nearside component. Elastic scattering for the \({}^{16}\)O+\({}^{16}\)O and \({}^{18}\)O+\({}^{18}\)O systems at these energies therefore has close analogies to the double-slit and single-slit problems in quantum mechanics, respectively.
In this way, the imaging proposed in this paper provides an intuitive understanding of the origin and the underlying dynamics of quantum interference phenomena in nuclear reactions. Of course, a scattering amplitude is not an observable, unlike cross sections. However, one can make an attempt to fit data with an optical model, from which one can obtain a scattering amplitude to be used for imaging. There are a variety of interference phenomena in nuclear reactions. We leave applications of the imaging to these phenomena as interesting future work. An application to inelastic scattering [18] will be another interesting direction for future work.
We thank Koji Hashimoto for useful discussions. This work was supported in part by JSPS KAKENHI Grants No. JP19K03861 and No. JP23K03414. The work of T. Y. was supported in part by JSPS KAKENHI Grant No. JP22H05115 and JP22KJ1896.
## References
* (1) D. A. Bromley, J. A. Kuehner, and E. Almqvist, Phys. Rev. **123**, 878 (1961).
* (2) D. M. Brink, _Semi-Classical Methods for Nucleus-Nucleus Scattering_, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 1985).
* (3) R. C. Fuller, Phys. Rev. C **12**, 1561 (1975).
* (4) N. Rowley and C. Marty, Nucl. Phys. A **266**, 494 (1976).
* (5) M. Hussein and K. McVoy, Prog. in Part. and Nucl. Phys. **12**, 103 (1984).
* (6) D. Brink and N. Takigawa, Nucl. Phys. A **279**, 159 (1977).
* (7) K. Hashimoto, Y. Matsuo, and T. Yoda, Prog. Theor. Exp. Phys. **2023**, 043B04 (2023).
* (8) K. Hashimoto, S. Kinoshita, and K. Murata, Phys. Rev. Lett. **123**, 031602 (2019).
* (9) K. Hashimoto, S. Kinoshita, and K. Murata, Phys. Rev. D **101**, 066018 (2020).
* (10) K. Konishi and G. Paffuti, _Quantum Mechanics: A New Introduction_ (OUP Oxford, 2009).
* (11) R. H. Siemssen, J. V. Maher, A. Weidinger, and D. A. Bromley, Phys. Rev. Lett. **19**, 369 (1967).
* (12) R. Vandenbosch, W. Reisdorf, and P. Lau, Nucl. Phys. A **230**, 59 (1974).
* (13) Y. Kondo, B. Robson, and R. Smith, Phys. Lett. B **227**, 310 (1989).
* (14) S. Ohkubo and K. Yamashita, Phys. Rev. C **66**, 021301 (2002).
* (15) R. W. Shaw, R. Vandenbosch, and M. K. Mehta, Phys. Rev. Lett. **25**, 457 (1970).
* (16) C. Von Charzewski, V. Hnizdo, and C. Toepffer, Nucl. Phys. A **307**, 309 (1978).
* (17) F. Haas and Y. Abe, Phys. Rev. Lett. **46**, 1667 (1981).
* (18) D. R. Dean and N. Rowley, J. of Phys. G **10**, 493 (1984).
Figure 4: The images of the unsymmetrized cross sections for elastic scattering of the \({}^{16}\)O+\({}^{16}\)O (the upper panel) and the \({}^{18}\)O+\({}^{18}\)O (the lower panel) systems. The angles in Eq. (1) are set to be \(\theta_{0}\)=55 degrees and \(\Delta\theta\)=15 degrees.
## Supplemental Material
### Derivation of Eq. (1)
In a scattering problem, one considers the asymptotic wave function in a form of
\[\psi(\mathbf{r})\to e^{ikz}+f(\theta)\frac{e^{ikr}}{r}\ \ \ \ \ (r\to\infty), \tag{7}\]
where \(k=\sqrt{2\mu E/\hbar^{2}}\) is the wave number with \(E\) and \(\mu\) being the energy in the center of mass frame and the reduced mass, respectively. Here we have taken the \(z\)-axis for the direction of the incident wave and assumed that the scattering amplitude \(f(\theta)\) depends only on the angle \(\theta\).
We put a convex lens at the distance \(L^{\prime}\) from the origin in the direction of \((\theta_{0},\varphi_{0})\) and take an image on the screen located at the distance \(L\) from the origin (see Fig. 5). In Fig. 6, the center of the lens is denoted as \(P^{\prime}\), while the center of the screen is denoted as \(P\), both of which lie in the direction \((\theta_{0},\varphi_{0})\) from the origin. We use the two-dimensional Cartesian coordinate systems \((\xi,\eta)\) and \((X_{s},Y_{s})\) to express the position of a point on the lens and the screen, respectively. We put the lens in the tangential direction of the sphere at the point \(P^{\prime}\) and take the \(\xi\) and the \(\eta\) axes in the \(-\theta\) and the \(\varphi\) directions, respectively. \(\xi\) and \(\eta\) are then expressed as \(\xi\sim-L^{\prime}(\theta-\theta_{0})\) and \(\eta\sim(L^{\prime}\sin\theta_{0})(\varphi-\varphi_{0})\), respectively, for large values of \(L^{\prime}\).
We assume that \(L^{\prime}\) is much larger than the size of the lens such that the wave which is incident on the lens can be approximately regarded as a plane wave. The role of the lens is to convert a plane wave to an incoming spherical wave (see Fig. 4 in Ref. [1]). Assuming that the lens is infinitely thin, the amplitude at the point \((X_{s},Y_{s})\) on the screen then reads,
\[\Psi_{s}(X_{s},Y_{s}) = \int_{-d_{\xi}}^{d_{\xi}}d\xi\int_{-d_{\eta}}^{d_{\eta}}d\eta\, A(\xi,\eta) \tag{8}\] \[\times e^{-ik[(X_{s}-\xi)^{2}+(Y_{s}-\eta)^{2}+(L-L^{\prime})^{2}]^{ 1/2}},\]
where \(A(\xi,\eta)\) is the amplitude for the scattering wave at the point \((\xi,\eta)\) on the lens, and the size of the lens is taken to be \(d_{\xi}\times d_{\eta}\). We further assume that the size of the lens is much smaller than \(L-L^{\prime}\). Eq. (8) is then transformed to
\[\Psi_{s}(X_{s},Y_{s}) \sim e^{-ik(L-L^{\prime})}e^{-ik\frac{X_{s}^{2}+Y_{s}^{2}}{2(L-L^{\prime})}} \tag{9}\] \[\times\int_{-d_{\xi}}^{d_{\xi}}d\xi\int_{-d_{\eta}}^{d_{\eta}}d\eta\,e^{ik\frac{X_{s}\xi+Y_{s}\eta}{L-L^{\prime}}}A(\xi,\eta).\]
Using the relations \(\xi\sim-L^{\prime}(\theta-\theta_{0})\) and \(\eta\sim L^{\prime}\sin\theta_{0}(\varphi-\varphi_{0})\), and substituting the scattering amplitude \(f(\theta)\) for \(A(\xi,\eta)\), one finds
\[\Psi_{s}(X_{s},Y_{s}) \sim e^{-ik(L-L^{\prime})}e^{-ik\frac{X_{s}^{2}+Y_{s}^{2}}{2(L-L^{ \prime})}} \tag{10}\] \[\times\int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta\theta}d \theta\int_{\varphi_{0}-\Delta\varphi}^{\varphi_{0}+\Delta\varphi}d\varphi\] \[\times e^{ik\frac{-L^{\prime}(\theta-\theta_{0})X_{s}+L^{\prime} \sin\theta_{0}Y_{s}(\varphi-\varphi_{0})}{L-L^{\prime}}}f(\theta),\]
with \(\Delta\theta=d_{\xi}/L^{\prime}\) and \(\Delta\varphi=d_{\eta}/(L^{\prime}\sin\theta_{0})\). Introducing the scaled coordinates \(X\equiv-L^{\prime}X_{s}/(L-L^{\prime})\) and \(Y\equiv L^{\prime}\sin\theta_{0}Y_{s}/(L-L^{\prime})\), one finally obtains Eq. (1), up to a phase factor. Notice that the relation \(\Delta\varphi=\Delta\theta/\sin\theta_{0}\) holds for a square lens, \(d_{\xi}=d_{\eta}\).
### The image of Rutherford scattering
We evaluate the image in the \(X\) direction,
\[\Phi(X)=\int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta\theta}d\theta\,e^{ ik(\theta-\theta_{0})X}f(\theta), \tag{11}\]
Figure 5: A schematic view of the set up of a lens and a screen for the imaging. The angle of the lens from the \(z\) axis is \(\theta_{0}\) and thus the distance of the lens from the \(z\) axis is \(L^{\prime}\sin\theta_{0}\).
Figure 6: The definition of the coordinate systems \((\xi,\eta)\) and \((X_{s},Y_{s})\) for the imaging. The direction of \(\xi\) and \(X_{s}\) is taken to be in the \(-\theta\)-direction, while the direction of \(\eta\) and \(Y_{s}\) is in the \(\varphi\)-direction.
for small values of \(\Delta\theta\). To this end, we expand \(e^{ik(\theta-\theta_{0})X}\) and \(f(\theta)\) around \(\theta=\theta_{0}\) up to the second order:
\[e^{ik(\theta-\theta_{0})X} \sim 1+ik(\theta-\theta_{0})X-k^{2}X^{2}(\theta-\theta_{0})^{2}/2, \tag{12}\] \[f(\theta) \sim f(\theta_{0})+f^{\prime}(\theta_{0})(\theta-\theta_{0})+f^{ \prime\prime}(\theta_{0})(\theta-\theta_{0})^{2}/2.\]
The integral in Eq. (11) can then be performed easily and reads,
\[\Phi(X) \sim 2\Delta\theta\left\{f(\theta_{0})\right.\] \[\left.+\frac{(\Delta\theta)^{2}}{3}\left(-\frac{k^{2}X^{2}}{2}f( \theta_{0})+ikXf^{\prime}(\theta_{0})+\frac{f^{\prime\prime}(\theta_{0})}{2} \right)\right\}.\]
From this equation, one obtains
\[\frac{d}{dX}|\Phi(X)|^{2} \propto -2k^{2}|f(\theta_{0})|^{2}X\] \[+ik(f^{*}(\theta_{0})f^{\prime}(\theta_{0})-f(\theta_{0})f^{ \prime}(\theta_{0})^{*}).\]
The peak of the image then appears at
\[X=\frac{i}{2k}\left(\frac{f^{\prime}(\theta_{0})}{f(\theta_{0})}-\frac{f^{ \prime}(\theta_{0})^{*}}{f^{*}(\theta_{0})}\right). \tag{16}\]
We apply this to Rutherford scattering, whose scattering amplitude is given by,
\[f_{C}(\theta)=-\frac{\eta}{2k\sin^{2}\frac{\theta}{2}}\,\exp\left[-i\eta\ln \left(\sin^{2}\frac{\theta}{2}\right)+2i\sigma_{0}\right]. \tag{17}\]
Here, \(\eta=Z_{1}Z_{2}e^{2}/\hbar v\) is the Sommerfeld parameter, where \(v\) is the relative velocity, and \(\sigma_{0}=\arg\Gamma(1+i\eta)\) is the \(s\)-wave Coulomb phase shift. For Eq. (17), one finds
\[\frac{f^{\prime}_{C}(\theta)}{f_{C}(\theta)}=-(1+i\eta)\cot\left(\frac{\theta }{2}\right). \tag{18}\]
The peak of the image therefore appears at
\[X=\frac{\eta}{k}\cot\left(\frac{\theta_{0}}{2}\right), \tag{19}\]
that is nothing but the impact parameter for Rutherford scattering.
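One can check Eq. (19) numerically by inserting the Coulomb amplitude (17) into the integral (11) and locating the maximum of \(|\Phi(X)|^{2}\). The short script below (an illustrative check, using the \({}^{16}\)O+\({}^{16}\)O values of \(k\) and \(\eta\) from the main text) does this for a small lens opening:

```python
import numpy as np
from scipy.special import loggamma

k, eta = 1.835, 9.61                  # fm^-1 and Sommerfeld parameter (main text)
sigma0 = loggamma(1 + 1j * eta).imag  # s-wave Coulomb phase shift arg Gamma(1+i eta)

def f_C(theta):
    """Rutherford amplitude, Eq. (17)."""
    s2 = np.sin(theta / 2.0) ** 2
    return -eta / (2 * k * s2) * np.exp(-1j * eta * np.log(s2) + 2j * sigma0)

theta0, dth = np.pi / 2, np.radians(1.0)          # small lens opening
th = np.linspace(theta0 - dth, theta0 + dth, 2001)
X = np.linspace(0.0, 10.0, 1001)
phase = np.exp(1j * k * np.outer(X, th - theta0))
Phi = (phase * f_C(th)).sum(axis=1) * (th[1] - th[0])   # Eq. (11)

print("numerical peak:", X[np.argmax(np.abs(Phi) ** 2)], "fm")
print("Eq. (19)      :", eta / k / np.tan(theta0 / 2), "fm")   # ~ 5.24 fm
```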
### The image of Mott scattering
Let us consider Mott scattering, i.e., scattering of two identical particles, for which the scattering amplitude is given by \(f(\theta)\pm f(\pi-\theta)\). For the component \(f(\pi-\theta)\), the \(X\) dependence of Eq. (1) reads
\[\Phi_{\pi-\theta}(X) \equiv \int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta\theta}d\theta \,e^{ik(\theta-\theta_{0})X}f(\pi-\theta) \tag{20}\] \[= \int_{\pi-\theta_{0}-\Delta\theta}^{\pi-\theta_{0}+\Delta\theta} d\tilde{\theta}\,e^{ik(\pi-\tilde{\theta}-\theta_{0})X}f(\tilde{\theta}), \tag{21}\]
where \(\tilde{\theta}\) is defined as \(\tilde{\theta}=\pi-\theta\). For \(\theta_{0}=\pi/2\), this is equivalent to
\[\Phi_{\pi-\theta}(X) = \int_{\theta_{0}-\Delta\theta}^{\theta_{0}+\Delta\theta}d\tilde{ \theta}\,e^{ik(\theta_{0}-\tilde{\theta})X}f(\tilde{\theta})=\Phi_{\theta}(- X).\]
Thus, the image of Mott scattering is symmetric with respect to \(X=0\), and it therefore has two symmetric peaks at \(X_{\rm peak}\) and \(-X_{\rm peak}\).
The upper panel of Fig. 7 shows the differential cross sections for elastic scattering of \({}^{16}\)O+\({}^{16}\)O at \(E_{\rm c.m.}=8.8\) MeV. This energy is about 1.5 MeV below the Coulomb barrier, and the nuclear effect can be neglected. In fact, the experimental data can be well fitted using the Coulomb scattering amplitudes, \(d\sigma/d\Omega=|f_{C}(\theta)+f_{C}(\pi-\theta)|^{2}\). The contributions of \(f_{C}(\theta)\) and \(f_{C}(\pi-\theta)\)
Figure 7: (Upper panel) The differential cross sections for elastic scattering of \({}^{16}\)O+\({}^{16}\)O at \(E_{\rm c.m.}=8.8\) MeV. The contributions of unsymmetrized scattering amplitudes are also shown by the dashed and the dotted lines. The experimental data are taken from Ref. [2]. (Lower panel) The image of Mott scattering shown in the upper panel. \(\theta_{0}\) and \(\Delta\theta\) are taken to be 90 and 30 degrees, respectively.
are also shown by the dashed and the dotted lines, respectively. The image of Mott scattering is shown in the lower panel. \(\theta_{0}\) and \(\Delta\theta\) are taken to be 90 and 30 degrees, respectively. As we have argued, the image has two symmetric peaks. A comparison with Fig. 1 indicates that the peak at a positive \(X\) corresponds to the contribution of \(f_{C}(\theta)\), while the peak at a negative \(X\) corresponds to the contribution of \(f_{C}(\pi-\theta)\).
|
2309.00072 | Naturalness-motivated composite Higgs model for generating the top
Yukawa coupling | The large top Yukawa coupling results in the top quark contributing
significantly to the quantum correction of the Higgs mass term. Traditionally,
this effect is canceled by the presence of top partners in symmetry-based
models. However, the absence of light top partners poses a challenge to the
Naturalness of these models. In this paper, we study a model based on composite
Higgs with the top Yukawa coupling originating from dimension-six four-fermion
operators. The low cutoff scale of the top quark loop required by the
Naturalness principle can be realized with a light gauge boson $E_\mu$ which
connects the hyperfermions and top quarks. A scalar-less dynamical model with
weakly coupled extended $SU(4)_{EC}$ gauge group is presented. The model
features an $E_\mu$ boson and a $Z'_E$ boson both at the sub-TeV scale, which
lead to a rich phenomenology, especially in the top physics. | Yi Chung | 2023-08-31T18:20:27Z | http://arxiv.org/abs/2309.00072v3 | # A Naturalness motivated Top Yukawa Model for the Composite Higgs
###### Abstract
The top quark leads to the dominant quantum correction to the Higgs quadratic term, which is usually canceled by top partners in traditional symmetry-based models. However, the absence of light top partners starts challenging the Naturalness of these models. In this paper, we study a model based on composite Higgs with the top Yukawa coupling originating from dim-6 four-fermion operators. The low cutoff scale of the top quark loop required by the Naturalness principle can be realized with a light gauge boson \(E_{\mu}\) which connects the hyperfermions and top quarks. A scalar-less dynamical model with weakly coupled extended \(SU(4)_{EC}\) group is presented. The model features a light \(E_{\mu}\) gauge boson and a third-generation-philic \(Z^{\prime}_{E}\) boson, which leads to a rich phenomenology, especially in top quark physics.
## I Introduction
The Standard Model (SM) of particle physics successfully describes all known elementary particles and interactions. At the center of the SM is the mechanism of electroweak symmetry breaking (EWSB), which is responsible for the masses of the SM gauge bosons and fermions. The discovery of the Higgs boson in 2012 [1; 2] filled in the last missing piece of the SM. However, the SM does not address the UV-sensitive nature of the Higgs boson, which is known as the hierarchy problem. The Higgs quadratic term receives divergent radiative corrections from the interactions with SM fields, especially the top quark due to its large Yukawa coupling. The contribution can be derived by calculating the one-loop diagram with the top quark and is given by
\[\Delta m_{H}^{2}|_{\rm top} \sim-i\,2N_{c}\,y_{t}^{2}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{k^{2 }+m_{t}^{2}}{(k^{2}-m_{t}^{2})^{2}}\] \[=-\frac{3}{8\pi^{2}}y_{t}^{2}\left[\Lambda_{t}^{2}-3\,m_{t}^{2} \ln\left(\frac{\Lambda_{t}^{2}}{m_{t}^{2}}\right)+\cdots\right]\, \tag{1}\]
where \(\Lambda_{t}\) is the scale of the top Yukawa coupling.
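To put rough numbers to Eq. (1): with a cutoff \(\Lambda_{t}\) near 1 TeV the quadratic correction stays comparable to the Higgs mass term, while pushing the cutoff to the current top partner bounds does not. The back-of-the-envelope script below is our own illustration (the tuning measure \(\Delta=|\Delta m_{H}^{2}|/(m_{H}^{2}/2)\) and the running value \(y_{t}\approx 0.94\) are common conventions we assume, not results of this paper):

```python
import numpy as np

yt, mt, mH = 0.94, 173.0, 125.0   # GeV; y_t ~ 0.94 near the TeV scale (approx.)

def delta_mH2(cutoff):
    """Top-loop correction of Eq. (1) in GeV^2 (quadratic plus log term)."""
    return -3.0 / (8 * np.pi ** 2) * yt ** 2 * (
        cutoff ** 2 - 3 * mt ** 2 * np.log(cutoff ** 2 / mt ** 2))

for L in (900.0, 1500.0, 3000.0):  # cutoff scenarios in GeV
    tuning = abs(delta_mH2(L)) / (mH ** 2 / 2)
    print(f"Lambda_t = {L:6.0f} GeV: |dmH^2| / (mH^2/2) ~ {tuning:4.1f}")
```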
To avoid the large quadratic corrections, most models invoke a new symmetry such that the corrections cancel in the symmetric limit. New degrees of freedom, known as top partners, are introduced to cancel out the \(\Lambda_{t}^{2}\) term. However, the symmetry cannot be exact, and the difference between the top mass \(m_{t}\) and the top partner mass \(M_{T}\) will reintroduce the correction given by
\[\Delta m_{H}^{2}|_{\rm top}+\Delta m_{H}^{2}|_{\rm top\ partner}\sim-\frac{3}{8 \pi^{2}}y_{t}^{2}M_{T}^{2}. \tag{2}\]
Following the Naturalness principle [3], we expect top partners to show up at the sub-TeV scale to avoid fine-tuning. However, after years of searches, the bounds on the colored top partner mass \(M_{T}\) have reached 1.5 TeV for both scalar partners [4; 5] and fermionic partners [6; 7; 8; 9; 10]. The absence of colored top partners starts challenging the naturalness of this type of model.
In this study, we focus on an alternative scenario [11] where the top Yukawa coupling originated from dim-6 operators with a characteristic scale \(\Lambda_{t}\). If we can have the scale \(\Lambda_{t}\lesssim 1\) TeV, the contribution from the top loop will be under control. The idea has already been realized at the one-loop level in [12] with an elementary Higgs and top quark. In this paper, instead, we consider that the observed Higgs boson is a composite state [13; 14] formed by hyperfermions from a strongly coupled theory.
Generating SM Yukawa couplings in a strongly coupled theory can be traced back to Extended Technicolor (ETC) [15; 16; 17], where SM Yukawa couplings arise from dim-6 four-fermion operators. The scale \(\Lambda_{t}\) is determined by the mass of the new massive gauge bosons, \(\Lambda_{t}\sim M_{ETC}\), which connect the hyperfermions and SM fermions. Models based on modern composite Higgs models have also been studied in [18]. However, for the generic mass \(M_{ETC}\sim g_{E}f_{E}\), the breaking scale \(f_{E}\) is fixed at around the TeV scale by the value of the top Yukawa coupling, and \(g_{E}\), the coupling of the ETC group, is tied to the strong coupling responsible for the hyperfermion condensate, so the mass \(M_{ETC}\) is expected to be heavy on theoretical grounds.
Motivated by the Naturalness principle, we aim at a model with a small \(g_{E}\) such that the scale \(\Lambda_{t}\) can be low. That is, the gauge group that connects hyperfermions and top quarks is weakly coupled and independent of the strong interaction. Moreover, we want to construct a fully dynamical UV model, where the two relevant scales, \(f\) and \(f_{E}\), both come from strong dynamics. We will show how to get all these features in a chiral theory with an extended gauge group. The phenomenology is also presented with a special focus on the top physics.
This paper is organized as follows. In Sec. II, we introduce the basic idea and issues of an ETC-like mechanism and how we are going to solve them. Starting with the extension of the gauge group in Sec. III, we briefly go through the difference between the traditional approach and the new one we pursue. A concrete model is presented in Sec. IV with three relevant mechanisms discussed in detail. The important phenomenology is presented, including the indirect searches in Sec. V and direct searches in Sec. VI. Sec. VII contains our conclusions and outlook.
## II Basic idea and issue of top Yukawa from four-fermion operators
To generate the top Yukawa coupling from dim-6 four-fermion operators, we need to first introduce an extended gauge group \(\mathcal{G}_{E}\) with gauge bosons \(G_{E}^{a}\) and coupling \(g_{E}\), where the top quarks and hyperfermions \(\psi\) are within the same multiplets \(Q\). The generic Lagrangian is given by
\[\mathcal{L}_{\rm E}= g_{E}G_{E,\mu}^{a}(\bar{Q}_{L}\gamma^{\mu}T^{a}Q_{L}+\bar{Q}_{R} \gamma^{\mu}T^{a}Q_{R})\] \[\supset \frac{1}{\sqrt{2}}g_{E}E_{\mu}(\bar{\psi}_{L}\gamma^{\mu}q_{L}+ \bar{\psi}_{R}\gamma^{\mu}t_{R})\,, \tag{3}\]
where \(E_{\mu}\) is the specific boson among \(G_{E}^{a}\) that mediates between the top quarks and hyperfermions. The group \(\mathcal{G}_{E}\) is then broken at the scale \(f_{E}\) down to the SM gauge group \(\mathcal{G}_{SM}\) and the hypercolor group \(\mathcal{G}_{HC}\) (which can be either broken or unbroken)1. After integrating out the massive \(E_{\mu}\) gauge bosons with a mass \(M_{E}\), we get a low-energy effective Lagrangian as
Footnote 1: In this study, we use the term hypercolor for strong interaction, as usually used in modern composite Higgs models, instead of technicolor. Besides the common confining hypercolor group, we also consider the case where the hypercolor is broken when it is strong, which leads to an unconfined strong interaction.
\[\mathcal{L}_{\rm eff} =-\frac{g_{E}^{2}}{2M_{E}^{2}}\left(\bar{q}_{L}\gamma^{\mu}\psi_{ L}\right)\left(\bar{\psi}_{R}\gamma_{\mu}t_{R}\right)+h.c.\] \[\to\frac{g_{E}^{2}}{M_{E}^{2}}\left(\bar{\psi}_{R}\psi_{L}\right) \left(\bar{q}_{L}t_{R}\right)+\cdots\text{(after Fierzing)}. \tag{4}\]
Once hypercolor becomes strongly coupled and condenses the hyperfermions with a breaking scale \(f\), the \(\bar{\psi}_{R}\psi_{L}\) will form a bound state that behaves like the SM Higgs. The top Yukawa coupling is then generated with a value
\[y_{t}v\sim\frac{g_{E}^{2}}{M_{E}^{2}}\langle\bar{\psi_{R}}\psi_{L}\rangle_{HC} \sim\frac{g_{E}^{2}}{M_{E}^{2}}\cdot Y_{S}f^{2}v \tag{5}\]
where the coupling \(Y_{S}\) is the Yukawa coupling from the strong dynamics with an \(O(1)\) value. As \(\mathcal{G}_{E}\) is broken by \(f_{E}\), we expect the relation \(M_{E}\sim g_{E}f_{E}\) and thus
\[y_{t}\sim\left(\frac{f}{f_{E}}\right)^{2}Y_{S}\sim 1\, \tag{6}\]
which fixes the ratio among scales as \(f_{E}\sim O(1)\times f\).
Now we have a rough description of the top Yukawa coupling generated from four-fermion interactions in the composite Higgs model. To achieve a UV completion, several issues need to be addressed.
The first issue is the gauge group \(\mathcal{G}_{E}\), which requires an extension of the SM gauge group to combine hyperfermions and top quarks into the same representation. Moreover, motivated by the Naturalness principle, we want to have a light mediator \(E_{\mu}\). Its mass \(M_{E}\) is given by the product of coupling \(g_{E}\) and the breaking scale \(f_{E}\). As the scale \(f_{E}\) is fixed by the value of the top Yukawa coupling, we aim at a model with a small \(g_{E}\). That is, the gauge group that connects hyperfermions and top quarks should be weakly coupled, which will be further discussed in the next section.
Second, since we aim at a fully dynamical UV model, the two relevant scales, \(f\) and \(f_{E}\), should both come from strong dynamics. The difference between the two scales is the key to explaining the value of top Yuakwa coupling. If \(f=f_{E}\), then \(y_{t}\sim Y_{S}\) which will predict a much heavier top quark as in top condensation models [19; 20; 21; 22; 23]. The mechanism to generate a sequence of scales in a strongly coupled theory is known as tumbling gauge theories [24], which will be applied in the concrete model.
The other common concern about ETC-type models is the flavor constraints. However, our motivation is Naturalness and our goal is to lower the top loop cutoff, so we assume the current mechanism is specific to top quarks and ignore the light fermions at this stage. Then, the main constraints from flavor physics will be from \(B\) meson physics due to the \(b_{L}\) inside \(q_{L}\), which will be discussed in Sec. V.
## III Extend the gauge group
With the SM gauge group \(SU(3)_{C}\times SU(2)_{W}\times U(1)_{Y}\), there are many different ways to extend it to include hyperfermions \(\psi\). In this work, we focus on the cases with extended \(SU(3)_{C}\). Other cases like extended \(SU(2)_{W}\) are also possible and have been studied in ETC models but we will leave them to the future study.
### Traditional extension: \(\mathcal{G}_{HC}\times\mathcal{G}_{SM}\to\mathcal{G}_{E}\)
Traditional approaches following the ETC models usually combine the hypercolor group with one of the SM gauge groups into a larger group. From the top down, the extended group \(\mathcal{G}_{E}\) is broken down to \(\mathcal{G}_{HC}\times\mathcal{G}_{SM}\) at the scale \(f_{E}\), which separates the fermion \(Q\) into the hyperfermions and top quarks.
Following the idea in [18], the hypercolor group \(\mathcal{G}_{HC}=SU(N)_{HC}\) is combined with \(SU(3)_{C}\subset\mathcal{G}_{SM}\) to \(\mathcal{G}_{E}=SU(N+3)_{E}\). The desired fermion content \(Q_{L,R}\) under \(SU(N+3)_{E}\times SU(2)_{W}\) is given by (we ignore the \(U(1)\) in this section for simplicity)
\[Q_{L}=(N+3,2),\quad Q_{R}=(N+3,1). \tag{7}\]
Then, the \(\mathcal{G}_{E}\) group is broken down as
\[SU(N+3)_{E}\to SU(N)_{HC}\times SU(3)_{C} \tag{8}\]
After breaking, The fermions are also separated to (under \(SU(N)_{HC}\times SU(3)_{C}\times SU(2)_{W}\))
\[\psi_{L}=(N,1,2),\quad\psi_{R}=(N,1,1)\] \[q_{L}=(1,3,2),\quad\ t_{R}=(1,3,1). \tag{9}\]
The gauge boson \(E_{\mu}\), which mediates hyperfermions and top quarks, has a quantum number
\[E_{\mu}=(N,3,1)\, \tag{10}\]
which carries both hypercolor and color. Besides, there is also a massive \(Z^{\prime}_{E}\) boson which corresponds to the diagonal \(U(1)_{E}\) subgroup of \(SU(N+3)_{E}\). The generic charges of fermions under this broken \(U(1)_{E}\) are given by
\[\psi_{L},\,\psi_{R}:-1/N,\quad q_{L},\,t_{R}:1/3\, \tag{11}\]
which features a universal charge in the SM sector. This \(Z^{\prime}_{E}\) is the source of dangerous flavor processes such as flavor-changing neutral currents. However, if it is third-generation-philic, the flavor constraints are much weaker, which has been studied in [25; 26].
In this type of extension, we can easily combine \(\mathcal{G}_{HC}\) and \(\mathcal{G}_{SM}\) into \(\mathcal{G}_{E}\) and thus hyperfermions and top quarks into a multiplet \(Q\). The \(E_{\mu}\) boson carries hypercolor so it will form a hypercolor-singlet bound state with other hypercolored particles below \(\Lambda_{HC}\sim 10\) TeV. Therefore, even if it is as light as 1 TeV, there are not going to be new states observed around the TeV scale, which might explain the absence of new particles so far. The only exception is \(Z^{\prime}_{E}\), which can be searched for at the LHC.
However, the \(SU(N)_{HC}\) group is directly separated from the \(SU(N+3)_{E}\) group, so the gauge coupling \(g_{E}\) is the same as the hypercolor coupling \(g_{H}\) above the breaking scale \(f_{E}\). After breaking, the running can separate the two couplings. However, to generate the observed top Yukawa \(y_{t}\sim 1\), the two scales \(f_{E}\) and \(f\) must be close, which also means \(g_{E}\) must be close to \(g_{H}\), which is a strong coupling. Therefore, the resulting \(M_{E}\sim g_{E}f_{E}\) is expected to be very heavy, and the fine-tuning problem from the top loop will not be relieved.
### New extension: \(\mathcal{G}_{HC}\times(\mathcal{G}_{HF}\times\mathcal{G}_{SM}\to\mathcal{G}_{E})\)
The new extension will be the main focus of this study. To avoid a large \(g_{E}\) situation as mentioned, we want to decouple it from the hypercolor coupling \(g_{H}\). As all we need is to have hyperfermions and top quarks in the same representation, the unification of the gauge group is not necessary. One can imagine the combination happens in an orthogonal direction to the hypercolor group such that the couplings \(g_{E}\) and \(g_{H}\) are unrelated. In this case, the coupling \(g_{E}\) is related to one of the SM gauge couplings instead. The gauge group \(\mathcal{G}_{E}\) can be weakly coupled and is broken down to \(\mathcal{G}_{HF}\times\mathcal{G}_{SM}\) at the scale \(f_{E}\), where \(HF\) stands for hyperfermion.
More specifically, we consider the extension of the SM \(SU(3)_{C}\) to \(SU(4)_{EC}\) to include the hyperfermions, where \(EC\) stands for extended color. The fermion content under \(SU(N)_{HC}\times SU(4)_{EC}\times SU(2)_{W}\) is given by
\[Q_{L}=(N,4,2),\quad Q_{R}=(N,4,1)\, \tag{12}\]
After the first breaking, the \(SU(4)_{EC}\) gauge group is broken down to \(SU(3)_{EC}\). The fermion content then becomes (under \(SU(N)_{HC}\times SU(3)_{EC}\times SU(2)_{W}\))
Left-handed (LH): \[(N,3,2),\ (N,1,2)\,, \tag{13}\]
which should include both hyperfermions and top quarks. However, under this setup, all the fermions are charged under \(SU(3)_{HC}\), which is obviously not allowed for a realistic top quark, unless \(SU(N)_{HC}\) is broken and thus unconfined as in Topcolor [23]. The fact that top quarks are charged under hypercolor also restricts the value of \(N\) we can have (unlike the traditional extension) because we cannot introduce exotic degrees of freedom for top quarks. Instead, we can only use the existing quantum numbers of the top quark, such as \(N=3\) used in the Topcolor models, and obtain the SM gauge group as the unbroken subgroup through an additional breaking process \(SU(N)_{HC}\times SU(N)_{ESM}\to SU(N)_{SM}\).
In general, the hypercolor group can be \(SU(3)_{HC}\) (broken down to \(SU(3)_{C}\) in the end), \(SU(2)_{HC}\) (broken down to \(SU(2)_{W}\) in the end), or \(U(1)_{HC}\) (broken down to \(U(1)_{Y}\) in the end). In this work, we focus on the first case with \(N=3\). Therefore, an additional breaking process is required to break \(SU(3)_{HC}\times SU(3)_{EC}\to SU(3)_{C}\), and the fermion content is further separated into (under \(SU(N)_{HC}\times SU(3)_{EC}\times SU(2)_{W}\to SU(3)_{C}\times SU(2)_{W}\))
\[Q_{L}\to(3,3,2)+(3,1,2)\to(6,2)+(\bar{3},2)+(3,2), \tag{14}\] \[Q_{R}\to(3,3,1)+(3,1,1)\to(6,1)+(\bar{3},1)+(3,1), \tag{15}\]
which includes exotic fermions transforming as sextets. For anti-triplets and triplets, though they look similar, they have different strengths of interactions, as the anti-triplet originates from \((3,3)\) with both \(SU(3)\) interactions, but the triplet only has the one from \(SU(3)_{HC}\). This difference is crucial to realize the tilting mechanism and requires the anti-triplets to be hyperfermions and the triplets to be top quarks. Together with the exotic fermions labelled by \(f_{L,R}\), we get
\[Q_{L}\to f_{L}=(6,2),\quad\psi_{L}=(\bar{3},2),\quad q_{L}=(3,2)\, \tag{16}\] \[Q_{R}\to f_{R}=(6,1),\quad\psi_{R}=(\bar{3},1),\quad t_{R}=(3,1). \tag{17}\]
This setup can allow \(\bar{\psi}\psi\) to form the condensate without \(\bar{t}t\) condensate. Such a condition might require some fine-tuning among the couplings as in Topcolor models [23]. But the self-breaking mechanism could fix the strong coupling at the value right above the critical point, which can make the tilting mechanism look natural. More concrete discussions will be presented in the next section.
In this type of extension, we can still combine hyperfermions and top quarks, albeit in a more complicated way with additional exotic fermions. However, now the top quark also undergoes a strong interaction. The \(E_{\mu}\) gauge boson no longer carries hypercolor and is naturally light; it will be a stable heavy particle at the TeV scale. There is still a massive \(Z^{\prime}_{E}\) boson which plays an important role in phenomenology.
## IV A concrete model
In this section, we construct a concrete model based on \(SU(4)_{EC}\) with all the ingredients mentioned above. For the gauge sector, we consider a strongly coupled \(SU(3)_{HC}\) and a weakly coupled \(SU(4)_{EC}\). The overall gauge group is \(\mathcal{G}_{E}=SU(3)_{HC}\times SU(4)_{EC}\times SU(2)_{W}\times U(1)_{X}\)2. We denote the corresponding gauge fields as \(H_{\mu}^{a}\), \(E_{\mu}^{\alpha}\), \(W_{\mu}^{i}\), and \(X_{\mu}\), the gauge couplings as \(g_{H}\), \(g_{E}\), \(g_{W}\), \(g_{X}\), and the generators as \(T^{a}\), \(T^{\alpha}\), \(T^{i}\), \(Y^{\prime}\) with indices \(a=1,...,8\), \(\alpha=1,...,15\), \(i=1,2,3\). The generators are normalized as \(\text{Tr}(T^{A}T^{B})=\frac{1}{2}\delta^{AB}\).
Footnote 2: A similar group structure and breaking pattern has also been studied, known as the ”4321 model” [27], for the purpose of TeV leptoquarks and B-meson anomalies.
The gauge group is spontaneously broken down to the SM gauge group \(\mathcal{G}_{SM}=SU(3)_{C}\times SU(2)_{W}\times U(1)_{Y}\) through the scalar representation \(\Sigma=(\bar{3},4,1,1/24)\), which acquires a vacuum expectation value (VEV) given by
\[\langle\Sigma\rangle=\frac{f_{E}}{\sqrt{2}}\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}. \tag{18}\]
The formation of the \(\Sigma\) field and its VEV can be realized dynamically through the tumbling gauge theory with additional chiral fermion under larger representation, which will be discussed in subsection **A**.
The breaking pattern of \(\mathcal{G}_{E}\rightarrow\mathcal{G}_{SM}\) can be separated into three parts corresponding to the three resulting massive gauge bosons with different tasks:
(1) \(SU(4)_{EC}\to SU(3)_{EC}\times U(1)_{EC}\) breaking introduces the massive \(E_{\mu}\) boson with the mass
\[M_{E}=\frac{1}{2}\,g_{E}f_{E} \tag{19}\]
and the gauge coupling \(g_{E}\). It plays an important role in connecting the hyperfermions with top quarks, which helps generate the top Yukawa coupling. The mass \(M_{E}\) thus serves as the cutoff scale of top loop correction to the Higgs quadratic term.
(2) \(SU(3)_{HC}\times SU(3)_{EC}\to SU(3)_{C}\) breaking leads to a broken \(SU(3)^{\prime}\) and an unbroken \(SU(3)\) expressed as
\[{G^{\prime}}^{a}_{\mu}=\frac{g_{H}H_{\mu}^{a}-g_{E}E_{\mu}^{a}}{\sqrt{g_{H}^{ 2}+g_{E}^{2}}},\quad G_{\mu}^{a}=\frac{g_{E}H_{\mu}^{a}+g_{H}E_{\mu}^{a}}{ \sqrt{g_{H}^{2}+g_{E}^{2}}}\, \tag{20}\]
The broken \(SU(3)^{\prime}\) bosons get the mass
\[M_{G^{\prime}}=\frac{1}{\sqrt{2}}\,\sqrt{g_{H}^{2}+g_{E}^{2}}f_{E} \tag{21}\]
and the gauge coupling \(g_{s}^{\prime}=\sqrt{g_{H}^{2}+g_{E}^{2}}\). It is the mediator of strong interaction and makes the hyperfermions condense, which leads to the subsequent composite Higgs and EWSB. More details are covered in subsection **B**.
The unbroken \(SU(3)\) is just SM color group \(SU(3)_{C}\) with the gauge coupling given by
\[g_{s}=\frac{g_{H}g_{E}}{\sqrt{g_{H}^{2}+g_{E}^{2}}}=1.02\, \tag{22}\]
where we choose the matching value at the scale \(\sim 2\) TeV. The matching then fixes the value \(g_{E}\sim g_{s}=1.02\) assuming \(g_{H}\gg g_{E}\), which is related to the SM coupling and is weak as desired. The mass \(M_{E}\) is then determined, which will be discussed further in subsection **C**.
(3) \(U(1)_{EC}\times U(1)_{X}\to U(1)_{Y}\) breaking similarly leads to a broken \(U(1)^{\prime}\) and an unbroken \(U(1)\) expressed as
\[Z_{E,\mu}^{\prime}=\frac{cg_{X}X_{\mu}-g_{E}E_{\mu}^{15}}{\sqrt{c ^{2}g_{X}^{2}+g_{E}^{2}}},\ B_{\mu}=\frac{g_{E}X_{\mu}+cg_{X}E_{\mu}^{15}}{ \sqrt{c^{2}g_{X}^{2}+g_{E}^{2}}}\, \tag{23}\]
where \(c=1/\sqrt{24}\). The \(Z_{E}^{\prime}\) boson gets the mass
\[M_{Z^{\prime}}=\frac{1}{\sqrt{8}}\,\sqrt{c^{2}g_{X}^{2}+g_{E}^{2}}\,f_{E} \tag{24}\]
and the gauge coupling \(g^{\prime}=\sqrt{c^{2}g_{X}^{2}+g_{E}^{2}}\). It is the lightest new degree of freedom and has a huge impact on phenomenology.
The unbroken \(U(1)\) would be the SM hypercharge with \(Y=c\,T^{15}+X\) where \(T^{15}=1/\sqrt{24}\,\text{diag}(3,-1,-1,-1)\). The gauge coupling is given by
\[g_{Y}=\frac{g_{X}g_{E}}{\sqrt{c^{2}g_{X}^{2}+g_{E}^{2}}}=0.36\, \tag{25}\]
where we choose the matching value at the scale \(\sim 2\) TeV. The matching then fixes the value \(g_{X}\sim g_{Y}=0.36\) because \(g_{E}\sim 1.02\) is much greater than \(cg_{Y}\).
Based on the matching with SM gauge coupling, we get the strengths of new gauge groups within \(\mathcal{G}_{E}\) as
\[g_{E}\sim g_{s}=1.02,\quad g_{X}\sim g_{Y}=0.36. \tag{26}\]
The strong coupling \(g_{H}\) is expected to be right below the critical coupling \(g_{c}\sim 5.1\), which will be explained in subsection **B**.
Next, we discuss the fermion content. The desired fermions under \(SU(3)_{HC}\times SU(4)_{EC}\times SU(2)_{W}\times U(1)_{X}\) are given by
\[Q_{L}=(3,4,2,1/24),\quad Q_{R}=(3,4,1,13/24)\, \tag{27}\]
After the symmetry breaking, the fermions are decomposed as (under \(\mathcal{G}_{SM}=SU(3)_{C}\times SU(2)_{W}\times U(1)_{Y}\))
\[Q_{L}\rightarrow(6,2)_{0}+(\bar{3},2)_{0}+(3,2)_{\frac{1}{4}}\, \tag{28}\] \[Q_{R}\rightarrow(6,1)_{\frac{1}{2}}+(3,1)_{\frac{1}{2}}+(3,1)_{ \frac{3}{2}}. \tag{29}\]
Each of \(Q_{L,R}\) is separated into three parts: exotic fermions \(f_{L,R}\), hyperfermions \(\psi_{L,R}\), and top quarks \(q_{L},t_{R}\), as
\[f_{L}=(6,2)_{0},\ \psi_{L}=(\bar{3},2)_{0},\ q_{L}=(3,2)_{\frac{1}{4}}\, \tag{30}\] \[f_{R}=(6,1)_{\frac{1}{2}},\ \psi_{R}=(\bar{3},1)_{\frac{1}{2}},\ t_{R}=(3,1)_{ \frac{3}{2}}. \tag{31}\]
In the following three subsections, we will discuss the roles of each fermion and all the relevant mechanisms from the top down in order of energy scales as
**A.** The \({\cal G}_{E}\to{\cal G}_{SM}\) breaking at the scale \(f_{E}\sim 1.7\) TeV through tumbling mechanism with exotic fermions
**B.** Composite Higgs formation at the scale \(f\sim 1\) TeV through hyperfermion condensation
**C.** Generation of top Yukawa coupling at the scale \(M_{E}\) through integrating out the \(E_{\mu}\) boson
Then we summarize the overall spectrum and properties of new particles in subsection **D**.
### Tumbling mechanism with exotic fermions
In this model, the first symmetry breaking required is \(SU(3)_{HC}\times SU(4)_{EC}\to SU(3)_{C}\), which is similar to the 4321 model [27]. Instead of using additional scalars with nonzero VEVs to realize the breaking, we want to construct a dynamical model with the breaking through the \(SU(3)_{HC}\) strong interaction itself. Such a self-breaking mechanism is known as a ”Tumbling” gauge theory [24] and has been used in BSM model building [28; 29].
The self-breaking of a strong \(SU(3)\) gauge group has already been studied in [30; 31], and the desired breaking is possible in a chiral theory with fermions in both the triplet **3** and sextet **6** representations. Since we already have LH **3**, we only need to add an additional RH **6**. With fermions under \(\mathcal{G}_{E}\) given by
\[Q_{L}=(3,4,2,1/24)\,\quad F_{R}=(6,1,2,0), \tag{32}\]
the most attractive channel under \(SU(3)_{HC}\) is RH **6** combined with some of LH **3** to form the condensate. The \(SU(3)_{HC}\) will be broken down to an \(SU(3)\) symmetry which is the diagonal subgroup of \(SU(3)_{HC}\times SU(3)_{G}\), where \(SU(3)_{G}\) is a subgroup of the global symmetry of **3**. The global symmetry of **3** under our setup will be the \(SU(4)_{EC}\times U(1)_{X}\) gauge symmetry. The \(SU(2)_{W}\) part is directly contracted so it does not play any role here. The condensate, \(\bar{F}_{R}Q_{L}\), is formed with exactly the same quantum numbers as the scalar \(\Sigma=(\bar{3},4,1,1/24)\) and with the desired VEV structure shown in Eq. (18). The scale \(f_{E}\) is determined by the strength of the \(\bar{\bf 6}\,{\bf 3}\) condensate, and the coupling \(g_{H}\) is fixed at the corresponding value.
The VEV not only breaks \(SU(3)_{HC}\times SU(4)_{EC}\times U(1)_{X}\) to \(SU(3)_{C}\times U(1)_{Y}\) with massive \(E_{\mu}\), \(G^{\prime}\) and \(Z^{\prime}_{E}\) but also gives the Dirac masses to the fermion sextet. The VEV mixes the \(F_{R}\) with the exotic fermion \(f_{L}\) in Eq. (30). We then get the mass term as \(M_{F}\bar{F}_{R}f_{L}\) with \(M_{F}\sim Y_{S}f_{E}\), where the Yukawa coupling \(Y_{S}\) comes from the strong dynamics and should have a large value. With the assistance of the tumbling mechanism, now we have a dynamical origin for the breaking pattern and also get rid of part of the dangerous exotic fermions as they are much heavier and out of reach of LHC searches. Similarly, we can introduce an additional LH sextet \(F_{L}\) to generate Dirac mass with \(f_{R}\). However, an additional mechanism is required to forbid the direct condensation among the two sextets \(F_{L}\) and \(F_{R}\), which is more attractive, such as a strong repulsive \(U(1)\) force.
### CHM from hyperfermion condensation
After the first breaking, the strong \(SU(3)_{HC}\) is broken and the fermion sextets become massive. The next most attractive channel is RH **3** combined with LH **3**, whose strength of attraction is only slightly below that of the first one [28; 29]. Though \(SU(3)_{HC}\) is already broken and the coupling \(g_{H}\) is fixed at the value that triggers the \(\bar{F}_{R}Q_{L}=\bar{\bf 6}\,{\bf 3}\) condensation, we assume the \(\bar{\psi}_{R}\psi_{L}=\bar{\bf 3}\,{\bf 3}\) condensate can still happen with an assist from the \(SU(3)_{EC}\) interaction.
Since the strong gauge group is broken, we can describe it by the Nambu-Jona-Lasinio (NJL) model [32; 33]. The critical coupling for \(\bar{\bf 3}\,{\bf 3}\) condensation is
\[g_{c}=\sqrt{8\pi^{2}/3}\sim 5.1. \tag{33}\]
We claim that after the first breaking, the coupling \(g_{H}\) is fixed at the value right below \(g_{c}\) as the first attractive channel with \(\bar{\bf 6}\,{\bf 3}\) has a smaller critical coupling.
Combining with the \(SU(3)_{EC}\) interaction, which only applies to hyperfermions but not top quarks, we claim the following relations among the couplings are achieved
\[g_{\psi}^{2}\sim g_{H}^{2}+g_{E}^{2}>g_{c}^{2}\,,\quad g_{t}^{2}\sim g_{H}^{2} <g_{c}^{2}\, \tag{34}\]
such that the interaction is strong enough to form \(\bar{\psi}\psi\) condensate for composite Higgs without \(\bar{t}t\) condensate.
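The condition (34) can be made quantitative. With \(g_{c}=\sqrt{8\pi^{2}/3}\approx 5.13\) and \(g_{E}\approx 1.02\), the window for \(g_{H}\) in which hyperfermions condense while top quarks do not is narrow but nonempty, as the short check below illustrates (the chosen \(g_{H}\) is an illustrative value; that \(g_{H}\) actually lands in this window is a dynamical assumption of the model):

```python
import numpy as np

g_c = np.sqrt(8 * np.pi ** 2 / 3)   # NJL critical coupling, Eq. (33): ~ 5.13
g_E = 1.02                           # matched to g_s at ~2 TeV, Eq. (22)

# Eq. (34): need g_H^2 + g_E^2 > g_c^2 (hyperfermions condense)
#           while g_H^2 < g_c^2      (top quarks do not)
g_H_min = np.sqrt(g_c ** 2 - g_E ** 2)
print(f"tilting window: {g_H_min:.3f} < g_H < {g_c:.3f}")

g_H = 5.08                           # illustrative value just below g_c
print("hyperfermions condense:", g_H ** 2 + g_E ** 2 > g_c ** 2)   # True
print("top quarks do not     :", g_H ** 2 < g_c ** 2)              # True
```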
In the NJL model, we can also estimate the breaking scale of the \(\bar{\psi}\psi\) condensate. Generically, the breaking scale in the NJL model is close to the scale of the broken strong gauge group, i.e., \(f\sim f_{E}\), unless we have \(g_{\psi}\sim g_{c}\). However, as we have already shown how the coupling \(g_{\psi}\) can naturally be close to the critical coupling \(g_{c}\) in our model, we can then get the desired hierarchy \(f<f_{E}\). The difference thus determines the value of \(y_{t}\) in the model.
The details of the CHM sector are beyond the scope of this study, as we want to focus on the top Yukawa generation, so we will only make a few claims on the possible realization 3. Following general CHMs, we need the \(\bar{\psi}\psi\) condensate to break the global symmetry at the CHM scale \(f\) and introduce the Higgs as a pNGB of the coset. Because we have \(N_{HC}=3\) with hyperfermions in complex representations, the minimal choice of the FCHMs [34; 35] is the one with 4 Dirac fermions in the fundamental representation, which results in an \(SU(4)\times SU(4)/SU(4)\) FCHM. The coset has a total of 15 pNGBs, including two Higgs doublets, and a dedicated study of this model can be found in [36; 37].
Footnote 3: The only effort we make is to have hyperfermions with hypercharge required from FCHMs such that they can get the EW preserving condensate, which is the main difference between the composite Higgs model and the technicolor model.
### Top Yukawa from the \(E_{\mu}\) boson
The top Yukawa model we construct with an extended gauge group \(SU(4)_{EC}\) allows us to introduce the top Yukawa coupling in exactly the way we describe in Sec. II. Now with a concrete model, we can further estimate the required values and set up our benchmark.
With the extended gauge group \(SU(4)_{EC}\) broken at the scale \(f_{E}\), the \(E_{\mu}\) gauge boson which connects the hyperfermions and top quarks acquires a mass \(M_{E}\). The top Yukawa coupling is generated after the composite Higgs is formed by the \(\bar{\psi}\psi\) condensate and the massive \(E_{\mu}\) boson is integrated out. The value is given by
\[y_{t}\sim\frac{1}{v}\frac{g_{E}^{2}}{M_{E}^{2}}\langle\bar{\psi_{R}}\psi_{L} \rangle_{HC}\sim\left(\frac{f}{f_{E}}\right)^{2}Y_{S}\, \tag{35}\]
where \(Y_{S}\) is the Yukawa coupling from the strong interaction among hyperfermions. In the NJL model, \(Y_{S}\) can be estimated as
\[Y_{S}\sim\frac{4\pi}{\sqrt{N_{HC}\,\ln(\Lambda^{2}/M_{\psi}^{2})}}\, \tag{36}\]
where \(\Lambda\) is the cutoff of the theory and \(M_{\psi}\) is the dynamical mass of hyperfermions. In a strongly coupled theory, \(Y_{S}\) is expected to be \(3-4\). In our case, as we have additional splitting between \(f_{E}\) and \(f\) which might enhance the logarithmic term, we take the lower value \(Y_{S}=3\) for our numerical study.
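To make the estimate concrete, the following minimal Python sketch evaluates Eq. (36) for a few illustrative cutoff-to-mass ratios; the ratios themselves are assumptions chosen for illustration, not values fixed by the model.

```python
import math

# Hedged numerical check of Eq. (36): Y_S ~ 4*pi / sqrt(N_HC * ln(Lambda^2 / M_psi^2)).
N_HC = 3
for ratio in (5.0, 10.0, 20.0):  # assumed values of Lambda / M_psi, illustration only
    Y_S = 4 * math.pi / math.sqrt(N_HC * math.log(ratio**2))
    print(f"Lambda/M_psi = {ratio:4.1f}  ->  Y_S = {Y_S:.2f}")
# Output: Y_S ~ 4.0, 3.4, 3.0 -- consistent with the expected range 3-4,
# and with the benchmark choice Y_S = 3 for a larger logarithm.
```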
To generate the observed top Yukawa \(y_{t}\sim 1\), the scale must be
\[f_{E}\sim\sqrt{Y_{S}}\times f\sim 1.7\times f. \tag{37}\]
Setting \(f=1\) TeV as our benchmark, we get \(f_{E}\sim 1.7\) TeV. Next, we can also derive the mass of the \(E_{\mu}\) gauge boson. With \(g_{E}\sim 1.02\) and \(f_{E}\sim 1.7\) TeV, the mass is given by
\[M_{E}=\frac{1}{2}\,g_{E}f_{E}\sim 0.9\ \text{TeV}\, \tag{38}\]
which is the most important quantity in our model because it serves as the cutoff of the top loop. From the weakly coupled extended gauge group \(SU(4)_{EC}\), we thus get a naturally light cutoff for the top loop contribution, which can relieve the fine-tuning problem and serve as a good alternative to the top partner solution.
### The overall spectrum
Before moving on to the phenomenology sections, we briefly summarize all the new particles we introduce and the overall spectrum. Starting with the massive gauge bosons, we have the broken \(SU(3)^{\prime}\) bosons \(G_{\mu}^{\prime}\), a color octet (colorons), with mass given by
\[M_{G^{\prime}}=\frac{1}{\sqrt{2}}\,\sqrt{g_{H}^{2}+g_{E}^{2}}f_{E}\sim 6\ \text{TeV}. \tag{39}\]
Next, the \(E_{\mu}\) gauge boson, with quantum numbers \((3,1,-1/6)\) under \(\mathcal{G}_{SM}\), is much lighter, with mass
\[M_{E}=\frac{1}{2}\,g_{E}f_{E}\sim 0.9\ \text{TeV}. \tag{40}\]
Last, there is a massive neutral boson \(Z_{E}^{\prime}\) with mass
\[M_{Z^{\prime}}=\frac{1}{\sqrt{8}}\,\sqrt{c^{2}g_{X}^{2}+g_{E}^{2}}\,f_{E}\sim 0.6\ \text{TeV}\, \tag{41}\]
which is the lightest new particle. As \(g_{E}\gg cg_{X}\), the couplings between \(Z_{E}^{\prime}\) and the fermions are mainly determined by the \(U(1)_{EC}\) part, with coupling \(g_{E}\) and fermion charges given by
\[q_{L},\,t_{R}:\frac{3}{\sqrt{24}}\sim 0.6,\quad\psi_{L,R},\,f_{L,R}:\frac{-1}{ \sqrt{24}}\sim-0.2\, \tag{42}\]
Besides bosons, we have some new fermions at the TeV scale. Because the \(SU(3)_{HC}\) is broken, these fermions are unconfined and can be searched for at the LHC. First, we have the color-sextet Dirac fermions \(F\) with quantum numbers \((6,2,0)\) and \((6,1,1/2)\), which get a dynamical mass at the breaking scale \(f_{E}\) with 4
Footnote 4: Here we still use \(Y_{S}=3\) for convenience. However, for a sextet fermion, due to a stronger interaction, the coupling \(Y_{S}\) should be greater and the fermions \(F\) should be heavier.
\[M_{F}\sim Y_{S}f_{E}\sim 5\ \text{TeV}. \tag{43}\]
Next, the hyperfermions \(\psi\) are also unconfined. They are also Dirac fermions, with quantum numbers \((\bar{3},2,0)\) and \((\bar{3},1,1/2)\). Their mass is lighter, as it comes from the lower breaking scale \(f\):
\[M_{\psi}\sim Y_{S}f\sim 3\ \text{TeV}. \tag{44}\]
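As a numerical cross-check of Eqs. (37)-(44), the short Python sketch below reproduces the quoted benchmark spectrum; the value \(g_{H}\sim 5\) is an assumption read off from the coloron discussion in Sec. VI, and all outputs are rough estimates.

```python
import math

f, Y_S, g_E = 1.0, 3.0, 1.02   # benchmark inputs [TeV] and couplings from the text
g_H = 5.0                      # assumed hypercolor coupling (g'_s ~ g_H ~ 5, Sec. VI)
cgX = 0.0                      # c*g_X is negligible since g_E >> c*g_X

f_E   = math.sqrt(Y_S) * f                               # Eq. (37): ~1.7 TeV
M_Gp  = math.sqrt(g_H**2 + g_E**2) / math.sqrt(2) * f_E  # Eq. (39): ~6 TeV
M_E   = 0.5 * g_E * f_E                                  # Eq. (40): ~0.9 TeV
M_Zp  = math.sqrt(cgX**2 + g_E**2) / math.sqrt(8) * f_E  # Eq. (41): ~0.6 TeV
M_F   = Y_S * f_E                                        # Eq. (43): ~5 TeV
M_psi = Y_S * f                                          # Eq. (44): ~3 TeV

for name, m in [("f_E", f_E), ("M_G'", M_Gp), ("M_E", M_E),
                ("M_Z'", M_Zp), ("M_F", M_F), ("M_psi", M_psi)]:
    print(f"{name:6s} ~ {m:4.2f} TeV")
```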
## V Indirect Searches
Since the goal of the whole study is to generate the top Yukawa coupling, we start with the discussion of top physics. The main effect comes from the dim-6 nature of the top Yukawa coupling, which has already been discussed in [11; 12], so in this paper we will focus on the benchmark we use and some new analyses.
### Higgs coupling measurements
Having the top Yukawa from dim-6 operators in general will not change its value at zero momentum. However, a deviation is still expected due to the Goldstone nature of the composite Higgs. The measurements of the top Yukawa coupling, as well as of other Higgs couplings, are a direct test of misalignment, which is the key mechanism
in CHMs. Combining all the Higgs coupling measurements, we can get a constraint on the CHM scale \(f\). Assuming a simplified form with \(\kappa_{V}=\kappa_{f}=\sqrt{1-v^{2}/f^{2}}\) for the deviations of the Higgs couplings, recent measurements by ATLAS and CMS with Run 2 data [38; 39] put a constraint of \(f>1.1\) TeV, which is slightly above our benchmark of \(f=1\) TeV. The constraint can be relieved if we go beyond the simplified form.
### Running Top Yukawa
The dim-6 origin of the top Yukawa coupling will lead to a nontrivial form factor on the top-Higgs vertex. Such momentum-dependence of the top Yukawa coupling at high scales could be measured in the tails of momentum distributions in processes such as \(t\bar{t}h\) production [40; 41; 42; 43]. However, it will require precise measurement of \(t\bar{t}h\) differential cross section, which suffers from both the small \(t\bar{t}h\) cross section and the complexity of final states. The current measurement has not yet reached the desired sensitivity but could be done with new data at the HL-LHC.
### Running Top mass
Tests of the dim-6 top Yukawa can also be done by measuring the running of the top quark mass. The non-trivial running top mass at the high scale will affect the \(t\bar{t}\) differential cross section. Compared to the \(t\bar{t}h\) channel, the \(t\bar{t}\) channel has a larger cross section, which could provide better sensitivity. The measurement has been done by the CMS collaboration using part of Run 2 data with an integrated luminosity of \(35.9\) fb\({}^{-1}\)[44]. The result has been interpreted in [45] as the top mass running up to \(0.5\) TeV as shown in Fig. 1.
We assume a generic form of the top mass running,
\[m_{t}(\mu)=m_{t,SM}(\mu)\left(\frac{\Lambda_{t}^{2}}{\mu^{2}+\Lambda_{t}^{2}} \right)\, \tag{45}\]
where \(\Lambda_{t}=M_{E}\) in our top Yukawa model. We can then get a bound from the current data of \(M_{E}\gtrsim 700\) GeV. With more data coming out, we expect the relevant parameter space to be fully explored in the HL-LHC era.
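A minimal numerical sketch of the suppression factor in Eq. (45) makes the reach explicit; the probe scales below are illustrative.

```python
def mass_ratio(mu, Lambda_t):
    """Form-factor suppression m_t(mu) / m_t,SM(mu) from Eq. (45)."""
    return Lambda_t**2 / (mu**2 + Lambda_t**2)

for M_E in (0.7, 0.9):            # current bound vs. benchmark cutoff [TeV]
    for mu in (0.25, 0.5, 1.0):   # illustrative probe scales [TeV]
        print(f"M_E = {M_E} TeV, mu = {mu:4.2f} TeV -> "
              f"m_t/m_t,SM = {mass_ratio(mu, M_E):.2f}")
```

For the benchmark \(M_{E}=0.9\) TeV, the top mass is suppressed to about \(76\%\) of its SM-running value already at \(\mu=0.5\) TeV, which is the kind of deviation the \(t\bar{t}\) differential measurement probes.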
### Four top quarks cross section
The model also comes with new bosons interacting with top quarks, including a massive neutral boson \(Z_{E}^{\prime}\) and colorons \(G^{\prime}\). Both of them will introduce additional contributions to the four top-quark cross section. Due to the heaviness of top quarks, this measurement is like a precision test of a rare process. In the SM, the cross section is derived as [46]
\[\sigma^{\rm SM}_{t\bar{t}t\bar{t}}=13.4^{+1.0}_{-1.8}\ {\rm fb}. \tag{46}\]
Measurements using different final states have been performed by both ATLAS [47] and CMS [48] with LHC Run 2 data. The cross sections are measured as
\[\sigma^{\rm ATLAS}_{t\bar{t}t\bar{t}}=22.5^{+6.6}_{-5.5}\ {\rm fb},\quad \sigma^{\rm CMS}_{t\bar{t}t\bar{t}}=17.9^{+4.4}_{-4.1}\ {\rm fb}. \tag{47}\]
where ATLAS gets a central value of about \(1.7\) times the SM prediction while CMS gets a value closer to the SM prediction but still higher.
Both collaborations have seen evidence for the simultaneous production of four top quarks and a cross section slightly larger compared to the SM prediction. The bound on the cross section at \(95\%\) CL level is given by
\[\sigma_{t\bar{t}t\bar{t}}<38\,(27)\ {\rm fb}\quad\text{for ATLAS (CMS)}, \tag{48}\]
which constrains the additional contributions to the four-top cross section from the \(Z^{\prime}_{E}\) and \(G^{\prime}\) bosons in our model.
### Flavor constraints
Besides the four-top-quark cross section, the same four-quark operators will also affect light-quark physics through quark mixing, which might lead to dangerous flavor-changing neutral currents. Assuming that the mixing angles satisfy \(\theta_{23}\gg\theta_{13}\), analogous to the CKM matrix, the strongest constraint among all the processes comes from \(B_{s}-\bar{B}_{s}\) mixing, which involves both the second- and third-generation down-type quarks. The contribution comes from the operator
\[\Delta\mathcal{L}_{B_{s}}=C_{sb}(\bar{s}_{L}\gamma_{\mu}b_{L})(\bar{s}_{L} \gamma_{\mu}b_{L}). \tag{51}\]
Following the calculation in [52], we can derive the new-physics contribution to the mass difference \(\Delta M_{s}\). Comparing the measurement of the mixing parameter [53] to the SM prediction from sum-rule calculations [54], we get the bound on the coefficient of the operator as
\[|C_{sb}|\approx\frac{1}{2}\left(\frac{g_{V}\theta_{sb}}{M_{V}(\text{TeV})} \right)^{2}\leq\left(\frac{1}{274}\right)^{2} \tag{52}\]
for a top-philic color-singlet vector boson \(V\), where the angle \(\theta_{sb}\) is the mixing between the left-handed strange and bottom quarks. In our benchmark, \(g_{V}/M_{V}(\text{TeV})=1\), so we get the strong constraint \(\theta_{sb}<0.005\). The bound can be relieved with some model-building effort to reduce the charge of \(q_{L}\) under \(U(1)_{EC}\). For example, if we switch the assignment of \(q_{L}\) and \(\psi_{L}\) inside \(Q_{L}\), the charge will be three times weaker, and so is the bound.
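The quoted bound follows directly from inverting Eq. (52); a minimal check, under the benchmark assumption \(g_{V}/M_{V}(\text{TeV})=1\):

```python
import math

bound = (1 / 274)**2                         # RHS of Eq. (52)
g_over_M = 1.0                               # benchmark: g_V / M_V[TeV] = 1
theta_max = math.sqrt(2 * bound) / g_over_M  # solve (1/2)(g_V*theta/M_V)^2 <= bound
print(f"theta_sb < {theta_max:.4f}")         # ~0.0052, the quoted theta_sb < 0.005
print(f"weaker-charge variant: theta_sb < {3 * theta_max:.4f}")  # 3x weaker charge
```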
### Electroweak precision tests
Precise measurements from the electroweak sector usually put strong constraints on the TeV scale new physics, especially the \(T\) parameter and \(Zb\bar{b}\) coupling. Both of them measure the violation of \(SU(2)_{R}\) symmetry, which is related to the detail of bottom Yukawa coupling generation. Since we only care about the top Yukawa generation, the topic is beyond the scope of this study and relies on a more complicated UV complete model. We leave such a custodial symmetric UV completion and the discussion of corresponding constraints to a future study.
## VI Direct Searches
There are many new particles in this top Yukawa model for the composite Higgs. Some of them are composite states from the CHM sector but we will not discuss them. Instead, we would like to focus on the new particle due to the extension of the gauge group, including the new gauge bosons and fermions discussed in Sec. IV.4.
#### vi.1 The \(Z^{\prime}_{E}\) boson
We start from the lightest particle in the spectrum: the massive neutral boson \(Z^{\prime}_{E}\) with mass \(M_{Z^{\prime}}\sim 600\) GeV. If the charge assignment follows only Eq. (42), the \(Z^{\prime}_{E}\) boson couples only to the top and bottom quarks among the SM fermions. The dominant production will then be through \(b\bar{b}\) fusion. The branching ratios will be
\[\text{Br}(t\bar{t})=2\times\text{Br}(b\bar{b})=66\% \tag{53}\]
as \(q_{L}=(t_{L},b_{L})\) and \(t_{R}\) carry the same charge. Under this assumption, there are only two decay channels, with final states \(t\bar{t}\) and \(b\bar{b}\). However, the current direct searches for both the \(t\bar{t}\)[55] and \(b\bar{b}\)[56] final states have no access to \(M_{Z^{\prime}}\) around \(600\) GeV, due to the heaviness of the top quark and \(b\)-tagging issues.
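The quoted branching ratios follow from simple charge counting, as the toy check below shows (phase-space suppression of the heavy \(t\bar{t}\) final state is neglected here, which is an approximation):

```python
q = 3 / 24**0.5          # common U(1)_EC charge of q_L and t_R, Eq. (42)
gamma_tt = 2 * q**2      # both t_L and t_R chiralities couple
gamma_bb = q**2          # only b_L couples (no RH bottom is charged here)
total = gamma_tt + gamma_bb
print(f"Br(tt) = {gamma_tt/total:.0%}, Br(bb) = {gamma_bb/total:.0%}")  # 67% / 33%
```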
Therefore, a direct search for \(Z^{\prime}_{E}\) is only possible with lepton final states, but this requires an additional setup. Assuming that, in a more realistic model, the \(\tau\) lepton is also charged under \(U(1)^{\prime}\), the most promising channel becomes the process \(b\bar{b}\to Z^{\prime}_{E}\to\tau\tau\), which covers the sub-TeV regime. The current searches [57; 58] for \(M_{Z^{\prime}}=600\) GeV already require the cross section to be lower than \(20\) fb, which constrains the coupling of \(Z^{\prime}_{E}\) to \(\tau\) leptons.
#### vi.2 The \(E_{\mu}\) gauge boson
Next, the \(E_{\mu}\) boson is also at the sub-TeV scale, with \(M_{E}\sim 900\) GeV in our benchmark. The most important feature of \(E_{\mu}\) is that it is stable! Since the hyperfermions get masses at the few-TeV scale, without additional assumptions there are no allowed decay channels for a single \(E_{\mu}\) boson. This is a direct consequence of having a light mediator in an ETC-type model.
Under our extended gauge group, the \(E_{\mu}\) is colored, so it gets a large cross section from the pair-production process at the LHC. Although a single \(E_{\mu}\) boson is stable, a pair of \(E_{\mu}\) bosons is another story. For \(pp\to E_{\mu}^{+}E_{\mu}^{-}\) at the LHC, both of them can decay to a top/bottom quark and an off-shell hyperfermion. In general, the off-shell hyperfermions cannot decay to lighter on-shell final states, which is why a single \(E_{\mu}\) boson is stable. However, the two off-shell hyperfermions from \(E_{\mu}^{+}\) and \(E_{\mu}^{-}\) can form a deeply bound state, which is just the composite Higgs in our model. Therefore, we expect that several decay channels are allowed, including \(E_{\mu}^{+}E_{\mu}^{-}\to t\bar{t}h\,/\,t\bar{t}Z\,/\,tbW\). Due to this feature, a direct resonance search is impossible. Instead, we might need precision measurements of the cross sections of the possible final states.
Strong constraints could arise from cosmology, because the \(E_{\mu}\) boson might introduce an unacceptable relic abundance. Since the \(E_{\mu}\) boson is stable and colored, it will form heavy color-neutral bound states with other colored particles through QCD interactions, which behave like massive stable charged particles. The constraints on stable charged particles have been studied [59; 60] and mainly depend on the thermal production/annihilation rate. Since the \(E_{\mu}\) boson is colored, the relic abundance is lower compared to purely charged particles [61]. However, it also relies on the details of cosmological evolution, such as reheating, so in the end we only refer to the searches from the LHC.
#### vi.3 The \(G^{\prime}\) boson (Coloron)
Compared to the \(Z^{\prime}_{E}\) boson and \(E_{\mu}\) boson, the coloron is much heavier with the mass \(\sim 6\) TeV. As a color-octet, we expect a large cross section even though it is heavy. If it only couples to the top and bottom quarks like the \(Z^{\prime}_{E}\) boson, the decay channels will also be dominated by \(t\bar{t}\) and \(b\bar{b}\) final states. However, due to the strong coupling \(g^{\prime}_{s}\sim g_{H}\sim 5\), we expect the coloron to be a very broad resonance, which will be hard to search for.
#### vi.4 Heavy fermions
Since the strong \(SU(3)_{HC}\) is broken and no longer confining, new fermions, even those charged under hypercolor, are able to propagate freely after being produced. There are two types of heavy fermions. The first are the color-sextet Dirac fermions \(F\) with quantum numbers \((6,2,0)\) and \((6,1,1/2)\), which get a dynamical mass at the breaking scale \(f_{E}\), with \(M_{F}\sim 5\) TeV. Next, the hyperfermions \(\psi\) are also Dirac fermions, with quantum numbers \((3,2,0)\) and \((3,1,1/2)\), and have a lighter mass \(M_{\psi}=3\) TeV. Both of them can be pair-produced at the LHC and decay through a \(E_{\mu}\) boson plus a top/bottom quark. Again, the pair of \(E_{\mu}\) bosons will then decay to two more top/bottom quarks together with a Higgs/W/Z boson.
## VII Conclusion and outlook
In this paper, we study a top Yukawa model motivated by the naturalness principle, i.e., a light cutoff for the top quark loop. We construct a composite Higgs model where the top Yukawa coupling arises from four-fermion interactions through an ETC-like mechanism. Different from the traditional extension, we extend the gauge group in a direction independent of the strong interaction. In this way, the gauge coupling \(g_{E}\) can be weak and the mediator \(E_{\mu}\), which plays the role of the top loop cutoff, can be naturally light, which relieves the top loop contribution.
A concrete model with \(\mathcal{G}_{E}=SU(3)_{HC}\times SU(4)_{EC}\times SU(2)_{W}\times U(1)_{X}\) is discussed in detail. The breaking of \(\mathcal{G}_{E}\rightarrow\mathcal{G}_{SM}\) is realized dynamically through the tumbling mechanism with a chiral fermion content. We also show that, with this fermion content, the hyperfermions can condense without the dangerous top quark condensation, due to the tilting mechanism. Most importantly, the top Yukawa coupling is generated through a light mediator - the \(E_{\mu}\) gauge boson from the weakly coupled \(SU(4)_{EC}\).
The rich phenomenology in top physics is discussed, where the \(t\bar{t}\) differential cross section could provide important hints. The model also features two new sub-TeV particles which have important impacts at the LHC. One is a third-generation-philic \(Z^{\prime}_{E}\) boson, the lightest state in the spectrum, which will enhance the \(t\bar{t}t\bar{t}\) cross section. The other is the \(E_{\mu}\) gauge boson, the cutoff of the top loop, which will affect the \(t\bar{t}h\), \(t\bar{t}Z\), and \(tbW\) final states.
This study aims at the model building in a different direction compared to the traditional model. Our attempt only focuses on the gauge group extension in its simplest way, which might not be realistic considering that we ignore the bottom quarks and other light fermions. We expect this extension can be applied to other flavor-safe setups, such as partial compositeness [62]. As in fundamental partial compositeness [63; 64], the mixing should also arise from dim-6 four-fermion operators. With assistance from our method, the top partners no longer need to be light and can escape from the LHC direct searches without worsening the fine-tuning problem because now the top loop contribution is controlled by the light \(E_{\mu}\) gauge boson [65].
Also, the details of the composite Higgs sector are left aside to avoid distracting from our goal. Because of the \(SU(3)_{HC}\) hypercolor group, a large coset is expected. If we want to stick to a small coset, such as the \(SU(4)/Sp(4)\) fundamental composite Higgs model with only a Higgs doublet and a real singlet [34; 35], we need hyperfermions in pseudo-real representations of the hypercolor group. To realize the idea, we can have \(SU(2)_{HC}\) with hyperfermions in fundamental representations, which is possible if the \(SU(2)_{HC}\) is broken down to the SM \(SU(2)_{W}\) in the end. For this scenario, we start with a strongly coupled \(SU(2)_{HC}\) and a weakly coupled \(SU(3)_{ESM}\). With a certain fermion content, we could have the symmetry breaking \(SU(3)_{ESM}\times SU(2)_{HC}\to SU(2)_{W}\) as desired. The concrete construction is left to a future study.
Together with [11; 12], we hope to raise some interest in the modified top Yukawa running scenario compared to the top partner solutions. As the constraints on the top partner mass become higher and require more fine-tuning, the measurements on top physics, on the other hand, are reaching higher precision and providing many intriguing results, which might reveal the mysterious relation between top quarks and Higgs bosons.
###### Acknowledgements.
I thank Andreas Bally and Florian Goertz for their collaboration in the early stages of this work. I am also grateful to Hsin-Chia Cheng, David Marzocca, Markus Luty, and Alvaro Pastor-Gutierrez for useful discussions. |
2309.06365 | Cited Text Spans for Citation Text Generation | An automatic citation generation system aims to concisely and accurately
describe the relationship between two scientific articles. To do so, such a
system must ground its outputs to the content of the cited paper to avoid
non-factual hallucinations. Due to the length of scientific documents, existing
abstractive approaches have conditioned only on cited paper abstracts. We
demonstrate empirically that the abstract is not always the most appropriate
input for citation generation and that models trained in this way learn to
hallucinate. We propose to condition instead on the cited text span (CTS) as an
alternative to the abstract. Because manual CTS annotation is extremely time-
and labor-intensive, we experiment with distant labeling of candidate CTS
sentences, achieving sufficiently strong performance to substitute for
expensive human annotations in model training, and we propose a
human-in-the-loop, keyword-based CTS retrieval approach that makes generating
citation texts grounded in the full text of cited papers both promising and
practical. | Xiangci Li, Yi-Hui Lee, Jessica Ouyang | 2023-09-12T16:28:36Z | http://arxiv.org/abs/2309.06365v2 | # Cited Text Spans for Citation Text Generation
###### Abstract
Automatic related work generation systems must ground their outputs to the content of the cited papers to avoid non-factual hallucinations, but due to the length of scientific documents, existing abstractive approaches have conditioned only on the cited paper _abstracts_. We demonstrate that the abstract is not always the most appropriate input for citation generation and that models trained in this way learn to hallucinate. We propose to condition instead on the _cited text span_ (CTS) as an alternative to the abstract. Because manual CTS annotation is extremely time- and labor-intensive, we experiment with automatic, ROUGE-based labeling of candidate CTS sentences, achieving sufficiently strong performance to substitute for expensive human annotations, and we propose a human-in-the-loop, keyword-based CTS retrieval approach that makes generating citation texts grounded in the full text of cited papers both promising and practical.
## 1 Introduction
Each academic research paper distinguishes its novel contributions from prior work by conducting a literature review; in natural language processing, we do so in a separate "Related Work" section. The literature review is important to situate the current work in the context of a larger body of research, but writing it can be challenging. An automatic related work generation system aims to assist researchers to draft related work sections faster, more accurately, and with broader coverage. Due to the rigorous nature of academic research, even large language models (LLMs), such as the GPT family Brown et al. (2020); OpenAI (2023), still have to ground their outputs to the input cited papers to avoid non-factual hallucinations.
Given a set of papers to cite, prior extractive related work generation approaches Hoang and Kan (2010); Hu and Wan (2014); Chen and Zhuge (2019); Wang et al. (2019); Deng et al. (2021) selected salient sentences from the full text of the cited papers. However, existing abstractive approaches AbuRa'ed et al. (2020); Xing et al. (2020); Ge et al. (2021); Luu et al. (2021); Chen et al. (2021); Li et al. (2022) generate citation texts conditioned on only the cited paper _abstracts_, since the full text of the paper often exceeds the length limit of state-of-the-art Transformer-based models Li and Ouyang (2022). Li et al. (2022) show that, when human judges are asked to rate the _Relevance_ of human-written gold citations with respect to the citation context and the cited paper abstract, the scores are significantly lower than those of system-generated citations. We hypothesize that this discrepancy is caused by the gold citations referring to content from the cited paper body sections, not just the abstract, and these body sections are not shown to either the generation model or the human judges.
Intuitively, the _cited text span_ (CTS), the span of the cited paper that the gold citation refers to, should be a better alternative to the abstract for grounding a citation. Unfortunately, CTS information is difficult to obtain. Existing manually annotated CTS datasets Jaidka et al. (2018, 2019); AbuRa'ed et al. (2020) suffer from two key issues: annotation is extremely labor-intensive, even for domain experts, so the datasets are small; and inter-annotator agreement is relatively low (Cohen's \(\kappa\) of 0.16-0.52). Further, automatic CTS retrieval systems trained on these datasets give low performance (F1 < 0.2) and also use the gold citation text as input, making them inapplicable in a citation generation setting. Given these obstacles, CTS-based citation text generation is under-explored.
In this work, we study scalable CTS retrieval for citation text generation without expensive manual annotations. Our main contributions are as follows:
* We argue that CTS retrieval is an important and under-explored subtask of citation text generation. We demonstrate that citation texts
generated using CTS are significantly more accurate and faithful than with the existing approach of using the cited paper abstract.
* Because existing human-annotated CTS datasets are too small to train an effective CTS retrieval system, we show that ROUGE-based CTS labeling achieves sufficiently good performance to substitute for expensive human annotations. Using this method, we create a large-scale, distantly-labeled CTS dataset.
* We argue that existing CTS retrieval systems are inappropriate for the citation generation task because they use the gold citation text as the retrieval query. Instead, we propose a simple, human-in-the-loop, keyword-based CTS retrieval method that makes generating citations grounded in the full text of the cited paper full text both promising and practical1. Footnote 1: Code, model, and data will be released upon acceptance.
## 2 Related Work
### Cited Text Spans
There are two main CTS datasets: Jaidka et al. (2018, 2019) proposed the CL-SciSumm shared task and manually annotated CTS, citation facets, and summaries for 364 citation texts that cite a set of 20 papers; AbuRa'ed et al. (2020) manually annotated sentence-level CTS for 20 related work sections. Unfortunately, both suffer from low inter-annotator agreement, and automatic CTS retrieval systems trained and evaluated on these datasets perform poorly (Jaidka et al., 2018, 2019; AbuRa'ed et al., 2020; Chandrasekaran et al., 2020).
Only a few studies have used CTS for scientific document summarization. ScisummNet (Yasunaga et al., 2019) leveraged a ROUGE-based CTS retrieval approach (Cao et al., 2015) to collect community-based evidence for summarizing a paper.
Once the supporting documents are retrieved, the RAG (Lewis et al., 2020) paradigm can be applied using any sequence-to-sequence model. The conditional generation input is paired and encoded with each retrieved document in parallel, and the generation output is jointly decoded using all input/retrieved document pairs: RAG-sequence decodes the entire output sequence based on a fixed distribution over the retrieved documents, while RAG-token decodes each token using a different distribution over the retrieved documents.
Finally, Shuster et al. (2021) propose FiD Izacard and Grave (2021) as the decoder in the RAG framework. FiD pairs and encodes the conditional generation input with each retrieved document in parallel and leverages the attention mechanism to jointly decode from the encodings.
## 3 A Large-Scale CTS Dataset: Automatic vs. Human Labeling
When authors cite a paper, they locate salient passages and compose a citation text grounded in these cited text spans (CTS). Unfortunately, such fine-grained CTS information is not recorded anywhere; even an author may not be able to precisely retrieve the CTS that they used. While prior CTS retrieval systems have trained on small, human-annotated CTS datasets Jaidka et al. (2018, 2019); AbuRa'ed et al. (2020); Chandrasekaran et al. (2020), we argue that the relatively low inter-annotator agreement of these datasets suggests there can be multiple reasonable CTS for a given citation, just as there can be multiple valid extractive summaries of a document (see Section 3.3). Thus, automatically-labeled CTS is not necessarily of lower quality than human-annotated CTS, as long as the automatic labels identify one of these multiple reasonable candidates. In this section, we use ROUGE-based retrieval, with the gold citation text as query, to automatically label plausible CTS candidates to create a large-scale CTS dataset without expensive and time-consuming human annotations. We demonstrate that our automatically-labeled CTS are of comparable quality to existing human-annotated datasets.
### Approach
With the observation that high keyword overlap with the citation text is a crucial characteristic of CTS, we follow Cao et al. (2015); Yasunaga et al. (2019) in using gold citation texts as queries to rank the top-\(k\) sentences in the corresponding cited papers as CTS candidates. Similarity is measured by the average of the ROUGE-1 and -2 recall scores Lin (2004). We use NLTK2 to remove stop words for both the query and cited paper sentences.
Footnote 2: [https://www.nltk.org/index.html](https://www.nltk.org/index.html)
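For concreteness, the following is a minimal, self-contained Python sketch of this labeling procedure; it uses a toy n-gram recall in place of a full ROUGE package and assumes the NLTK 'punkt' and 'stopwords' data are available, so details may differ from the authors' exact implementation.

```python
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

STOP = set(stopwords.words("english"))

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_recall(reference, candidate, n):
    """Toy ROUGE-n recall: fraction of reference n-grams found in the candidate."""
    ref = ngrams(reference, n)
    if not ref:
        return 0.0
    cand = set(ngrams(candidate, n))
    return sum(g in cand for g in ref) / len(ref)

def clean(text):
    return [w for w in word_tokenize(text.lower()) if w not in STOP]

def label_cts(citation, cited_sentences, k=10):
    """Rank cited-paper sentences by avg. ROUGE-1/2 recall against the citation."""
    query = clean(citation)
    scored = []
    for idx, sent in enumerate(cited_sentences):
        cand = clean(sent)
        score = 0.5 * (rouge_recall(query, cand, 1) + rouge_recall(query, cand, 2))
        scored.append((score, idx))
    return sorted(scored, reverse=True)[:k]  # top-k candidate CTS sentences
```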
We use the human-labeled datasets of Jaidka et al. (2018, 2019) (CL-SciSumm) and AbuRa'ed et al. (2020) as test sets to evaluate our ROUGE-labeled CTS3. We take the union of all annotators' selected sentences as the human-labeled CTS.
Footnote 3: We exclude 45 of AbuRa’ed et al. (2020)’s 239 cited papers due to an XML format error, but we do not expect this omission to affect the overall trend of our evaluation.
### Evaluation Metrics
Token overlap. We use ROUGE-recall to measure the coverage of the automatically-labeled tokens that support the gold citation text.
Figure 3: Distribution of the lengths of the cited papers by the number of sentences in CL-SciSumm and AbuRa’ed.
Figure 2: Performance of ROUGE-based CTS, measured by recall against human-labeled CTS. Solid lines (“Any”): each citation counts as one data point; a true positive is when at least one human-labeled sentence is selected by ROUGE. Dotted lines (“Individual”): each human-labeled sentence is a separate data point. Green lines (“A”): AbuRa’ed. Yellow lines (“S”): CL-SciSumm.
Faithfulness. We use QuestEval Scialom et al. (2021) and ANLI Nie et al. (2020) to compare the faithfulness of the gold citation text when grounded in human-labeled versus automatically-labeled CTS. For ANLI, we report the non-contradiction score (sum of _entailment_ and _neutral_) given by a fine-tuned RoBERTa-large model Liu et al. (2019).
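As a rough sketch of how the ANLI-based score can be computed, the snippet below uses the publicly available `roberta-large-mnli` checkpoint as a stand-in; the paper's exact fine-tuned ANLI model is not specified here, so this is an assumption for illustration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAME = "roberta-large-mnli"  # stand-in NLI checkpoint (assumption)
tok = AutoTokenizer.from_pretrained(NAME)
nli = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

def non_contradiction(premise: str, hypothesis: str) -> float:
    """Sum of neutral + entailment probabilities, i.e. 1 - P(contradiction)."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli(**inputs).logits, dim=-1)[0]
    # Label order for this checkpoint: contradiction, neutral, entailment.
    return float(probs[1] + probs[2])
```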
### Findings
ROUGE labeling has high coverage of human annotations. As Figure 2 shows, our top-40 ROUGE-labeled CTS cover 80% and 95% of the human-labeled CTS in CL-SciSumm and AbuRa'ed et al. (2020), respectively. Further, as Figure 3 shows, the ROUGE-based approach concentrates the human-annotated CTS into roughly the top 20% of labeled sentences.
ROUGE labeling is more similar to the gold citation text. As Figure 4 and Table 1 show, although both human-labeled and ROUGE-labeled CTS are comparably faithful, ROUGE-based CTS has higher token overlap with the target citation text. This is unsurprising, given that ROUGE ranking explicitly prioritizes token overlap, while human annotators may not. Using CTS in the intersection, rather than the union, of different annotators' labels (_strict_) has a noticeable impact on token overlap, indicating that _individual annotators can select different, but all reasonable, CTS sentences_.
ROUGE labeling performs better on the downstream generation task. We also experiment with citation text generation using both ROUGE-labeled and human-labeled CTS, using the _LED-oracle_ model described in Section 4. As Table 2 shows, ROUGE-labeled CTS performs comparably to human-labeled CTS for the AbuRa'ed dataset and outperforms human labels for CL-SciSumm, confirming that ROUGE-based CTS labeling is a strong distant-labeling approach that produces competitive performance on the downstream generation task.
## 4 Citation Text Generation Using CTS
In this section, we experiment with citation text generation conditioned on CTS, rather than the abstract-only approach used in prior work, and demonstrate that conditioning on CTS produces more accurate and faithful generations.
### Problem Formulation
Following Li et al. (2022), who note the information loss and leak issues of the citation _sentence_ generation task, we aim to generate citation _spans_ conditioned on (1) the citing paper context and (2) the full texts of the cited papers. We assume an auto-regressive generation setting, where citations are generated one at a time, in order. Thus, we have access to up to two sentences before the target citation to use as the context. We only consider the "dominant"-type citations that are grounded to specific CTS, following Jaidka et al. (2018, 2019); AbuRa'ed et al. (2020), who filter out very short ("reference"-type) citations.
Data. We use the CORWA citation span generation dataset Li et al. (2022), which is derived
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{_ROUGE-Labeled_} & \multicolumn{3}{c}{_Human-Labeled_} \\ \cline{2-7}
**Config** & **R-L** & **QEval** & **ANLI** & **R-L** & **QEval** & **ANLI** \\ \hline AbuRa’ed & **0.376** & 0.254 & 0.959 & 0.328 & 0.263 & **1.000** \\ A-strict & - & - & - & 0.224 & **0.269** & 0.992 \\ \hline SciSumm & **0.429** & 0.259 & **0.973** & 0.225 & 0.258 & 0.922 \\ S-strict & - & - & - & 0.187 & **0.275** & 0.923 \\ \hline \hline \end{tabular}
\end{table}
Table 1: ROUGE-based and human CTS labeling performance measured by ROUGE-L-recall, QuestEval, and ANLI against the gold citation text. “Strict” uses the intersection, rather than union, of human-labeled sentences by different annotators. \(k=10\) for ROUGE-based labeling.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{_ROUGE-Labeled_} & \multicolumn{3}{c}{_Human-Labeled_} \\ \cline{2-7}
**Config** & **R-1** & **R-2** & **R-L** & **R-1** & **R-2** & **R-L** \\ \hline AbuRa’ed & **0.295** & 0.123 & 0.220 & 0.273 & 0.105 & 0.201 \\ A-strict & - & - & - & 0.280 & **0.129** & **0.226** \\ \hline SciSumm & **0.365** & **0.176** & **0.265** & 0.316 & 0.149 & 0.242 \\ S-strict & - & - & - & 0.323 & 0.164 & 0.250 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Citation text generation performance measured by ROUGE-F1 against the gold citation. “Strict”: intersection, rather than union, of human-labeled sentences.
Figure 4: Overlap of the top-\(k\) ROUGE-labeled CTS (solid lines) & human-labeled CTS (dotted lines), measured by ROUGE-L recall against the corresponding citation text. Green lines (“A”): AbuRa’ed. Yellow lines (“S”): CL-SciSumm. The average length of human-labeled CTS is \(|\overline{A}|\)=16.8 and \(|\overline{S}|\)=2.5 sentences.
from the ACL partition of S2ORC (Lo et al., 2020). CORWA consists of a human-annotated _training_ set and _test_ set, as well as an automatically-labeled _distant_ training set, with 1654, 1206, and 19784 "dominant"-type citation spans4, respectively.
Footnote 4: To avoid confusion between cited text _spans_ and citation _spans_, we will refer to the generation targets as _citation texts_.
### Approach
#### 4.2.1 Cts Retrieval Strategies
ROUGE-based oracle retrieval. The ROUGE-based retrieval method described in Section 3 uses the gold citation span as query; while this approach cannot be used at test time, when the target citation is not known, we use it as an _Oracle_ baseline.
Human-in-the-loop keyword retrieval. We also experiment with a practical, human-in-the-loop scenario where a researcher already has a general idea of what they want to write about a particular cited paper, so they input some keywords as a query to guide CTS retrieval. To simulate this scenario, we apply a keyword extractor (Section 4.2.3) to the target citation spans and use the extracted keywords to retrieve CTS using ROUGE ranking (_Keyword_).
Context-based retrieval. We retrieve CTS given only the target citation's context sentences (_Context_), which is both significantly more challenging and more realistic than using the target citation text as the query. We apply the ROUGE labeling approach described in Section 3 on the Li et al. (2022) training set to create a distantly-labeled, large-scale CTS dataset and use this data to train a DPR model (Karpukhin et al., 2020). We initialize both the query and candidate document encoders with Aspire (Mysore et al., 2022), a scientific sentence encoder fine-tuned on top of SciBERT (Beltagy et al., 2019). We regard the top-40 ROUGE-labeled CTS sentences as positive documents and the rest of the sentences in the cited papers as hard-negative documents.
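A minimal bi-encoder retrieval sketch is shown below; `allenai/specter` is used as a stand-in scientific encoder (an assumption), whereas the paper initializes with Aspire and fine-tunes the encoders as a DPR model on the distant labels.

```python
import torch
from transformers import AutoModel, AutoTokenizer

NAME = "allenai/specter"  # stand-in scientific encoder (assumption)
tok = AutoTokenizer.from_pretrained(NAME)
enc = AutoModel.from_pretrained(NAME).eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = enc(**batch).last_hidden_state[:, 0]  # CLS pooling
    return torch.nn.functional.normalize(cls, dim=-1)

def retrieve_cts(context, cited_sentences, k=10):
    """Score cited-paper sentences against the citation context alone."""
    q = embed([context])                  # query: the citation context sentences
    d = embed(cited_sentences)            # candidates: cited-paper sentences
    scores = (q @ d.T).squeeze(0)         # dot-product relevance
    top = torch.topk(scores, min(k, len(cited_sentences)))
    return [(cited_sentences[i], float(s)) for s, i in zip(top.values, top.indices)]
```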
Table 3 shows that retrieved CTS supports the target citation text more faithfully than the abstract does, emphasizing the importance of conditioning on CTS for this generation task, rather than only conditioning on the cited paper abstract.
#### 4.2.2 Citation Text Generation Strategies
Retrieval-Augmented Generation. The RAG model (Lewis et al., 2020) bridges document retrieval and text generation by fine-tuning the query encoder and generator jointly. RAG uses DPR to retrieve \(k\) documents using the input query. We concatenate the target citation's context sentences and the retrieved CTS sentences5 as the input to the citation span generator. We use FiD (Izacard and Grave, 2021) with BART-base (Lewis et al., 2020) as the generator in our experiments.
Footnote 5: To balance performance and GPU memory usage, we retrieve the top \(k=10\) CTS in all experiments.
Longformer-Encoder-Decoder. We experiment with a simple Longformer-Encoder-Decoder (LED; Beltagy et al., 2020) model. We input the concatenation of the target citation context, the citation mark, and the retrieved CTS for each cited paper.
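To make the two input formats concrete, the toy sketch below assembles them in plain Python; the separator tokens and citation-mark convention are assumptions, as CORWA's exact format is not reproduced here.

```python
def fid_inputs(context, retrieved_cts):
    """FiD: the context is paired and encoded with each retrieved CTS in parallel."""
    return [f"{context} </s> {cts}" for cts in retrieved_cts]

def led_input(context, citation_mark, cts_by_paper):
    """LED: one long concatenated sequence over all cited papers."""
    parts = [context, citation_mark]
    for paper_id, sentences in cts_by_paper.items():
        parts.append(f"[{paper_id}] " + " ".join(sentences))
    return " ".join(parts)

# Example usage with dummy strings:
print(fid_inputs("Prior work used X.", ["We propose X.", "X outperforms Y."]))
print(led_input("Prior work used X.", "[CITATION]", {"paper1": ["We propose X."]}))
```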
#### 4.2.3 Implementation Details
Opting out of FAISS. The original RAG implementation was designed for Wikipedia-based dialogue generation, where every Wikipedia article is a candidate for retrieval. Therefore, FAISS (Johnson et al., 2019) was used to speed up the search process. However, our search space is limited to cited paper sentences, and we need to retrieve CTS separately for each cited paper. Unfortunately, FAISS does not support partitioned search, so we replace FAISS with the simpler CTS retrieval strategies described in Section 4.2.1.
Freezing the query encoder weights. The RAG model retrieves candidate documents on-the-fly while jointly fine-tuning the pretrained DPR query encoder and the generator. In our preliminary experiments using paragraph-level CTS, we did not find significant improvement from fine-tuning the query encoder over simply freezing the query encoder weights. Further, due to the much larger candidate space of the sentence-level CTS setting, the on-the-fly retrieval needed to update the query encoder becomes intractable. Therefore, in our experiments, we retrieve CTS sentences for all target citation/cited paper pairs in advance and do not fine-tune the query encoder.
\begin{table}
\begin{tabular}{l|l l l} \hline \hline
**Setting** & **ROUGE-L** & **QuestEval** & **ANLI** \\ \hline Abstract & 0.290 & 0.263 & 0.868 \\ \hline Oracle CTS & **0.533** & **0.271** & 0.955 \\ Keyword CTS & 0.440 & **0.271** & 0.949 \\ Context CTS & 0.400 & 0.265 & **0.964** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quality of citation generation input in terms of token coverage (ROUGE-recall) and faithfulness (QuestEval, ANLI) with respect to the target citation.
Keyword extraction. We manually annotate a small set (673 instances) of citation keywords on top of the CORWA dataset Li et al. (2022). Each instance consists of a short, "reference"-type citation text with manually highlighted keywords. We pre-train a T5-small model Raffel et al. (2020) on the named-entity recognition labels of the SciREX training set Jain et al. (2020) and further fine-tune the model on our keyword dataset as a sequence-to-sequence keyword extractor6. We use this extractor to simulate the human-in-the-loop setting, where a user provides some keywords to guide CTS retrieval and citation text generation (Section 4.2.1).
Footnote 6: Alternatively, recent LLMs can also be used to automatically predict or extract keywords.
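A minimal sketch of the extractor's inference interface, using the base `t5-small` checkpoint as a placeholder (in practice the fine-tuned weights described above would be loaded; the prompt prefix is also an assumption):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

NAME = "t5-small"  # placeholder; load the fine-tuned keyword-extractor weights here
tok = AutoTokenizer.from_pretrained(NAME)
model = T5ForConditionalGeneration.from_pretrained(NAME).eval()

def extract_keywords(citation_text: str) -> str:
    """Seq2seq keyword extraction: citation text in, keyword string out."""
    inputs = tok("extract keywords: " + citation_text,  # assumed prompt format
                 return_tensors="pt", truncation=True)
    ids = model.generate(**inputs, max_new_tokens=32)
    return tok.decode(ids[0], skip_special_tokens=True)
```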
Training. We train on the large, automatically-labeled CORWA _distant_ set using cross-entropy loss and perform minimal hyper-parameter tuning on the smaller, human-labeled _training_ set; Appendix Table 9 shows our hyper-parameter settings. For all experiments, we report performance on the CORWA _test_ set. Each model takes about one day to train on one Nvidia Tesla V100S-32GB GPU.
### Human Evaluation
Following the setup used in prior works Xing et al. (2020); Ge et al. (2021); Li et al. (2022), we recruit six NLP graduate students who are fluent in English as human evaluation judges. To increase their coverage of the test set, we split the judges into two groups of three and assign each group 25 instances sampled from the output of the RAG-FiD model. Each instance consists of a single target citation context and several candidate generations: one conditioned on the _Abstract_ only and one for each of our three CTS retrieval strategies: _Oracle_, _Keyword_, and _Context_. The target citation context, cited paper abstracts, and retrieved CTS from all three strategies are also provided to the judges. Each generated citation is rated on a five-point Likert scale with respect to _Fluency_, _Relevance_ (to the input; either abstract or CTS), _Coherence_ (with respect to the context), and _Overall Quality_.
We perform two rounds of evaluation: first, gold citation texts are mixed in with the candidate generations but are not identified to the judges; second, we reveal the gold citation texts and ask the judges to revise their ratings with this new information.
### Results and Discussion
CTS-based generation outperforms abstract-only generation. Table 4 shows that the _Oracle_ & _Keyword_ CTS settings massively outperform the _Abstract_ approach. Moreover, Table 5 shows that _Abstract_ receives the lowest human _Overall_ rating. In contrast with prior works in the abstract-only setting AbuRa'ed et al. (2020); Xing et al. (2020); Ge et al. (2021); Luu et al. (2021); Chen et al. (2021); Li et al. (2022), all settings now score well on _Relevance_ (higher than 4). Previously, even the gold citation text scored poorly on _Relevance_ because the judges were only shown the abstract, which did not contain sufficient information to support the citation texts; our judges are shown the retrieved CTS sentences, which are a much better match for the citation texts.
CTS-based generation extracts more short phrases. The ROUGE-recall scores in Table 6 show that, compared to _Abstract_, CTS-based generations are more similar to their inputs, suggesting that models trained on CTS are more extractive. To investigate further, we use Grusky et al. (2018)'s _coverage_ and _density_ metrics. _Coverage_ measures how much of the generated citation text is extracted from the input (abstract or CTS), while _density_ measures the size of the extractions (e.g. individual unigrams versus entire sentences). Additionally, we
\begin{table}
\begin{tabular}{l|l l|l l} \hline & \multicolumn{2}{c|}{_RAG-FiD_} & \multicolumn{2}{c}{_LED_} \\ \multirow{2}{*}{**Setting**} & **R-L** & **ANLI** & **R-L** & **ANLI** \\ \hline Abstract & 0.556 & 0.745 & 0.591 & 0.744 \\ \hline Oracle & **0.726** & 0.940 & **0.682** & 0.950 \\ Keyword & 0.630 & 0.941 & 0.604 & 0.946 \\ Context & 0.641 & **0.959** & 0.626 & **0.954** \\ \hline \end{tabular}
\end{table}
Table 6: Similarity and faithfulness of the generated citation texts with respect to their input setting, measured by ROUGE-recall and ANLI.
\begin{table}
\begin{tabular}{l|l l l|l l l} \hline & \multicolumn{3}{c|}{_RAG-FiD_} & \multicolumn{3}{c}{_LED_} \\ \multirow{2}{*}{**Setting**} & **R-1** & **R-2** & **R-L** & **R-1** & **R-2** & **R-L** \\ \hline Abstract & 0.240 & 0.072 & 0.194 & 0.254 & 0.068 & 0.189 \\ \hline Oracle & **0.357** & **0.155** & **0.292** & **0.388** & **0.179** & **0.312** \\ Keyword & 0.320 & 0.127 & 0.262 & 0.332 & 0.138 & 0.270 \\ Context & 0.244 & 0.069 & 0.197 & 0.245 & 0.067 & 0.194 \\ \hline \end{tabular}
\end{table}
Table 4: Citation text generation performance measured by ROUGE-F1 against gold citation texts. Citation marks are excluded for calculating ROUGE.
\begin{table}
\begin{tabular}{l|l l l l} \hline
**Setting** & **Fluency** & **Relevance** & **Coherence** & **Overall** \\ \hline Gold & 4.71 (4.74) & 4.40 (4.18) & **4.23 (4.41)** & 3.87 **(4.16)** \\ \hline Abstract & 4.71 (4.72) & **4.29 (4.26)** & 4.06 (3.96) & 3.77 (3.63) \\ \hline Oracle & 4.80 (4.80) & 4.18 (4.14) & 4.07 (4.09) & 3.86 (3.80) \\ Keyword & **4.84 (4.84)** & 4.21 (4.16) & 4.07 (4.07) & 3.84 (3.76) \\ Context & 4.80 (4.80) & 4.41 (4.31) & 4.00 (3.96) & **3.95** (3.85) \\ \hline \end{tabular}
\end{table}
Table 5: Human evaluation with moderate inter-annotator agreement (avg. Kendall’s \(\tau=0.239\)). Scores in parentheses are from the second round, after the gold citation texts are revealed to the judges.
Figure 5: Extractiveness of the generated citation texts, measured by _coverage_ and _density_ Grusky et al. (2018) against the generation input (abstract or CTS). The average coverage (C), density (D), and compression ratio (R) are shown for each generation setting. The “Gold” setting (Sub-figure 5i) is measured using Oracle CTS as the “input”.
calculate the compression ratio between the lengths of the inputs and the generated citation texts.
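A simplified re-implementation of these extractiveness metrics is sketched below; the greedy fragment matching is an approximation of Grusky et al. (2018)'s procedure, not their reference code.

```python
def extractive_fragments(input_tokens, output_tokens):
    """Greedily match maximal shared token runs between input and output."""
    frags, i = [], 0
    while i < len(output_tokens):
        best = 0
        for j in range(len(input_tokens)):
            k = 0
            while (i + k < len(output_tokens) and j + k < len(input_tokens)
                   and output_tokens[i + k] == input_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            frags.append(best)
            i += best
        else:
            i += 1
    return frags

def extractiveness(input_text, output_text):
    inp, out = input_text.split(), output_text.split()
    frags = extractive_fragments(inp, out)
    coverage = sum(frags) / len(out)                # fraction of copied tokens
    density = sum(f * f for f in frags) / len(out)  # mean squared fragment length
    compression = len(inp) / len(out)
    return coverage, density, compression
```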
As Figure 5 shows, although the _Abstract_ setting has the lowest compression ratio, it has a significantly lower _coverage_ score than the CTS settings, confirming that the latter are more extractive. From this finding, along with the lower faithfulness scores in Table 6 (see Footnote 7), we can infer that _Abstract_ produces more non-factual hallucinations that distort the ideas of the cited paper.
Footnote 7: We exclude QuestEval from this table due to an issue arising from the relative compression ratios of the different input settings. See Appendix C for details.
Moreover, _Abstract_ has a significantly higher _density_ score, indicating longer copied sequences from the input, while the CTS-based settings extract and reorganize shorter phrases from the input. Since copying long text spans from other documents results in plagiarism, CTS-based generation is again preferable. Nonetheless, we stress that CTS-based generation is intended for increasing accuracy and faithfulness and does not explicitly address the plagiarism issue; additional post-processing, such as a rewriting module, should be used to further reduce the risk of plagiarism.
Predicting CTS fully automatically is challenging. Although the fully automatic _Context_ setting has strong input retrieval performance (Table 3), its performance on the downstream citation text generation task is mixed. It underperforms on automated metrics (Table 4), but human judges rate it slightly higher than _Oracle_ and _Keyword_ (Table 5). Table 7 shows that _Oracle_ and _Keyword_ tend to retrieve CTS from more specific sections, such as "model or architecture," suggesting that the human judges may have felt that those generated citation texts contained too much unnecessary detail.
CTS retrieval given only the target citation context sentences is challenging because the citation context and the CTS candidates come from different papers (citing versus cited), and the semantic mismatch between them lowers the performance of the DPR module. CTS selection and citation text generation are both open problems, and it is challenging to precisely predict the CTS support and content of a citation text unless explicit guidance, such as keywords, is provided. Interestingly, Table 5 shows that human evaluators cannot distinguish between gold and generated citation texts during the blind evaluation round. When the gold citation text is identified to the judges, they revise its scores higher, while lowering the scores of the generated candidates. This finding suggests that even humans need some guiding information to help them process citation texts. Thus, as Li et al. (2022) argue, the citation text or related work generation task should be human-in-the-loop, leveraging a researcher's expertise to reduce both the search space size and cascading error propagation in the pipeline, and our _Keyword_-based CTS retrieval approach provides a practical workaround for the challenge of predicting CTS fully automatically.
## 5 Conclusion
In this work, we show that CTS-based citation text generation is more accurate and faithful than the abstract-based approaches used in prior work. We address the challenges that previously prevented CTS from being used effectively for citation text generation. ROUGE-based retrieval achieves sufficiently strong performance to substitute for expensive manual CTS annotations, allowing us to build a dataset large enough to train a supervised CTS retriever. To avoid the pitfall of using the gold citation span as the retrieval query, which has been used in prior work but would not be possible in a real application, we propose a human-in-the-loop, keyword-based CTS retrieval approach that achieves strong performance in a practical and realistic setting. Finally, our experimental results show that, while retrieving CTS using only the citation context sentences is challenging and does not achieve the best performance using automated metrics, it is preferred by human judges and is a good direction for further investigation.
\begin{table}
\begin{tabular}{l|c c c} \hline
**Section** & **Oracle** & **Keyword** & **Context** \\ \hline introduction & 2125 (1) & 1681 (1) & 4140 (1) \\ related work & 1217 (2) & 634 (5) & 401 (4) \\ model or architecture & 903 (3) & 1036 (2) & 383 (5) \\ abstract & 712 (4) & 816 (3) & 3667 (2) \\ experimental results & 710 (5) & 703 (4) & 92 (8) \\ conclusion & 508 (6) & 589 (6) & 316 (6) \\ approach & 312 (7) & 312 (8) & 141 (7) \\ data & 307 (8) & 370 (7) & 75 (9) \\ evaluation & 210 (9) & 215 (10) & 38 (12) \\ discussion & 88 (10) & 75 (11) & 15 (16) \\ title & 73 (11) & 262 (9) & 921 (3) \\ background & 67 (12) & 74 (12) & 54 (10) \\ \hline \end{tabular}
\end{table}
Table 7: Counts (and ranks) of the most common section headers of retrieved CTS for the CORWA test set. Approximately 12,000 CTS are retrieved in total.
### Limitations
As discussed in Section 2.1, existing CTS retrieval approaches all use the gold citation text as the query. Our results also show that, while CTS retrieved using the gold citation text (or its keywords) yields strong generation performance, CTS retrieved with only the citation context sentences is not satisfactory. Therefore, we argue that citation text generation should be divided into two parts: support document retrieval and document-based citation text generation. Based on our results, we can achieve strong generation performance, but the automated retrieval part requires further investigation.
As a workaround solution without a strong, context-based CTS retriever, we show that, when the user provides a few keywords to guide CTS retrieval, our models can perform quite well. This setting is practical under the human-in-the-loop configuration suggested by Li et al. (2022). Future work may predict the keywords of the gold citation text by performing paragraph- or document-level topic analysis using LLMs.
### Ethics Statement
In this work, we focus on showing that retrieved CTS provides relevant support information to enable faithful citation text generation. Unsurprisingly, the trained CTS-based citation text generation models turn out to be more extractive (Section 4.4), directly copying parts of the input CTS to the output citation texts. This copying behavior has the potential to result in plagiarism if the output of our CTS-based citation text generator is used directly without further modification. As we mention in Section 4.4, it may be necessary to apply a post-processing step, such as a rewriting module, to avoid plagiarism, which is beyond the scope of this work and left for future investigation. |
2309.16176 | Matrix Multiplication Verification Using Coding Theory | We study the Matrix Multiplication Verification Problem (MMV) where the goal
is, given three $n \times n$ matrices $A$, $B$, and $C$ as input, to decide
whether $AB = C$. A classic randomized algorithm by Freivalds (MFCS, 1979)
solves MMV in $\widetilde{O}(n^2)$ time, and a longstanding challenge is to
(partially) derandomize it while still running in faster than matrix
multiplication time (i.e., in $o(n^{\omega})$ time).
To that end, we give two algorithms for MMV in the case where $AB - C$ is
sparse. Specifically, when $AB - C$ has at most $O(n^{\delta})$ non-zero
entries for a constant $0 \leq \delta < 2$, we give (1) a deterministic
$O(n^{\omega - \varepsilon})$-time algorithm for constant $\varepsilon =
\varepsilon(\delta) > 0$, and (2) a randomized $\widetilde{O}(n^2)$-time
algorithm using $\delta/2 \cdot \log_2 n + O(1)$ random bits. The former
algorithm is faster than the deterministic algorithm of K\"{u}nnemann (ESA,
2018) when $\delta \geq 1.056$, and the latter algorithm uses fewer random bits
than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same
time and uses $\log_2 n + O(1)$ random bits (in turn fewer than Freivalds's
algorithm).
We additionally study the complexity of MMV. We first show that all
algorithms in a natural class of deterministic linear algebraic algorithms for
MMV (including ours) require $\Omega(n^{\omega})$ time. We also show a barrier
to proving a super-quadratic running time lower bound for matrix multiplication
(and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally,
we study relationships between natural variants and special cases of MMV (with
respect to deterministic $\widetilde{O}(n^2)$-time reductions). | Huck Bennett, Karthik Gajulapalli, Alexander Golovnev, Evelyn Warton | 2023-09-28T05:22:15Z | http://arxiv.org/abs/2309.16176v2 | # Matrix Multiplication Verification Using Coding Theory
###### Abstract
We study the _Matrix Multiplication Verification Problem_ (MMV) where the goal is, given three \(n\times n\) matrices \(A\), \(B\), and \(C\) as input, to decide whether \(AB=C\). A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in \(\widetilde{O}(n^{2})\) time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in \(o(n^{\omega})\) time).
To that end, we give two algorithms for MMV in the case where \(AB-C\) is _sparse_. Specifically, when \(AB-C\) has at most \(O(n^{\delta})\) non-zero entries for a constant \(0\leq\delta<2\), we give (1) a deterministic \(O(n^{\omega-\varepsilon})\)-time algorithm for constant \(\varepsilon=\varepsilon(\delta)>0\), and (2) a randomized \(\widetilde{O}(n^{2})\)-time algorithm using \(\delta/2\cdot\log_{2}n+O(1)\) random bits. The former algorithm is faster than the deterministic algorithm of Kunnemann (ESA, 2018) when \(\delta\geq 1.056\), and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses \(\log_{2}n+O(1)\) random bits (in turn fewer than Freivalds's algorithm).
Our algorithms are simple and use techniques from coding theory. Let \(H\) be a parity-check matrix of a Maximum Distance Separable (MDS) code, and let \(G=(I_{k}\mid G^{\prime})\) be a generator matrix of a (possibly different) MDS code in systematic form. Our deterministic algorithm uses fast rectangular matrix multiplication to check whether \(HAB=HC\), and our randomized algorithm samples a uniformly random column \(\mathbf{g}^{\prime}\) from \(G^{\prime}\) and checks whether \(AB\mathbf{g}^{\prime}=C\mathbf{g}^{\prime}\).
We additionally study the _complexity_ of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require \(\Omega(n^{\omega})\) time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic \(\widetilde{O}(n^{2})\)-time reductions).
###### Contents
* 1 Introduction
* 1.1 Our Results
* 1.1.1 Algorithms
* 1.1.2 Barriers
* 1.1.3 Reductions
* 1.2 Related Work
* 1.3 Open Questions
* 1.4 Acknowledgments
* 2 Preliminaries
* 2.1 Matrix Multiplication
* 2.2 Matrix Multiplication Verification
* 2.3 Coding Theory
* 2.4 Regular and Super Regular Matrices
* 2.5 Fine-Grained Complexity
* 2.6 Matrix Rigidity and Circuit Lower Bounds
* 3 Fast Matrix Multiplication Verification for Sparse Matrices
* 3.1 Deterministic MMV for Sparse Matrices
* 3.2 Randomized MMV for Sparse Matrices
* 3.3 A Barrier for Linear Algebraic Approaches to MMV
* 4 On the non-SETH-hardness of Matrix Multiplication
* 4.1 A Nondeterministic Algorithm for Multiplying Non-Rigid Matrices
* 4.2 The Main SETH-Hardness Barrier Result
* 4.3 Comparison with Known Rigidity Results
* 5 Reductions Between Variants of MMV
* 5.1 Problems Equivalent to MMV
* 5.1.1 Inverse Verification
* 5.1.2 Symmetric MMV
* 5.2 Problems no Harder than MMV
* 5.2.1 Monochromatic All Pairs Orthogonal Vectors
* 5.2.2 Strong Symmetric MMV
* 5.3 Problems at Least as Hard as MMV but no Harder than MM
* 5.3.1 Generalized AllZeroes Equivalence
* 5.3.2 Relation to Orthogonal Vectors and Matrix-Product-Sparsity
Introduction
The goal of the _Matrix Multiplication Problem_ (MM) is to compute the product \(AB\) of two \(n\times n\) matrices \(A\) and \(B\) given as input. Matrix multiplication has many practical and theoretical applications, and because of this has been studied by an extensive line of work. The primary goal of this line of work has been to determine the running time \(O(n^{\omega})\) of the fastest algorithms for MM, which is captured by the matrix multiplication exponent \(\omega\).1 The best upper bounds on \(\omega\) and related quantities continue to improve [10, 2, 11, 12, 13, 14], and [23] recently showed the current best known bound of \(\omega\leq 2.371552\). The dream of this line of work is to show that \(\omega=2\), and this in fact holds under certain plausible combinatorial and group-theoretic conjectures (see [1, Conjecture 4.7 and Conjecture 3.4]). Nevertheless, showing that \(\omega=2\) seems very challenging for the time being.
Footnote 1: Formally, \(\omega\) is defined as the infimum over \(\omega^{\prime}\) such that the product of two \(n\times n\) matrices can be computed in \(O(n^{\omega^{\prime}})\) time. So, MM algorithms are actually only guaranteed to run in \(O(n^{\omega+\epsilon})\) time for any constant \(\varepsilon>0\).
In this work, we consider a variant of matrix multiplication where the goal is to _verify_ that the product of two matrices is equal to a third matrix. Specifically, we study the _Matrix Multiplication Verification Problem_ (MMV) where, given three \(n\times n\) matrices \(A\), \(B\), and \(C\) as input, the goal is to decide whether \(AB=C\). MMV is clearly no harder than matrix multiplication--it can be solved in \(O(n^{\omega})\) time by computing the product \(AB\) and then comparing the product entry-wise against \(C\)--but it is natural to ask whether it is possible to do better. In what became classic work, Freivalds [16] answered this question in the affirmative and gave a simple, randomized algorithm that solves MMV in \(\widetilde{O}(n^{2})\) time. This \(\widetilde{O}(n^{2})\) running time bound is essentially the best possible, and so, unlike matrix multiplication, the complexity of MMV is relatively well understood.
However, it is in turn natural to ask whether it is possible to _derandomize_ Freivalds's algorithm partially or completely. More specifically, it is natural to ask whether it is possible to give a _deterministic_ algorithm for MMV running in \(\widetilde{O}(n^{2})\) time or at least \(O(n^{\omega-\varepsilon})\) time for constant \(\varepsilon>0\).2 Or, if it is not possible to give a deterministic algorithm for MMV with these running times, it is natural to ask whether it is possible to use fewer random bits than Freivalds's algorithm, which uses \(n\) random bits. Trying to answer these questions has become a key goal for derandomization efforts, and has received substantial study [1, 1, 13, 15].
Footnote 2: We use the notation \(\widetilde{O}(f(n))\) to mean \(f(n)\cdot\mathrm{poly}(\log f(n))\). Freivalds’s algorithm uses \(O(n^{2})\) arithmetic operations, each of which takes \(\mathrm{poly}(\log n)\) time when working over integer matrices with entries bounded in magnitude by \(\mathrm{poly}(n)\); we assume this setting in the introduction.
### Our Results
Our main results are two new algorithms for the Matrix Multiplication Verification Problem in the _sparse_ case, i.e., in the case where \(AB-C\) is promised to have few non-zero entries (if any). See Table 1 for a summary of our algorithms and how they compare to other known algorithms for MMV. Additionally, we give a barrier for giving a fast algorithm for MMV using a broad class of linear algebraic techniques, a barrier to showing hardness of MMV, and reductions between variants of MMV.
#### 1.1.1 Algorithms
We define \(\|\mathbf{v}\|_{0}\) (respectively, \(\|M\|_{0}\)) to be the number of non-zero entries in (i.e., Hamming weight of) a vector \(\mathbf{v}\) (respectively, matrix \(M\)). We call a vector \(\mathbf{v}\) (respectively, matrix \(M\)) \(t\)_-sparse_ if \(\|\mathbf{v}\|_{0}\leq t\) (respectively, if \(\|M\|_{0}\leq t\)).
Our first algorithm is deterministic, and uses fast rectangular matrix multiplication. For \(\alpha,\beta,\gamma\in[0,1]\), let the rectangular matrix multiplication exponent \(\omega(\alpha,\beta,\gamma)\) be the infimum over values \(\omega^{\prime}>0\) such that the product of a \(n^{\alpha}\times n^{\beta}\) matrix and a \(n^{\beta}\times n^{\gamma}\) matrix can be computed using \(O(n^{\omega^{\prime}})\) arithmetic operations. Note that \(\omega=\omega(1,1,1)\) is the standard (square) matrix multiplication exponent.
**Theorem 1.1** (Fast deterministic MMV for sparse matrices, informal).: _Let \(A,B,C\in\mathbb{Z}^{n\times n}\) be matrices satisfying \(\max_{i,j}\{\left|A_{i,j}\right|,\left|B_{i,j}\right|,\left|C_{i,j}\right|\} \leq n^{c}\) for some constant \(c>0\) and satisfying \(\|AB-C\|_{0}\leq n^{\delta}\) for \(0\leq\delta\leq 2\). Then for any constant \(\varepsilon>0\), there is a deterministic algorithm for MMV on input \(A,B,C\) that runs in \(O(n^{\omega(1,1,\delta/2)+\varepsilon})\) time._
We note that \(\omega(1,1,\beta)<\omega\) for all \(\beta<1\) (assuming \(\omega>2\); see Theorem 2.3), and so our algorithm is faster than matrix multiplication when \(AB-C\) is promised to be \(O(n^{\delta})\)-sparse for constant \(\delta<2\). Furthermore, it is faster than Kunnemann's algorithm [11], which is also for MMV in the regime where \(AB-C\) is sparse, when \(\omega(1,1,\delta/2)<1+\delta\). The equation \(\omega(1,1,\delta/2)=1+\delta\) turns out to be relevant in other contexts too [10], and both [11, 12] provide bounds on its (unique) solution. Specifically, [12] shows that the solution \(\delta\) to this equation satisfies \(\delta\leq 1.056\), and so the algorithm in Theorem 1.1 is (strictly) faster than any previously known deterministic algorithm for MMV when \(1.056\leq\delta<2\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Algorithm** & **Asymptotic Runtime** & **Bits of Randomness** \\ \hline
Matrix Multiplication & \(n^{\omega+\varepsilon}\) & \(0\) \\ \hline
Random Entry Sampling (folklore) & \(n^{3-\delta}\) & \(2n^{2-\delta}\cdot\log_{2}(n)+O(1)\) \\ \hline
Freivalds’s Algorithm [10] & \(n^{2}\) & \(n\) \\ \hline
Vandermonde Mat. Sampling [11] & \(n^{2}\) & \(\log_{2}(n)+O(1)\) \\ \hline
Multipoint Poly. Evaluation [11] & \(n^{2}+n^{1+\delta}\) & \(0\) \\ \hline
Cauchy Bound [11] & \(n^{3}\) (\(n^{2}\) in Integer RAM) & \(0\) \\ \hline
**Parity Check/Fast RMM (Thm. 1.1)** & \(n^{\omega(1,1,\delta/2)+\varepsilon}\) & \(0\) \\ \hline
**Cauchy Mat. Sampling (Thm. 1.2)** & \(n^{2}\) & \(\frac{\delta}{2}\cdot\log_{2}(n)\) \\ \hline \end{tabular}
\end{table}
Table 1: Algorithms for MMV on matrices \(A,B,C\in\mathbb{Z}^{n\times n}\) with entries of magnitude at most \(\mathrm{poly}(n)\) and such that \(AB-C\) has at most \(n^{\delta}\) non-zero entries for \(0\leq\delta\leq 2\). Our new algorithms are shown in bold. We list asymptotic running times, with \(\mathrm{poly}(\log n)\) factors suppressed for readability, and the number of random bits used to achieve success probability \(1/2\). (Each of the three listed randomized algorithms has one-sided error, so this probability is meaningful.) Here \(\omega(\cdot,\cdot,\cdot)\) is the rectangular matrix multiplication exponent, \(\omega=\omega(1,1,1)\) is the (square) matrix multiplication exponent, and \(\varepsilon>0\) is an arbitrarily small positive constant.
Additional bounds on \(\omega(1,\delta/2,1)=\omega(1,1,\delta/2)\)--and hence the running time of the algorithm in Theorem 1.1--appear in [23, Table 1]. For example, that table shows that \(\omega(1,1,0.55)<2.067\) and \(\omega(1,1,0.95)<2.333\) (which correspond to \(\delta=1.1\) and \(\delta=1.9\), respectively). We also note that our algorithm runs in essentially optimal \(\widetilde{O}(n^{2})\) time when \(\delta\leq 0.642\leq 2\omega^{\perp}\), where \(\omega^{\perp}:=\sup\{\omega^{\prime}>0:\omega(1,1,\omega^{\prime})=2\}\geq 0.321\) is the dual matrix multiplication exponent [23], but that Kunnemann's algorithm [22] runs in \(\widetilde{O}(n^{2})\) time for any \(\delta\leq 1\).
Our second algorithm runs in \(\widetilde{O}(n^{2})\) time, but is randomized. It uses few bits of randomness when \(AB-C\) is sparse.
**Theorem 1.2** (Fast randomized MMV for sparse matrices, informal).: _Let \(c>0\) be a constant, let \(A,B,C\in\mathbb{Z}^{n\times n}\) be matrices satisfying \(\max_{i,j}\{\left|A_{i,j}\right|,\left|B_{i,j}\right|,\left|C_{i,j}\right|\} \leq n^{c}\) and satisfying \(\|AB-C\|_{0}\leq n^{\delta}\) for \(0\leq\delta\leq 2\), and let \(\varepsilon=\varepsilon(n)\geq 1/n\). Then there is a randomized algorithm for MMV on input \(A,B,C\) that runs in \(\widetilde{O}(n^{2})\) time, succeeds with probability \(1-\varepsilon\), and uses at most \(\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\) bits of randomness._
Theorem 1.2 improves on the number of random bits used by the algorithm of Kimbrel and Sinha [16] when \(\delta<2\) (which uses \(\log_{2}(n)+\log_{2}(1/\varepsilon)+O(1)\) random bits regardless of the sparsity of \(AB-C\)), and matches the number of random bits used by their algorithm when \(\delta=2\). The algorithms both run in \(\widetilde{O}(n^{2})\) time. In fact, one may think of the algorithm summarized in Theorem 1.2 as a natural extension of the algorithm in [16] to handle the sparse case more efficiently, although it requires additional techniques to implement. (Our algorithm requires matrices with a stronger pseudorandomness property than theirs; see the "algorithmic techniques" section below.)
We note that Theorem 1.2 only improves on known algorithms when \(1<\delta<2\), and only by a factor of \(\delta/2\). Indeed, as mentioned above, when \(\delta\leq 1\) Kunnemann's algorithm [22] solves MMV _deterministically_ in \(\widetilde{O}(n^{2})\) time, and when \(\delta=2\) our algorithm matches the number of random bits used by Kimbrel and Sinha's algorithm. Although seemingly modest, this constant-factor improvement is not surprising: any super-constant improvement on the number of bits used by [16] (i.e., an MMV algorithm using \(o(\log n)\) random bits) could be turned into a deterministic algorithm for MMV with only a sub-polynomial (i.e., \(n^{o(1)}\)) multiplicative increase in running time.
Algorithmic techniques. Here we briefly summarize the techniques that we use for the MMV algorithms corresponding to Theorems 1.1 and 1.2. We start by remarking that Theorems 1.1 and 1.2 hold not just for matrices over \(\mathbb{Z}\) with entries of polynomial magnitude, but also for matrices over all finite fields \(\mathbb{F}_{q}\) with \(q\leq\operatorname{poly}(n)\).3 In fact, our algorithms work "natively" in the finite field setting--i.e., on \(n\times n\) matrices \(A,B,C\) over finite fields \(\mathbb{F}_{q}\)--which is directly amenable to using techniques from coding theory. We assume this setting in the description below.
Footnote 3: The algorithms also work over larger finite fields, but with slower running times due to the increased bit complexity of performing arithmetic operations over those fields.
For both algorithms, we will use the observation that if \(AB-C\) is \(t\)-sparse and non-zero then at least one row or column of \(AB-C\) must be non-zero and \(k\)-sparse for \(k:=\lfloor\sqrt{t}\rfloor\). (Indeed, if every non-zero row has at least \(k+1\) non-zero entries, then there are at most \(t/(k+1)<k+1\) non-zero rows, so every column of \(AB-C\) is \(k\)-sparse and some column is non-zero.) A similar observation appears in [24].
Our first, deterministic algorithm (Theorem 1.1) uses a matrix \(H\) over \(\mathbb{F}_{q}\) such that any \(k\) columns of \(H\) are linearly independent. Equivalently, we require a matrix \(H\in\mathbb{F}_{q}^{m\times n}\) such that for all non-zero vectors \(\boldsymbol{x}\in\mathbb{F}_{q}^{n}\) with \(\|\boldsymbol{x}\|_{0}\leq k\) (corresponding to a sparse, non-zero column or row of \(AB\)), \(H\boldsymbol{x}\neq\boldsymbol{0}\). This is exactly the property guaranteed by the parity-check matrix \(H\) of an
error correcting code \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\) with minimum distance \(d>k\). Moreover, if a code \(\mathcal{C}\) with minimum distance \(d=k+1\) is a so-called Maximum Distance Separable (MDS) code, then it has a \(k\times n\) parity-check matrix \(H\). MDS codes with useful parameters exist and have efficiently constructible parity-check matrices. In particular, (generalized) Reed-Solomon codes are MDS codes, and exist when \(k\leq n\leq q\) (see, e.g., [12]). Their parity-check matrices \(H\) are Vandermonde matrices, which are constructible in \(kn\cdot\operatorname{poly}(\log q)\leq n^{2}\cdot\operatorname{poly}(\log q)\) time.
Our algorithm then uses fast rectangular matrix multiplication to compute \(HAB=(HA)B\) and \(H(AB)^{T}=(HB^{T})A^{T}\) using roughly \(n^{\omega(1,1,\delta/2)}\) arithmetic operations, where \(0\leq\delta\leq 2\) is such that \(t\leq n^{\delta}\). If \(AB=0\), then \(HAB=H(AB)^{T}=0\). On the other hand, if \(AB\neq 0\) then \(AB\) is \(t\)-sparse and therefore has a non-zero, \(k\)-sparse row or column. So, at least one of the expressions \(HAB\) and \(H(AB)^{T}\) is non-zero.
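To make this concrete, the following is a minimal Python sketch of the deterministic test over a prime field \(\mathbb{F}_{q}\); the function names are ours, and we assume \(q\) is a prime with \(n\leq q\) and that the inputs are numpy integer matrices with entries reduced mod \(q\). It uses numpy's naive matrix products in place of fast rectangular matrix multiplication, so it illustrates correctness rather than the \(n^{\omega(1,1,\delta/2)}\) operation count.

```python
import numpy as np

def vandermonde_parity_check(k, n, q):
    """k x n Vandermonde matrix H over F_q with H[i, j] = j**i mod q
    (q prime, n <= q). Any k of its columns are linearly independent,
    so Hx != 0 for every non-zero k-sparse x in F_q^n."""
    H = np.ones((k, n), dtype=np.int64)
    cols = np.arange(n, dtype=np.int64)
    for i in range(1, k):
        H[i] = H[i - 1] * cols % q
    return H

def all_zeroes_sparse(A, B, t, q):
    """Decide whether AB == 0 over F_q, assuming ||AB||_0 <= t. If
    AB != 0, it has a non-zero k-sparse row or column for
    k = floor(sqrt(t)), and multiplying by H cannot annihilate it."""
    n = A.shape[0]
    k = max(1, int(t ** 0.5))
    H = vandermonde_parity_check(k, n, q)
    row_tests = (H @ A % q) @ B % q      # H(AB)   = (HA)B,     k x n
    col_tests = (H @ B.T % q) @ A.T % q  # H(AB)^T = (HB^T)A^T, k x n
    return not row_tests.any() and not col_tests.any()
```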
Our second, randomized algorithm (Theorem 1.2) uses a matrix \(S\in\mathbb{F}_{q}^{m\times n}\) with the property that all of its \(k\times k\) submatrices are non-singular. Matrices \(S\) with this property are called \(k\)_-regular_, and matrices \(S\) all of whose square submatrices (of any size) are non-singular are called _super regular_ (see also the definitions in Section 2.4). We note that \(k\)-regularity is stronger than the property we require for \(H\) in the first algorithm. In particular, if a matrix \(S\in\mathbb{F}_{q}^{m\times n}\) is \(k\)-regular and \(0<\|\boldsymbol{x}\|_{0}\leq k\), then \(\|S\boldsymbol{x}\|_{0}\geq m-k+1\). I.e., \(S\) being \(k\)-regular implies not only that \(S\boldsymbol{x}\) is non-zero, but that \(S\boldsymbol{x}\) has relatively high Hamming weight for such \(\boldsymbol{x}\). This property is useful because it implies that \(\Pr[\langle\boldsymbol{s},\boldsymbol{x}\rangle\neq 0]\geq(m-k+1)/m\), where \(\boldsymbol{s}\) is a random row of \(S\). Indeed, this observation leads to our second algorithm: we sample a random row \(\boldsymbol{s}\) from a \(k\)-regular matrix \(S\in\mathbb{F}_{q}^{m\times n}\) and check whether \(\boldsymbol{s}AB=\boldsymbol{0}\) and \(\boldsymbol{s}(AB)^{T}=\boldsymbol{0}\). Setting, e.g., \(m=2k\), we get that this algorithm succeeds with probability at least \((2k-k+1)/(2k)>1/2\).
It remains to construct (rows of) \(k\)-regular matrices \(S\) efficiently. Although a priori it is not even obvious that \(k\)-regular matrices exist for arbitrarily large \(k\), in fact _super regular_ matrices exist and are efficiently constructible. Specifically, we use a family of super regular (and hence \(k\)-regular) matrices called _Cauchy matrices_; the entries of such a matrix \(S\) are defined as \(S_{i,j}=1/(x_{i}-y_{j})\), where \(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}\) are distinct elements of \(\mathbb{F}_{q}\). In fact, as follows from their definition, given a (random) index \(1\leq i\leq m\), it is even possible to construct the \(i\)th row of a Cauchy matrix \(S\) efficiently without computing the rest of the matrix, as needed.
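A minimal Python sketch of one round of this randomized test follows; the names and the particular choice of Cauchy nodes \(x_{i}=i\), \(y_{j}=m+j\) are ours, and we assume a prime \(q>m+n\) so that the nodes are distinct.

```python
import numpy as np

def cauchy_row(i, m, n, q):
    """Row i of the m x n Cauchy matrix S over F_q with nodes x_i = i
    and y_j = m + j (distinct when q > m + n), so that
    S[i, j] = (x_i - y_j)^{-1} mod q; built without forming all of S."""
    diffs = (i - (m + np.arange(n))) % q
    return np.array([pow(int(d), -1, q) for d in diffs], dtype=np.int64)

def all_zeroes_one_round(A, B, t, q, rng):
    """One round of the randomized test for AB == 0 over F_q, given
    ||AB||_0 <= t. For k = floor(sqrt(t)) and m = 2k, k-regularity
    gives ||Sx||_0 >= m - k + 1 for non-zero k-sparse x, so a uniform
    random row s of S witnesses AB != 0 with probability > 1/2."""
    n = A.shape[0]
    k = max(1, int(t ** 0.5))
    m = 2 * k
    s = cauchy_row(int(rng.integers(m)), m, n, q)  # ~log2(m) random bits
    row_test = (s @ A % q) @ B % q      # s(AB)
    col_test = (s @ B.T % q) @ A.T % q  # s(AB)^T
    return not row_test.any() and not col_test.any()
```

Repeating independent rounds drives the one-sided error down exponentially, at the cost of the same number of random bits per round.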
Finally, we remark that there is a deep connection between MDS codes and super regular matrices (and more specifically between generalized Reed-Solomon codes, Vandermonde matrices, and Cauchy matrices). Specifically, if \(G=(I\mid S)\) is the generator matrix of an MDS code in systematic form, then \(S\) is a super regular matrix [13]. Moreover, [13] shows that if such a matrix \(G\) is the generator matrix of a generalized Reed-Solomon code, then \(S\) is a Cauchy matrix [13]. See also Remark 2.15.
#### 1.1.2 Barriers
The dream for the line of work described in this paper is to give a deterministic, \(\widetilde{O}(n^{2})\)-time algorithm for MMV on arbitrary matrices. However, achieving this goal has proven to be very difficult despite a substantial amount of work towards it. So, it is natural to ask whether perhaps no such algorithm exists, i.e., whether MMV is in some sense _hard_. We first show a result in this direction, and then show a barrier to proving SETH-hardness of MMV (and even MM).4
Linear algebraic algorithms barrier. We first prove that a natural class of deterministic linear algebraic algorithms for MMV based on multiplying smaller matrices--including the algorithm in Theorem 1.1--cannot run in less than \(n^{\omega}\) time when instantiated with a matrix multiplication subroutine running in worst-case rectangular matrix multiplication time and when performing all multiplications independently. Specifically, the \(\Omega(n^{\omega})\) lower bound holds if for all \(\alpha,\beta\geq 0\), the subroutine requires \(\Omega(n^{\omega(1,1,\alpha)})\) time to compute the product of an \(n^{\alpha}\times n\) matrix and an \(n\times n\) matrix, and \(\Omega(n^{\omega(1,1,\beta)})\) time to compute the product of an \(n\times n\) matrix and an \(n\times n^{\beta}\) matrix.
The idea is that natural algorithms for verifying that \(AB=C\) for \(n\times n\) matrices \(A,B,C\), including ours, amount to performing \(k\) "zero tests."5 More specifically, the \(i\)th such test checks that \(L_{i}(AB-C)R_{i}=0\) for some fixed \(n^{\alpha_{i}}\times n\) matrix \(L_{i}\) and \(n\times n^{\beta_{i}}\) matrix \(R_{i}\), where \(\alpha_{i},\beta_{i}\in[0,1]\). We observe that the conditions \(L_{i}(AB-C)R_{i}=0\) for \(i=1,\ldots,k\) together correspond to a homogeneous system of \(\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\) linear equations in the \(n^{2}\) variables \(X_{i,j}\), \(1\leq i,j\leq n\), corresponding to the entries of \(X=AB-C\). So, for this system to have \(X_{i,j}=0\) for \(1\leq i,j\leq n\) as its unique solution, it must be the case that \(\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\geq n^{2}\), which we show implies that \(\sum_{i=1}^{k}n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))}\geq\sum_{i=1}^{k}n^{ \omega(\alpha_{i},1,\beta_{i})}\geq n^{\omega}\). Therefore, an algorithm that independently computes each product \(L_{i}ABR_{i}\) in time \(\Omega(n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))})\) uses \(\Omega(n^{\omega})\) time. This barrier appears in Theorem 3.7 and Corollary 3.8.
Footnote 5: Here we describe our barrier for MMV, but in Section 3.3 we present it for the closely related All Zeroes Problem.
A barrier to SETH-hardness of MM. While the matrix multiplication exponent may well equal \(2\) under certain reasonable conjectures (see [10, Conjecture 4.7 and Conjecture 3.4]), the best provable upper bound we have is \(\omega<2.371552\) by [13]. Nevertheless, given the apparent difficulty of showing \(\omega\approx 2\), it is natural to ask whether MM is in fact _hard_. To that end, we study its hardness under the _Strong Exponential Time Hypothesis_ (SETH). However, rather than showing SETH-hardness of MM, we show a _barrier_ to proving \(n^{\gamma}\)-hardness of MM for constant \(\gamma>2\) under SETH. (Because MMV is trivially reducible to MM, our hardness barrier result also applies to MMV.)
We informally define several concepts used in the statement of our result. SETH says that for large constant \(k\), \(k\)-SAT instances on \(n\) variables take nearly \(2^{n}\) time to solve, and the _Non-deterministic Strong Exponential Time Hypothesis_ (NSETH) says that certifying that such \(k\)-SAT formulas are _not_ satisfiable takes nearly \(2^{n}\) time even for nondeterministic algorithms. We call a matrix _rigid_ if the Hamming distance between it and all low-rank matrices is high (the Hamming distance and rank are quantified by two parameters). Rigid matrices have many connections to complexity theory and other areas, and a key goal is to find explicit, deterministic constructions of such matrices.
Intuitively, NSETH rules out showing hardness of problems with non-trivial co-nondeterministic algorithms under SETH. Somewhat more precisely, assuming NSETH, problems contained in \(\mathsf{coTIME}[f(n)]\) (but perhaps only known to be in \(\mathsf{TIME}[g(n)]\) for \(g(n)=\omega(f(n))\)) cannot be shown to be \(\Omega(f(n)^{1+\varepsilon})\)-hard under SETH. (See Definition 2.22 for a formal definition of SETH-hardness.) Kunnemann [14] noted that, because Freivalds's algorithm shows that MMV is in \(\mathsf{coTIME}[n^{2}\cdot\mathrm{poly}(\log n)]\), NSETH rules out showing \(\Omega(n^{\gamma})\) hardness of MMV under SETH for constant \(\gamma>2\).
In this work, we extend this observation and give a barrier not only to showing SETH-hardness of MMV but to showing hardness of MM. Our barrier result says that, if there exists a constant \(\gamma>2\) and a reduction from \(k\)-SAT to MM such that an \(O(n^{\gamma-\varepsilon})\)-time algorithm for MM for any constant
\(\varepsilon>0\) breaks SETH, then either (1) the _Nondeterministic Strong Exponential Time Hypothesis_ (NSETH) is false, or (2) a new non-randomized algorithm for computing (arbitrarily large) rigid matrices exists. We also note that, by known results, falsifying NSETH implies a new circuit lower bound as a consequence. In short, our barrier result says that showing \(n^{\gamma}\)-hardness of MM under SETH for \(\gamma>2\) would lead to major progress on important questions in complexity theory. See Theorem 4.4 for a formal statement.
A key idea that we use for proving our result is that it is possible to compute the product of two _non-rigid_ matrices efficiently using a _nondeterministic_ algorithm. This follows from two facts. First, by definition, a non-rigid matrix is the sum of a low-rank matrix \(L\) and a sparse matrix \(S\), and using nondeterminism it is possible to guess \(L\) and \(S\) efficiently. Second, it is possible to compute the product of two sparse matrices or a low-rank matrix and another matrix efficiently. (In fact, we also use nondeterminism to guess a _rank factorization_ of \(L\), and this factorization is what allows for fast multiplication by \(L\).)
Very roughly, we prove the barrier result as follows. We first suppose that there is a reduction from \(k\)-SAT to (potentially multiple instances of) matrix multiplication. In particular, such a reduction outputs several pairs of matrices to be multiplied. We then analyze three cases:
1. If the matrices output by this reduction always have small dimension (as a function of \(n\)), then we can compute the product of each pair quickly using standard matrix multiplication algorithms (even using naive, cubic-time matrix multiplication). This leads to a fast, deterministic algorithm for \(k\)-SAT, which refutes SETH (and hence NSETH).
2. If the matrices output by this reduction are always _not_ rigid, then we can compute the product of each pair quickly using the nondeterministic algorithm sketched above. This leads to a fast, nondeterministic algorithm for showing that \(k\)-SAT formulas are not satisfiable, which refutes NSETH.
3. Finally, if neither of the above cases holds, then the reduction must sometimes output rigid matrices with large dimension as a function of \(n\). So, we obtain an algorithm for generating arbitrarily large rigid matrices using an NP oracle: iterate through all \(k\)-SAT formulas \(\varphi\) with at most a given number of variables, apply the reduction from \(k\)-SAT to MM to each formula, and then use the NP oracle to check whether each large matrix output by the reduction is rigid.
We remark that although NSETH is a strong and not necessarily widely believed conjecture, [15, 16] showed that refuting it (as in Item 2 above) would nevertheless imply an interesting new circuit lower bound. Specifically, they showed that if NSETH is false, then the complexity class \(\mathsf{E}^{\mathsf{NP}}\) requires series-parallel circuits of size \(\omega(n)\).
Additionally, we remark that despite how slow the "iterate through all sufficiently large \(k\)-SAT formulas and apply the \(k\)-SAT-to-MM reduction to each one" algorithm described in Item 3 seems, it would still substantially improve on state-of-the-art non-randomized algorithms for generating rigid matrices. This is also true despite the fact that the algorithm uses an NP oracle. See Section 4.3 for a more thorough discussion.
#### 1.1.3 Reductions
Again, motivated by the apparent challenge of fully derandomizing Freivalds's algorithm, we study relationships between variants of MMV with the goal of understanding what makes the problem
hard to solve deterministically in \(\widetilde{O}(n^{2})\) time but easy to solve in \(\widetilde{O}(n^{2})\) time using randomness (in contrast to MM). More specifically, we study which variants are potentially easier than MMV (i.e., reducible to MMV, but not obviously solvable deterministically in \(\widetilde{O}(n^{2})\) time using known techniques), equivalent to MMV, and potentially harder than MMV (i.e., variants to which MMV is reducible, but which are not obviously as hard as MM). We study these questions by looking at deterministic \(\widetilde{O}(n^{2})\)-time reductions between variants. See Figure 3 for a summary of our results.
First, we show that two apparently special cases of MMV are in fact equivalent to MMV. These special cases are: (1) the _Inverse Verification Problem_, where the goal is to verify that \(B=A^{-1}\) for input matrices \(A\) and \(B\) (equivalently, the special case of MMV where \(C=I_{n}\)), and (2) the _Symmetric MMV Problem_, where the input matrices \(A\) and \(B\) (but not necessarily \(C\)) are symmetric. See Theorem 5.4. These reductions are relatively simple, and complement the (also simple) reduction of [12], who showed that the All Zeroes Problem (i.e., the special case of MMV where \(C=0\)) is MMV-complete (see Theorem 2.8).
Second, we identify two problems that are \(\widetilde{O}(n^{2})\)-time reducible to MMV, but are not clearly solvable in \(\widetilde{O}(n^{2})\) time or equivalent to MMV. These problems are: (1) the _Strong Symmetric MMV Problem_, where all three of the input matrices \(A\), \(B\), and \(C\) are symmetric, and (2) the Monochromatic All Pairs Orthogonal Vectors Problem, where the goal is, given vectors \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n}\), to decide whether \(\langle\boldsymbol{a}_{i},\boldsymbol{a}_{j}\rangle=0\) for all \(i\neq j\).
Third, we identify two problems for which there are \(\widetilde{O}(n^{2})\)-time reductions from MMV and that are \(\widetilde{O}(n^{2})\)-time reducible to MM. These "MMV/MM-intermediate problems" are: (1) the Matrix Product Sparsity Problem (MPS), in which the goal is, given matrices \(A\) and \(B\) and \(r\geq 0\) as input, to decide whether \(\|AB\|_{0}\leq r\), and (2) the \(k\)-MMV problem, in which, given matrices \(A_{1},\ldots,A_{k},C\) as input, the goal is to decide whether \(\prod_{i=1}^{k}A_{i}=C\). We note that MPS is equivalent to the counting version of the Orthogonal Vectors Problem (#OV).6 We additionally show that \(k\)-MMV is equivalent to the \(k\)-All Zeroes problem, i.e., \(k\)-MMV where \(C\) is fixed to be \(0\).
Footnote 6: Indeed, Monochromatic All Pairs Orthogonal Vectors is no harder than MMV (and not obviously equivalent), Bichromatic All Pairs Orthogonal Vectors is equivalent to the All Zeroes Problem and is therefore equivalent to MMV, and MPS/#OV is at least as hard as MMV. In the fine-grained complexity setting OV variants are usually considered with \(n\) vectors in dimension \(d=\operatorname{poly}(\log n)\); here we are considering the regime where \(d=n\).
### Related Work
We next discuss other algorithms for MMV and related problems on \(n\times n\) integer matrices \(A\), \(B\), and \(C\). We summarize these algorithms, as well as ours, in Table 1. We start by noting that it suffices to consider the special case of MMV where \(C=0\) (i.e., where the goal is to decide whether \(AB=0\)), which is called the All Zeroes Problem. Indeed, a result from [12] (which we include as Theorem 2.8) shows that there is a simple \(O(n^{2})\)-time reduction from MMV on \(n\times n\) matrices \(A,B,C\) to the All Zeroes problem on \(2n\times 2n\) matrices \(A^{\prime},B^{\prime}\) with the property that \(\|AB-C\|_{0}=\|A^{\prime}B^{\prime}\|_{0}\). So, for this section we consider the All Zeroes Problem without loss of generality.
Perhaps the most closely related works to ours are [13, 2], which use fast rectangular matrix multiplication for the _Output-Sensitive Matrix Multiplication Problem_ (OSMM). In \(t\)-OSMM, the goal is, given matrices \(A,B\) as input, to compute the product \(AB\) when it is promised to be \(t\)-sparse. There is a trivial reduction from MMV when the output is promised to be \(t\)-sparse to \(t\)-OSMM--compute \(AB\) and check whether it is equal to \(0\). Indeed, OSMM is essentially the search version of sparse MMV. However, it is not clear that the measurement matrix \(M\) in [13]
is deterministically computable in \(\widetilde{O}(n^{2})\) time, and so the algorithm in [10] is a non-uniform algorithm as described. There are other candidate measurement matrices with deterministic constructions that may work for a similar purpose [11], but the exact tradeoffs do not seem to have been analyzed and it is not clear that it is possible to get a (uniform) algorithm with the same parameters. Additionally, [10] only handles the case when all columns or rows of \(AB\) are promised to have a given sparsity, rather than the case where there is a "global bound" of \(t\) on the sparsity of the matrix product itself.
The main algorithm in [1] for OSMM (summarized in [1, Theorem 1.4]) runs in randomized time \(O(n^{1.3459\delta})\) when both the input matrices \(A,B\) and their product \(AB\) are \(n^{\delta}\)-sparse.7 We note that [1] was written independently and concurrently with this work.
Footnote 7: A more general version of this theorem, which gives an algorithm whose running time depends both on the sparsity of the input matrices \(A,B\) and of their product \(AB\), appears as [1, Theorem 1.7].
Besides simply using matrix multiplication, perhaps the most natural idea for an algorithm for the All Zeroes problem is to compute a random entry \((AB)_{i,j}\) of \(AB\) and check whether it is non-zero. If \(\|AB\|_{0}\geq n^{\delta}\), then sampling, say, \(10n^{2-\delta}\) random entries of \(AB\) independently will find a non-zero entry with good constant probability. Because computing each such entry amounts to computing an inner product, and sampling indices \(i,j\sim\{1,\ldots,n\}\) takes roughly \(2\log_{2}n\) random bits, this algorithm overall takes \(\widetilde{O}(n^{3-\delta})\) time and \(O(n^{2-\delta}\log n)\) random bits. So, this algorithm is relatively efficient and succeeds with good probability in the case when \(AB\) is dense, but even then requires a relatively large number of random bits. We also note the somewhat odd fact that this algorithm is most efficient when \(AB\) is dense, whereas our algorithms are most efficient when \(AB\) is sparse.
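For reference, here is a sketch of this folklore sampler (the naming is ours; inputs are integer numpy matrices):

```python
import numpy as np

def random_entry_test(A, B, delta, rng):
    """Folklore test for AB == 0 in the dense regime: if
    ||AB||_0 >= n**delta, a uniformly random entry of AB is non-zero
    with probability >= n**(delta - 2), so ~10 n**(2 - delta) samples
    find a witness with good constant probability. Each sample costs
    one inner product <A[i, :], B[:, j]> and ~2 log2(n) random bits."""
    n = A.shape[0]
    trials = 10 * int(n ** (2 - delta)) + 1
    for _ in range(trials):
        i, j = rng.integers(n), rng.integers(n)
        if A[i, :] @ B[:, j] != 0:
            return False  # found a non-zero entry of AB
    return True
```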
Freivalds's algorithm [13] works by sampling a uniformly random vector \(\boldsymbol{x}\sim\{0,1\}^{n}\), and outputting "YES" if \(AB\boldsymbol{x}=\boldsymbol{0}\) and "NO" otherwise. If \(AB=0\), then this algorithm is always correct, and if \(AB\neq 0\) then it fails with probability at most \(1/2\).8 In particular, Freivalds's algorithm has one-sided error with no false negatives (i.e., it is a coRP algorithm).
Footnote 8: To see this, note that in the latter case some row \(\boldsymbol{s}^{T}\) of \(AB\) must be non-zero, and let \(j^{*}\) be the index of the last non-zero entry in \(\boldsymbol{s}\). Then for uniformly random \(\boldsymbol{x}\sim\{0,1\}^{n}\), \(\Pr[AB\boldsymbol{x}=\boldsymbol{0}]\leq\Pr[\langle\boldsymbol{s},\boldsymbol {x}\rangle=0]=\Pr[s_{j^{*}}x_{j^{*}}=-\sum_{k=1}^{j^{*}-1}s_{k}x_{k}]\leq 1/2\). Moreover, this holds for matrices \(A,B\) over any integral domain \(R\) with a multiplicative identity \(1\), and so Freivalds’s algorithm works for MMV over all such \(R\).
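In code, one round of Freivalds's test is just two matrix-vector products; a minimal sketch over the integers (the naming is ours):

```python
import numpy as np

def freivalds(A, B, C, rng, reps=20):
    """Freivalds's test for AB == C. Each round draws x in {0,1}^n and
    checks A(Bx) == Cx using O(n^2) arithmetic operations; a true
    AB == C is never rejected, and a false one survives a round with
    probability at most 1/2, so reps rounds give one-sided error
    at most 2**(-reps)."""
    for _ in range(reps):
        x = rng.integers(0, 2, size=A.shape[0])  # n random bits per round
        if not np.array_equal(A @ (B @ x), C @ x):
            return False  # certificate that AB != C
    return True
```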
A key idea for subsequent algorithms was to reduce MMV to a question about polynomials. The main idea is the following. Define \(\boldsymbol{x}:=(1,x,x^{2},\ldots,x^{n-1})^{T}\), where \(x\) is an indeterminate, and define \(p_{i}(x):=(AB\boldsymbol{x})_{i}=\sum_{j=1}^{n}(AB)_{i,j}\cdot x^{j-1}\). Note that \(AB=0\) if and only if the polynomials \(p_{i}(x)\) are identically zero (as formal polynomials) for all \(i\in\{1,\ldots,n\}\). Furthermore, if the \(i\)th row of \(AB\) is non-zero then \(p_{i}(x)\) is a non-zero polynomial of degree at most \(n-1\), and therefore has at most \(n-1\) distinct complex (and hence integral) roots. So, for such \(p_{i}(x)\) and a non-empty set \(S\subset\mathbb{Z}\), \(\Pr_{\alpha\sim S}[p(\alpha)=0]\leq(n-1)/\left|S\right|\), which is less than \(1/2\) when \(\left|S\right|\geq 2n\). This observation leads to the following algorithm for MMV, which forms the basis for Kimbrel and Sinha's algorithm [12]. Sample \(\alpha\sim\{1,\ldots,2n\}\), and output "YES" if and only if \(AB\boldsymbol{\alpha}=\boldsymbol{0}\) for \(\boldsymbol{\alpha}:=(1,\alpha,\alpha^{2},\ldots,\alpha^{n-1})^{T}\). Using associativity, it is possible to compute this product as \(A(B\boldsymbol{\alpha})\) using \(O(n^{2})\) arithmetic operations.
However, there is an issue with this algorithm: it requires computing powers of \(\alpha\) up to \(\alpha^{n-1}\). These powers require \(\Omega(n)\) bits to represent for any integer \(\alpha\geq 2\), and so performing arithmetic operations with them takes \(\Omega(n)\) time. To solve this, Kimbrel and Sinha instead consider the "test vector" \(\boldsymbol{\alpha}\) modulo an (arbitrary) prime \(2n\leq q\leq 4n\), which they can find deterministically in \(O(n^{2})\) time. They show that their algorithm is still correct with good probability (over the choice of \(\alpha\)
) with this modification.
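The following Python sketch mirrors this description (the trial-division prime search and all names are ours; see [12] for the argument that the reduction modulo \(q\) preserves soundness over the random choice of \(\alpha\)):

```python
import numpy as np

def prime_in_range(lo, hi):
    """Smallest prime in [lo, hi] by trial division (sketch only;
    Bertrand's postulate guarantees one exists in [2n, 4n])."""
    for m in range(max(2, lo), hi + 1):
        if all(m % d for d in range(2, int(m ** 0.5) + 1)):
            return m
    raise ValueError("no prime in range")

def test_vector_round(A, B, C, rng):
    """One round of the test-vector check: sample alpha from
    {1, ..., 2n}, form v = (1, alpha, ..., alpha^(n-1)) mod q for a
    prime q in [2n, 4n], and accept iff A(Bv) == Cv (mod q)."""
    n = A.shape[0]
    q = prime_in_range(2 * n, 4 * n)
    alpha = int(rng.integers(1, 2 * n + 1))
    v = np.empty(n, dtype=np.int64)
    p = 1
    for i in range(n):  # powers of alpha, kept reduced mod q
        v[i] = p
        p = p * alpha % q
    lhs = A % q @ (B % q @ v % q) % q   # A(Bv) mod q
    rhs = C % q @ v % q                 # Cv mod q
    return np.array_equal(lhs, rhs)
```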
Korec and Wiedermann [13] showed how to _deterministically_ find a good \(\alpha\) for the above test--that is, a value \(\alpha\) such that \(p_{i}(\alpha)\neq 0\) if \(p_{i}\) is not identically zero--using _Cauchy's bound_, which gives an upper bound on the magnitude of the largest root of a polynomial as a function of the polynomial's coefficients. Namely, they just choose \(\alpha\) larger than Cauchy's bound. (They note that the maximum magnitude of an entry in \(AB\)--and hence of a coefficient in any of the polynomials \(p_{i}(x)\)--is at most \(n\mu^{2}\), where \(\mu\) is the maximum magnitude of an entry in \(A\) or \(B\).) Their algorithm uses only \(O(n^{2})\) arithmetic operations, but again requires computing powers of \(\alpha\) up to \(\alpha^{n-1}\), and therefore the algorithm has bit complexity \(\Omega(n^{3})\).
Additionally, we mention the work of Kunnemann [13], which works for MMV over finite fields \(\mathbb{F}_{q}\) with \(q>n^{2}\) (he reduces MMV over the integers to MMV over such fields). His algorithm works by considering the bivariate polynomial \(f(x,y)=f_{A,B}(x,y):=\boldsymbol{x}^{T}AB\boldsymbol{y}\) for \(\boldsymbol{x}=(1,x,x^{2},\ldots,x^{n-1})\), \(\boldsymbol{y}=(1,y,y^{2},\ldots,y^{n-1})\), where \(x\) and \(y\) are indeterminates, and the corresponding univariate polynomial \(g(x)=g_{A,B}(x):=f(x,x^{n})\). The coefficient of \(x^{(i-1)+(j-1)n}\) in \(g(x)\) (and of \(x^{i-1}y^{j-1}\) in \(f(x,y)\)) is equal to \((AB)_{i,j}\), and so to decide whether \(AB=0\) it suffices to decide whether \(g(x)\) (or \(f(x,y)\)) is identically zero as a formal polynomial.9 He shows that to do this it in turn suffices to decide whether \(g(\alpha^{i})=0\) for all \(i\in\{0,\ldots,t-1\}\), where \(\alpha\in\mathbb{F}_{q}\) is an element of order at least \(n^{2}\) and \(t=n^{\delta}\) is an upper bound on the sparsity of \(AB\). Indeed, he notes that the system of equations \(g(1)=\cdots=g(\alpha^{t-1})=0\) is a Vandermonde system of homogeneous linear equations in the at most \(t\) non-zero entries \((AB)_{i,j}\) in \(AB\), and so its only solution is the solution \((AB)_{i,j}=0\) for all \(1\leq i,j\leq n\) (i.e., it must be the case that \(AB=0\)). To evaluate \(g\) on the \(t\) values \(1,\alpha,\ldots,\alpha^{t-1}\) quickly, he uses a known result about fast multipoint polynomial evaluation.
Footnote 9: Indeed, [13] notes that this mapping from \(A,B\) to \(g(x)\) is a reduction from the All Zeroes Problem to Univariate Polynomial Identity Testing (UPIT).
We also note that MMV and its variants have been studied from angles other than derandomization of Freivalds's algorithm. Notably, [14] gave a _quantum_ \(O(n^{5/3})\)-time algorithm for MMV, [12] studied the _Boolean_ Matrix Multiplication Verification problem, and [13, 15] study the problem of _correcting_ matrix products. I.e., they study the problem of _computing_ \(AB\) given matrices \(A\), \(B\), and \(C\) where \(\|AB-C\|_{0}\) is guaranteed to be small, which Kunnemann showed is equivalent to OSMM.
Finally, we remark that other recent works including [1, 1, 1, 1] have studied "barriers to SETH hardness" akin to Theorem 4.4.
### Open Questions
Of course, the main question that we leave open is whether Freivalds's algorithm can be fully derandomized, i.e., whether there is a deterministic \(\widetilde{O}(n^{2})\)-time algorithm for MMV on \(n\times n\) matrices over finite fields \(\mathbb{F}_{q}\) with \(q\leq\operatorname{poly}(n)\) and integer matrices with entries \([-n^{c},n^{c}]\) for constant \(c>0\). Giving such an algorithm for a natural special case of MMV (such as those described in Section 5.2) also seems like a natural step in this direction. Additionally, it would be interesting to extend our results for MMV in the sparse regime to Output Sensitive Matrix Multiplication. The coding-theoretic techniques that we use seem amenable to this.
### Acknowledgments
We thank Amir Nayyeri for many helpful discussions in the early stages of work on this paper, and Mark Iwen [14] for answering questions about [13].
## 2 Preliminaries
We define \(\|\mathbf{v}\|_{0}\) (respectively, \(\|M\|_{0}\)) to be the number of non-zero entries (i.e., Hamming weight) in a vector \(\mathbf{v}\) (respectively, matrix \(M\)). We call a vector \(\mathbf{v}\) (respectively, matrix \(M\)) \(t\)_-sparse_ if \(\|\mathbf{v}\|_{0}\leq t\) (respectively, if \(\|M\|_{0}\leq t\)).
### Matrix Multiplication
We first give definitions related to matrix multiplication.
**Definition 2.1** (\(\text{MM}_{R}\)).: The Matrix Multiplication Problem over a ring \(R\) (\(\text{MM}_{R}\)) is defined as follows. Given matrices \(A,B\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\), compute their product \(AB\).
**Definition 2.2** (Rectangular and Square Matrix Multiplication Exponents).: For \(\alpha,\beta,\gamma\in[0,1]\), we define the _rectangular matrix multiplication exponent_\(\omega(\alpha,\beta,\gamma)\) to be the infimum over \(\omega^{\prime}>0\) such that the product of an \(n^{\alpha}\times n^{\beta}\) matrix \(A\) and an \(n^{\beta}\times n^{\gamma}\) matrix \(B\) can be computed using \(O(n^{\omega^{\prime}})\) arithmetic operations. We also define the (square) _matrix multiplication exponent_ as \(\omega:=\omega(1,1,1)\).
Additionally, we define the _dual matrix multiplication exponent_\(\omega^{\perp}\) as
\[\omega^{\perp}:=\sup\{\omega^{\prime}>0:\omega(1,1,\omega^{\prime})=2\}. \tag{1}\]
A recent line of work [10, 1, 1, 1, 11] has shown improved bounds on rectangular matrix multiplication exponents \(\omega(1,1,\beta)\) (including \(\omega=\omega(1,1,1)\)) and the dual matrix multiplication exponent \(\omega^{\perp}\). The current best bounds for all of these quantities--including, to the best of our knowledge, \(\omega(1,1,\beta)\) for all \(\beta\in(\omega^{\perp},1]\)--appear in [13]. In particular, [13] proves the following bounds:
\[\omega\leq 2.371552\,, \tag{2}\]
\[\omega^{\perp}\geq 0.321334\,. \tag{3}\]
We will use the following upper bound on rectangular matrix multiplication exponents \(\omega(1,1,\beta)\) in terms of \(\beta\), \(\omega\), and \(\omega^{\perp}\).
**Theorem 2.3** ([10, 11]).: _Let \(\beta\in[0,1]\). Then_
\[\omega(1,1,\beta)\leq\begin{cases}2&\text{if }0\leq\beta\leq\omega^{\perp}\,\\ 2+(\omega-2)\cdot\frac{\beta-\omega^{\perp}}{1-\omega^{\perp}}&\text{if } \omega^{\perp}<\beta\leq 1\.\end{cases}\]
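For concreteness, instantiating the second case with the numerical bounds (2) and (3) at \(\beta=1/2\) gives the following rough computation (plugging in the lower bound on \(\omega^{\perp}\) only weakens, i.e., increases, the resulting upper bound, so the bound remains valid):

\[\omega(1,1,1/2)\leq 2+(2.371552-2)\cdot\frac{0.5-0.321334}{1-0.321334}=2+0.371552\cdot\frac{0.178666}{0.678666}\approx 2.098\,.\]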
We note that the second case of Theorem 2.3 implies that if \(\omega>2\) then \(\omega(1,1,\beta)<\omega\) for \(\beta<1\). We will also use the following lower bound on rectangular matrix multiplication exponents.
**Lemma 2.4**.: _For any \(\alpha,\beta\in[0,1]\), \(\omega(\alpha,1,\beta)\geq\omega-2+\alpha+\beta\)._
Proof.: Consider two matrices \(X,Y\in R^{n\times n}\) over some ring \(R\). Partition \(X\) into \(\lceil n^{1-\alpha}\rceil\) submatrices with dimensions \(\lceil n^{\alpha}\rceil\times n\), and \(Y\) into \(\lceil n^{1-\beta}\rceil\) submatrices with dimensions \(n\times\lceil n^{\beta}\rceil\) (padding \(X\) and \(Y\) with zeroes if \(\lceil n^{\alpha}\rceil\) or \(\lceil n^{\beta}\rceil\) does not divide \(n\)).
Then for any constant \(\varepsilon>0\), the product \(XY\) can be computed using \(O(\lceil n^{1-\alpha}\rceil\cdot\lceil n^{1-\beta}\rceil\cdot n^{\omega(\alpha,1,\beta)+\varepsilon})=O(n^{2+\omega(\alpha,1,\beta)-\alpha-\beta+\varepsilon})\) algebraic operations by multiplying each of the \(O(n^{1-\alpha}\cdot n^{1-\beta})=O(n^{2-\alpha-\beta})\) pairs of submatrices \(X^{\prime}\) of \(X\) and \(Y^{\prime}\) of \(Y\) in time \(O(n^{\omega(\alpha,1,\beta)+\varepsilon})\). It follows that \(\omega\leq 2+\omega(\alpha,1,\beta)-\alpha-\beta\), and the result follows by rearranging.
Finally, we will also use the following fact about equivalences among rectangular matrix multiplication exponents.
**Theorem 2.5** ([14]).: _For any \(\beta\in[0,1]\), \(\omega(1,1,\beta)=\omega(1,\beta,1)=\omega(\beta,1,1)\)._
### Matrix Multiplication Verification
We next define Matrix Multiplication Verification and the All Zeroes Problem over arbitrary rings \(R\) and with a sparsity parameter \(t\).
**Definition 2.6** (\(\text{MMV}_{R}^{t}\)).: The Matrix Multiplication Verification Problem over a ring \(R\) with sparsity parameter \(t=t(n)\geq 0\) (\(\text{MMV}_{R}^{t}\)) is defined as follows. Given matrices \(A,B,C\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) such that \(\|AB-C\|_{0}\leq t\) as input, decide whether \(AB=C\).
**Definition 2.7** (\(\text{AllZeroes}_{R}^{t}\)).: The All Zeroes Problem over a ring \(R\) with sparsity parameter \(t=t(n)\geq 0\) (\(\text{AllZeroes}_{R}^{t}\)) is defined as follows. Given matrices \(A,B\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) such that \(\|AB\|_{0}\leq t\) as input, decide whether \(AB=0\).
We note that when \(t\geq n^{2}\), \(t\) does not restrict the input.
We use the following reduction from [10] with the additional observation that it is sparsity-preserving. We provide its simple proof for completeness.
**Theorem 2.8** ([10, Proposition 3.1]).: _For all \(n\in\mathbb{Z}^{+}\) and \(0\leq t\leq n^{2}\), there is a reduction from \(\text{MMV}_{R}^{t}\) on \(n\times n\) matrices to \(\text{AllZeroes}_{R}^{t}\) on \(2n\times 2n\) matrices that uses \(O(n^{2})\) arithmetic operations on \(R\)._
Proof.: Let the matrices \(A,B,C\) be an instance of MMV, and define
\[A^{\prime}:=\begin{bmatrix}A&-I\\ 0&0\end{bmatrix}\;,\;\;\;B^{\prime}:=\begin{bmatrix}B&0\\ C&0\end{bmatrix}\;.\]
We then have that
\[A^{\prime}B^{\prime}=\begin{bmatrix}A&-I\\ 0&0\end{bmatrix}\cdot\begin{bmatrix}B&0\\ C&0\end{bmatrix}=\begin{bmatrix}AB-C&0\\ 0&0\end{bmatrix}\;,\]
and therefore that \(\|A^{\prime}B^{\prime}\|_{0}=\|AB-C\|_{0}\). It follows that \(A^{\prime},B^{\prime}\) is a YES instance of \(\text{AllZeroes}_{R}^{t}\) if and only if \(A,B,C\) is a YES instance of \(\text{MMV}_{R}^{t}\), as needed.
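The reduction is direct to implement; a minimal numpy sketch (the naming is ours) follows.

```python
import numpy as np

def mmv_to_all_zeroes(A, B, C):
    """The reduction of Theorem 2.8: build 2n x 2n matrices A', B'
    with A'B' = [[AB - C, 0], [0, 0]], so that A'B' = 0 iff AB = C,
    and the reduction preserves sparsity: ||A'B'||_0 = ||AB - C||_0."""
    n = A.shape[0]
    I = np.eye(n, dtype=A.dtype)
    Z = np.zeros((n, n), dtype=A.dtype)
    A_prime = np.block([[A, -I], [Z, Z]])
    B_prime = np.block([[B, Z], [C, Z]])
    return A_prime, B_prime
```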
### Coding Theory
We next review useful background information about (linear) error-correcting codes \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\). We refer the reader to [10] for a comprehensive resource on coding theory. We start by giving a "dual" definition of error-correcting codes in terms of parity-check matrices \(H\), which will be useful for our work.
**Definition 2.9**.: Let \(k\geq 0\) and \(n\geq 1\) be integers satisfying \(k\leq n\), let \(q\) be a prime power, and let \(H\in\mathbb{F}_{q}^{(n-k)\times n}\) be a matrix with linearly independent rows. Then
\[\mathcal{C}=\mathcal{C}^{\perp}(H):=\{\boldsymbol{x}\in\mathbb{F}_{q}^{n}:H \boldsymbol{x}=\boldsymbol{0}\}\]
is the linear error-correcting code with parity check matrix \(H\).
We remark that it is also possible to define a code in the primal way using a _generator matrix_\(G\in\mathbb{F}_{q}^{k\times n}\), setting
\[\mathcal{C}=\mathcal{C}(G)=\{G^{T}\boldsymbol{x}:\boldsymbol{x}\in\mathbb{F}_ {q}^{k}\}\.\]
That is, \(\mathcal{C}(G)\) is the linear subspace of \(\mathbb{F}_{q}^{n}\) spanned by rows of \(G\). A generator matrix of the form \(G=(I_{k}\mid G^{\prime})\) is said to be in _systematic form_. Although we do not directly need this definition, it is related to a deep connection between coding theory and matrices with the "regularity" property we need for Theorem 1.2; see Remark 2.15.
The _minimum distance_ of a linear code \(\mathcal{C}\) is defined as
\[d=d(\mathcal{C}):=\min_{\begin{subarray}{c}\boldsymbol{x},\boldsymbol{y}\in \mathcal{C},\\ \boldsymbol{x}\neq\boldsymbol{y}\end{subarray}}\lVert\boldsymbol{x}- \boldsymbol{y}\rVert_{0}=\min_{\boldsymbol{x}\in\mathcal{C}\setminus\{ \boldsymbol{0}\}}\lVert\boldsymbol{x}\rVert_{0}\.\]
Note that for any \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\), \(0\leq d(\mathcal{C})\leq n\).
A linear error-correcting code \(\mathcal{C}\) with parity-check matrix \(H\in\mathbb{F}_{q}^{(n-k)\times n}\) and minimum distance \(d=d(\mathcal{C})\) is called a \([n,k,d]_{q}\) code. Here \(n\) is the _block length_ of the code and \(k\) is the _dimension_ of the code.
A primary goal of coding theory is to study the largest values of \(k=k(n)\) and \(d=d(n)\) for which \([n,k,d]_{q}\) codes exist (either for constant \(q\) or a family of values \(q=q(n)\)). A fundamental result in coding theory called the _Singleton bound_ asserts that for any \([n,k,d]_{q}\) code,
\[d\leq n-k+1. \tag{4}\]
See, e.g., [10, Theorem 4.3.1]. A code \(\mathcal{C}\) for which the preceding inequality is tight (i.e., for which \(d=n-k+1\)) is called a _maximum distance separable (MDS)_ code. We will use the fact that MDS codes exist for a wide range of values of \(k\) and \(n\), and moreover that these codes have efficiently computable parity check matrices \(H=H(k,n)\). In particular, generalized Reed-Solomon (GRS) codes are a family of such MDS codes (see [12, Theorem 5.1.1] and [10, Claim 5.2.3]).
**Theorem 2.10** ([12, Theorem 5.1.1]).: _Let \(q\) be a prime power, and let \(k\) and \(n\) be integers satisfying \(0<k\leq n\leq q\). Then there exists a \([n,k,n-k+1]_{q}\) code \(\mathcal{C}\). Moreover, there exists an algorithm to compute a parity check matrix \(H=H(k,n)\in\mathbb{F}_{q}^{(n-k)\times n}\) of such a code \(\mathcal{C}\) in \(n(n-k)\cdot\operatorname{poly}(\log q)\) time._
We remark that the parity-check matrix \(H\) has \((n-k)\cdot n\) entries, and so the preceding theorem says that \(H\) is computable in \(\operatorname{poly}(\log q)\) time per entry. For GRS codes, \(H\) is the transpose of a Vandermonde matrix. Such matrices are defined as follows.
**Definition 2.11** (Vandermonde Matrix).: Let \(m\) and \(n\) be positive integers, and let \(x_{1},\ldots,x_{n}\) be distinct elements of a field \(\mathbb{F}\). The _Vandermonde matrix_ \(V\in\mathbb{F}^{n\times m}\) generated by \(x_{1},\ldots,x_{n}\) is the matrix with \(V_{i,j}:=x_{i}^{j-1}\). I.e., \(V\) has the following form:
\[\begin{bmatrix}1&x_{1}&x_{1}^{2}&\cdots&x_{1}^{m-1}\\ 1&x_{2}&x_{2}^{2}&\cdots&x_{2}^{m-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_{n}&x_{n}^{2}&\cdots&x_{n}^{m-1}\end{bmatrix}\.\]
As a consequence of Theorem 2.10, the definition of minimum distance, and the definition of a parity check matrix we get the following corollary. In particular, the corollary uses the fact that if \(H\) is a parity check matrix for a code with minimum distance \(d\) then no non-zero vector \(\mathbf{x}\) with \(\|\mathbf{x}\|_{0}<d\) can be such that \(H\mathbf{x}=\mathbf{0}\).
**Corollary 2.12**.: _Let \(q\) be a prime power, and let \(k\) and \(n\) be integers satisfying \(0<k\leq n\leq q\). Then there exists a deterministic algorithm that runs in \(n(n-k)\cdot\operatorname{poly}(\log q)\) time for computing a matrix \(H\in\mathbb{F}_{q}^{(n-k)\times n}\) such that for all non-zero \(\mathbf{x}\in\mathbb{F}_{q}^{n}\) with \(\|\mathbf{x}\|_{0}\leq n-k\), \(H\mathbf{x}\neq\mathbf{0}\)._
We note that if \(q\leq\operatorname{poly}(n)\) then the algorithm in Corollary 2.12 runs in \(\widetilde{O}(n^{2})\) time. We also again emphasize that by the Singleton bound (Equation (4)) the parameters in Corollary 2.12 are optimal in terms of the trade-off between \(k\) and \(d\) for fixed \(n\).
### Regular and Super Regular Matrices
We next define and discuss regular matrices.
**Definition 2.13** (Regular Matrix).: Let \(\mathbb{F}\) be a field and \(k\in\mathbb{Z}^{+}\). A matrix \(S\in\mathbb{F}^{n\times m}\) is a _\(k\)-regular_ matrix if all of its \(k\times k\) submatrices are non-singular. \(S\) is _super regular_ if it is \(k\)-regular for every \(1\leq k\leq\min(n,m)\).
We will use the following simple construction of super regular matrices.
**Definition 2.14** (Cauchy Matrix).: Let \(q\) be a prime power, \(n\) and \(k\) be positive integers satisfying \(k+n\leq q\), and \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{k}\) be distinct elements of \(\mathbb{F}_{q}\). Then \(S\in\mathbb{F}_{q}^{n\times k}\) defined by \(S_{i,j}=1/(x_{i}-y_{j})\) is a _Cauchy matrix_.
_Remark 2.15_.: We again recall the correspondence between MDS codes and super regular matrices: a matrix \(G=(I_{k}\mid S)\in\mathbb{F}_{q}^{k\times n}\) is a generator matrix of an MDS code if and only if \(S\) is super regular [10]. In fact, as [10] shows, it turns out that \(G=(I_{k}\mid S)\in\mathbb{F}_{q}^{k\times n}\) is the generator matrix of a generalized Reed-Solomon (GRS) code if and only if \(S\) is a Cauchy matrix! Indeed, this illustrates a close relationship between GRS codes, Vandermonde matrices (which are generator matrices of GRS codes not in systematic form), and Cauchy matrices (which correspond to the "non-identity part" of generator matrices of GRS codes in systematic form).
For completeness, we include a proof that Cauchy matrices are super regular.
**Proposition 2.16**.: _Let \(q\) be a prime power, \(n\) and \(k\) be positive integers satisfying \(k+n\leq q\), and \(S\in\mathbb{F}_{q}^{n\times k}\) be a Cauchy matrix. Then \(S\) is super regular._
Proof.: For \(1\leq m\leq\min(n,k)\), consider an arbitrary submatrix \(S^{\prime}\in\mathbb{F}_{q}^{m\times m}\) of \(S\). Note that \(S^{\prime}\) is a Cauchy matrix, too, and \(S^{\prime}=(1/(x_{i}^{\prime}-y_{j}^{\prime}))_{i,j=1}^{m}\) for distinct \(x_{1}^{\prime},\ldots,x_{m}^{\prime},y_{1}^{\prime},\ldots,y_{m}^{\prime}\). The determinant of \(S^{\prime}\), known as the Cauchy determinant [12, Part VII, Chapter 1, Problem 3], is
\[\det(S^{\prime})=\frac{\prod_{i=2}^{m}\prod_{j=1}^{i-1}(x_{i}^{\prime}-x_{j}^ {\prime})(y_{i}^{\prime}-y_{j}^{\prime})}{\prod_{i=1}^{m}\prod_{j=1}^{m}(x_{i}^ {\prime}-y_{j}^{\prime})}\;.\]
Since \(x_{1}^{\prime},\ldots,x_{m}^{\prime},y_{1}^{\prime},\ldots,y_{m}^{\prime}\) are distinct, \(\det(S^{\prime})\neq 0\), which finishes the proof.
As a consequence of Proposition 2.16, we get an efficient construction of (columns of) super regular matrices.
**Corollary 2.17**.: _Let \(q\) be a prime power, and let \(n\), \(k\), and \(i\) be positive integers satisfying \(k+n\leq q\) and \(1\leq i\leq k\). Then there exists a deterministic algorithm that runs in time \(n\cdot\operatorname{poly}(\log q,\log n)\), and on input \((q,n,k,i)\) outputs the \(i\)th column \(\mathbf{c}_{i}\) of a super regular matrix \(S=(\mathbf{c}_{1},\ldots,\mathbf{c}_{k})\in\mathbb{F}_{q}^{n\times k}\)._
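The following sketch renders Definition 2.14 and Corollary 2.17 in Python: it emits a single column of an \(n\times k\) Cauchy matrix using \(O(n)\) field operations, and brute-forces super regularity on a tiny instance. The concrete points \(x_{r}=r\) and \(y_{j}=n+j-1\) (distinct whenever \(k+n\leq q\)) and the prime modulus are our illustrative assumptions.

```python
from itertools import combinations

def cauchy_column(q, n, k, i):
    """Column i (1-indexed) of the n x k Cauchy matrix S[r][c] = 1/(x_r - y_c) over F_q."""
    assert n + k <= q and 1 <= i <= k
    y = n + i - 1                                 # y_i; x_r = r for r = 0..n-1
    return [pow((r - y) % q, q - 2, q) for r in range(n)]

def det_mod(M, q):                                # determinant over F_q (q prime)
    M = [row[:] for row in M]; m = len(M); d = 1
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]; d = -d % q
        d = d * M[c][c] % q
        inv = pow(M[c][c], q - 2, q)
        for r in range(c + 1, m):
            f = M[r][c] * inv % q
            M[r] = [(M[r][cc] - f * M[c][cc]) % q for cc in range(m)]
    return d

n, k, q = 4, 3, 11                                # k + n <= q, as required
cols = [cauchy_column(q, n, k, i) for i in range(1, k + 1)]
S = [list(row) for row in zip(*cols)]             # assemble S from its columns
for m in range(1, min(n, k) + 1):                 # brute-force super regularity
    for rows in combinations(range(n), m):
        for cs in combinations(range(k), m):
            assert det_mod([[S[r][c] for c in cs] for r in rows], q) != 0
print("every square submatrix is non-singular: S is super regular")
```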
### Fine-Grained Complexity
In the \(k\)-SAT problem, given a \(k\)-CNF formula \(\varphi\), the task is to check if \(\varphi\) has a satisfying assignment. The complement of \(k\)-SAT, the \(k\)-TAUT problem, is to decide if all assignments to the variables of a given \(k\)-DNF formula satisfy it. The celebrated Strong Exponential Time Hypothesis (SETH) is now central to the field of fine-grained complexity.
**Definition 2.18** (Strong Exponential Time Hypothesis (SETH), [13]).: For every constant \(\varepsilon>0\) there exists \(k\in\mathbb{Z}^{+}\) such that no deterministic algorithm solves \(k\)-SAT on formulas with \(n\) variables in time \(2^{(1-\varepsilon)n}\).
The field of fine-grained complexity has leveraged SETH to prove tight bounds on the complexity of a large host of problems (see the excellent surveys by Vassilevska Williams [21, 22]). We will also use the following stronger hypothesis.
**Definition 2.19** (Nondeterministic Strong Exponential Time Hypothesis (NSETH), [14]).: For every constant \(\varepsilon>0\) there exists \(k\in\mathbb{Z}^{+}\) such that no nondeterministic algorithm solves \(k\)-TAUT on formulas with \(n\) variables in time \(2^{(1-\varepsilon)n}\).
A classical tool used in studies of algorithms for \(k\)-SAT is the Sparsification Lemma.
**Theorem 2.20** (Sparsification Lemma [13]).: _For every \(k\geq 3\) and \(\lambda>0\) there exists a constant \(c=c(k,\lambda)\) such that every \(k\)-SAT formula \(\varphi\) with \(n\) variables can be expressed as \(\varphi=\vee_{i=1}^{r}\psi_{i}\), where \(r\leq 2^{\lambda n}\) and each \(\psi_{i}\) is a \(k\)-SAT formula with at most \(cn\) clauses. Moreover, all \(\psi_{i}\) can be computed in \(2^{\lambda n}\)-time._
Now we formally define the notions of fine-grained reductions and SETH-hardness. We start with the definition of fine-grained reductions from [21, Definition 6] with a small modification that specifies the dependence of \(\delta\) on \(\varepsilon\).
**Definition 2.21** (Fine-grained reductions).: Let \(P,Q\) be problems, \(p,q\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) be non-decreasing functions and \(\delta\colon\mathbb{R}_{>0}\to\mathbb{R}_{>0}\). We say that \((P,p(n))\) _\(\delta\)-fine-grained reduces_ to \((Q,q(n))\) and write \((P,p(n))\leq_{\delta}(Q,q(n))\), if for every \(\varepsilon>0\), letting \(\delta=\delta(\varepsilon)>0\), there exists an algorithm \(\mathcal{A}\) for \(P\) with oracle access to \(Q\), a constant \(d\), and a function \(t(n)\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\), such that on any instance of \(P\) of size \(n\), the algorithm \(\mathcal{A}\)
* runs in time at most \(d(p(n))^{1-\delta}\);
* produces at most \(t(n)\) instances of \(Q\) adaptively: every instance depends on the previously produced instances as well as the answers of the oracle \(Q\);
* the sizes \(n_{i}\) of the produced instances satisfy the inequality \[\sum_{i=1}^{t(n)}q(n_{i})^{1-\varepsilon}\leq d(p(n))^{1-\delta}\,.\]
It is easy to see that if \((P,p(n))\leq_{\delta}(Q,q(n))\), then for every \(\varepsilon>0\), an algorithm for \(Q\) running in time \(O(q(n)^{1-\varepsilon})\) implies an algorithm for \(P\) running in time \(O(p(n)^{1-\delta})\). Equipped with this definition, we are ready to define SETH-hardness of a problem. SETH-hardness of a problem is a sequence of fine-grained reductions from \(k\)-SAT for every value of \(k\).
**Definition 2.22** (SETH-hardness).: Let \(p\colon\mathbb{Z}_{\geq 0}\to\mathbb{Z}_{\geq 0}\) be a non-decreasing function. We say that a problem \(P\) is \(p(n)\)_-SETH-hard_ if there exists a function \(\delta\colon\mathbb{R}_{>0}\to\mathbb{R}_{>0}\) and for every \(k\in\mathbb{N}\),
\[(k\text{-SAT},2^{n})\leq_{\delta}(P,p(n))\,.\]
It is now easy to see that if a problem \(P\) is \(p(n)\)-SETH-hard, then any algorithm solving \(P\) in time \(p(n)^{(1-\varepsilon)}\) implies an algorithm solving \(k\)-SAT in time \(2^{(1-\delta(\varepsilon))n}\) for all \(k\), thus, breaking SETH.
### Matrix Rigidity and Circuit Lower Bounds
A celebrated result of Valiant [20] from 1977 shows that the matrix of any linear map computed by a linear circuit of logarithmic depth and linear size can be written as the sum of a sparse matrix and a low-rank matrix. Valiant therefore introduced the notion of _rigid_ matrices.
**Definition 2.23**.: Let \(\mathbb{F}\) be a field, \(A\in\mathbb{F}^{n\times n}\) be a matrix, and \(0\leq r\leq n\). The rigidity of \(A\) over \(\mathbb{F}\), denoted by \(\mathcal{R}_{r}^{\mathbb{F}}(A)\), is the Hamming distance between \(A\) and the set of matrices of rank at most \(r\). Formally,
\[\mathcal{R}_{r}^{\mathbb{F}}(A):=\min_{S\colon\ \text{rank}(A-S)\leq r}\|S\|_{0}\;.\]
In other words, a matrix \(A\) has rigidity \(\mathcal{R}_{r}^{\mathbb{F}}(A)\geq s\) if and only if \(A\in\mathbb{F}^{n\times n}\)_cannot_ be written as a sum
\[A=S+L\;,\]
where \(S\in\mathbb{F}^{n\times n}\) is an \((s-1)\)-sparse matrix, and \(L\in\mathbb{F}^{n\times n}\) has low rank, \(\operatorname{rank}(L)\leq r\).
Valiant proved that a matrix \(A\in\mathbb{F}^{n\times n}\) of rigidity \(\mathcal{R}_{\varepsilon n}^{\mathbb{F}}(A)\geq n^{1+\varepsilon}\) for a constant \(\varepsilon>0\) cannot be computed by linear circuits of linear size and logarithmic depth. The construction of a rigid matrix even in the complexity class \(\mathsf{E}^{\mathsf{NP}}\) (i.e., an \(\mathsf{E}^{\mathsf{NP}}\) algorithm that outputs rigid matrices) would give us a long-awaited super-linear circuit lower bound.
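For very small matrices, rigidity can be evaluated by brute force directly from Definition 2.23. The following exponential-time Python sketch (ours, purely illustrative) does so over \(\mathbb{F}_{2}\) by enumerating all candidate low-rank matrices \(L\) and minimizing the Hamming distance \(\|A-L\|_{0}\).

```python
from itertools import product

def rank_f2(M, n):                        # Gaussian elimination over F_2
    M = [row[:] for row in M]; r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def rigidity(A, n, r):
    """R_r(A): min Hamming distance from A to a matrix of rank <= r (over F_2)."""
    best = n * n
    for bits in product([0, 1], repeat=n * n):
        L = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        if rank_f2(L, n) <= r:
            dist = sum(a != l for ra, rl in zip(A, L) for a, l in zip(ra, rl))
            best = min(best, dist)
    return best

A = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
print([rigidity(A, 3, r) for r in range(4)])   # non-increasing in r; ends at 0
```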
## 3 Fast Matrix Multiplication Verification for Sparse Matrices
In this section, we prove the main algorithmic results of this work: we present efficient deterministic and randomized algorithms for sparse MMV over the integers and finite fields. Specifically, in Sections 3.1 and 3.2, we prove Theorems 1.1 and 1.2, respectively. In Section 3.3, we give a simple barrier to extending these techniques to the general case of MMV.
### Deterministic MMV for Sparse Matrices
We present a deterministic algorithm that solves MMV over the ring of integers. We do this by first giving an algorithm that solves MMV over finite fields in Theorem 3.2, and then extending this result to the ring of integers in Corollary 3.3.
We will use the following simple observation relating the sparsity of a matrix with the sparsity of its rows and columns. This observation also appears, e.g., in [11].
**Lemma 3.1**.: _Let \(M\) be a \(t\)-sparse matrix. Then \(M\) has a non-zero, \(\sqrt{t}\)-sparse row or column._
Proof.: If \(M\) has a non-zero, \(\sqrt{t}\)-sparse row then the claim holds. So, suppose that all non-zero rows of \(M\) have more than \(\sqrt{t}\) non-zero entries. Then, because \(\|M\|_{0}\leq t\), \(M\) has fewer than \(\sqrt{t}\) non-zero rows. It follows that all non-zero columns of \(M\) are \(\sqrt{t}\)-sparse.
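A direct Python rendering of this argument (a sketch; the threshold \(\lfloor\sqrt{t}\rfloor\) suffices, since when every non-zero row is dense, the number of non-zero rows, and hence every column weight, falls below \(\sqrt{t}\)):

```python
import math

def sparse_line(M):
    """Return a non-zero row or column of M with at most sqrt(||M||_0) non-zeros."""
    t = sum(1 for row in M for v in row if v)
    s = math.isqrt(t)
    for row in M:
        if 0 < sum(1 for v in row if v) <= s:
            return "row", list(row)
    for col in zip(*M):                           # all non-zero rows were dense, so
        if 0 < sum(1 for v in col if v) <= s:     # fewer than sqrt(t) rows are non-zero
            return "col", list(col)
    return None                                   # only possible when M = 0
```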
We now prove our main theorem about deterministic MMV.
**Theorem 3.2** (Fast deterministic MMV for sparse matrices, formal).: _Let \(0\leq\delta\leq 2\) be a constant, and \(\mathbb{F}_{q}\) be a finite field. Then for every constant \(\varepsilon>0\), there is a deterministic algorithm for \(\text{MMV}_{\mathbb{F}_{q}}^{n^{\delta}}\) that runs in \(n^{\omega(1,1,\delta/2)+\varepsilon}\cdot\operatorname{poly}(\log q)\) time._
Proof.: From Theorem 2.8, it is sufficient to construct a deterministic algorithm solving the \(\text{AllZeroes}_{\mathbb{F}_{q}}^{n^{\delta}}\) problem with the same guarantee on the running time. If \(q<n\), then we consider \(\mathbb{F}=\mathbb{F}_{r}\), a field extension of \(\mathbb{F}_{q}\), where \(n\leq r<n^{2}\). (Such a field can be constructed in deterministic time \(\widetilde{O}(n)\)[12].) If \(q\geq n\), then we simply set \(\mathbb{F}=\mathbb{F}_{q}\). We remark that \(AB=0\) over \(\mathbb{F}_{q}\) if and only if \(AB=0\) over \(\mathbb{F}\), and that all arithmetic operations over \(\mathbb{F}\) can be performed in time \(\operatorname{poly}(\log q,\log n)\) (see, e.g., [12]). In the following, given two matrices \(A,B\in\mathbb{F}^{n\times n}\), with the guarantee that \(\|AB\|_{0}\leq n^{\delta}\), we will determine if \(AB=0\).
Let \(H\in\mathbb{F}^{n^{\delta/2}\times n}\) be the parity check matrix from Corollary 2.12 with \(k=n-n^{\frac{\delta}{2}}\) computed in time \(n^{2}\cdot\operatorname{poly}(\log q)\). Our algorithm reports that \(AB=0\) if and only if both \(HAB=0\) and \(H(AB)^{T}=0\).
Since the matrix \(H\) is computed in time \(n^{2}\cdot\operatorname{poly}(\log q,\log n)\), and computing the matrix products \(HAB\) and \(H(AB)^{T}\) takes time \(n^{\omega(1,1,\delta/2)+\varepsilon}\cdot\operatorname{poly}(\log q)\) for any constant \(\varepsilon>0\), we conclude the desired bound on the running time of the algorithm.
It remains to prove the correctness of the algorithm. First, if \(AB=0\), then \(HAB=H(AB)^{T}=0\) for all matrices \(H\), and, therefore, our algorithm correctly concludes that \(AB=0\). Now, suppose that \(AB\neq 0\). If a non-zero column \(\boldsymbol{c}\) of \(AB\) has \(\|\boldsymbol{c}\|_{0}\leq n^{\delta/2}\) then \(H\boldsymbol{c}\neq\boldsymbol{0}\) by Corollary 2.12 and so \(HAB\neq 0\). Similarly, if a non-zero row \(\boldsymbol{r}\) of \(AB\) has \(\|\boldsymbol{r}\|_{0}\leq n^{\delta/2}\) then \(H\boldsymbol{r}^{T}\neq\boldsymbol{0}\) and so \(H(AB)^{T}\neq 0\). Moreover, Lemma 3.1 implies that \(AB\) has a non-zero, \(n^{\delta/2}\)-sparse row or column, and so the theorem follows.
We now show how to extend Theorem 3.2 to the case of MMV over the integers.
**Corollary 3.3**.: _Let \(0\leq\delta\leq 2\) be a constant. Then for every \(\varepsilon>0\), there is a deterministic algorithm (given in Figure 1) for \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) for matrices with entries from \([-M,M]\) that runs in \(n^{\omega(1,1,\delta/2)+\varepsilon}\cdot\operatorname{poly}(\log M)\) time._
Proof.: The algorithm follows the high-level idea of the algorithm from Theorem 3.2 with a few modifications that we describe below. We present an algorithm for the \(\text{AllZeroes}_{\mathbb{Z}}^{n^{\delta}}\) problem, and use Theorem 2.8 to conclude an algorithm for the \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) problem. Let \(p\) be an arbitrary prime satisfying \(n\leq p<2n\). Such a prime can be found in deterministic time \(\widetilde{O}(n)\) using the Sieve of Eratosthenes (see, e.g., [23, Theorem 18.10 (ii)]). Let \(H\in\mathbb{F}_{p}^{n^{\delta/2}\times n}\) be the parity check matrix from Corollary 2.12 with \(k=n-n^{\delta/2}\) computed in time \(n^{2}\cdot\operatorname{poly}(\log n)\). Our algorithm reports that \(AB=0\) if and only if \(HAB=H(AB)^{T}=0\), where the multiplication is over the integers.
The running time of the algorithm is dominated by the time required to compute rectangular matrix multiplication, \(n^{\omega(1,1,\delta/2)+\varepsilon}\cdot\operatorname{poly}(\log M)\) for any \(\varepsilon>0\). If \(AB=0\), then \(HAB=H(AB)^{T}=0\) for every \(H\). Now, if \(AB\neq 0\), we can conclude that \(AB\) contains a non-zero column or row vector of sparsity at most \(n^{\delta/2}\). Let \(\mathbf{c}\in\mathbb{Z}^{n}\) be a nonzero column of \(AB\) of sparsity \(\|\mathbf{c}\|_{0}\leq n^{\delta/2}\). Assume that \(H\mathbf{c}=\mathbf{0}\) where the multiplication is over the integers. We factor out the largest power of \(p\) dividing all coefficients of \(\mathbf{c}\), and obtain a non-zero vector \(\mathbf{c}^{\prime}\in\mathbb{F}_{p}^{n}\), \(\|\mathbf{c}^{\prime}\|_{0}\leq n^{\delta/2}\) such that \(H\mathbf{c}^{\prime}=\mathbf{0}\) where the multiplication is over \(\mathbb{F}_{p}\). This contradicts the guarantee of Corollary 2.12. Therefore, \(H\mathbf{c}\neq\mathbf{0}\), and our algorithm detects that \(AB\neq 0\) in this case. Similarly, in the case when \(AB\) contains a non-zero row of sparsity at most \(n^{\delta/2}\), the algorithm detects that \(AB\neq 0\) from \(H(AB)^{T}\neq 0\).
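The pseudocode of Figure 1 did not survive extraction. The following is a hedged Python reconstruction of the AllZeroes subroutine behind Corollary 3.3, with \(H\) instantiated as in Corollary 2.12; the naive matrix products below stand in for the fast rectangular multiplications that give the stated running time.

```python
import math

def matmul(X, Y):
    Yc = list(zip(*Y))
    return [[sum(a * b for a, b in zip(row, col)) for col in Yc] for row in X]

def transpose(M):
    return [list(r) for r in zip(*M)]

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, math.isqrt(m) + 1))

def all_zeroes_sparse(A, B, delta):
    """Decide whether AB = 0 over Z, under the promise ||AB||_0 <= n^delta."""
    n = len(A)
    p = next(m for m in range(n, 2 * n + 1) if is_prime(m))    # n <= p < 2n
    h = max(1, math.ceil(n ** (delta / 2)))                    # number of check rows
    H = [[pow(j, i, p) for j in range(n)] for i in range(h)]   # Vandermonde-type H
    HAB = matmul(matmul(H, A), B)                              # H(AB), a thin product
    HBtAt = matmul(matmul(H, transpose(B)), transpose(A))      # equals H(AB)^T
    return all(v == 0 for M in (HAB, HBtAt) for row in M for v in row)
```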
We again remark that the algorithm from Corollary 3.3 outperforms the currently best known algorithm from [19] for \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) for every \(\delta\geq 1.055322\). Indeed, the algorithm from [19] runs in time \(\widetilde{O}(n^{1+\delta})\), while the running time of the algorithm from Corollary 3.3 is \(\widetilde{O}\left(n^{\omega(1,1,\delta/2)+\varepsilon}\right)\) for any \(\varepsilon>0\). From Table 1 in [20], the latter running time is asymptotically better than the former one for every \(\delta/2>0.527661\).
If the matrix multiplication exponent \(\omega=2\), then the MMV problem can be solved in time \(n^{2+\varepsilon}\) for every \(\varepsilon>0\). Below we show that even if \(\omega>2\), then our algorithm solves \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) in better-than-\(n^{\omega}\) time for every constant \(\delta<2\).
**Corollary 3.4**.: _If \(\omega>2\), then for every \(0\leq\delta<2\) there exists an \(\alpha>0\), and a deterministic algorithm for \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) for matrices with entries from \([-M,M]\) that runs in \(n^{\omega-\alpha}\cdot\operatorname{poly}(\log M)\) time._
Figure 1: Deterministic Algorithm for \(\text{MMV}_{\mathbb{Z}}^{t}\).
Proof.: The running time of the algorithm from Corollary 3.3 is \(n^{\omega(1,1,\delta/2)+\varepsilon}\) for any \(\varepsilon>0\).
If \(\delta\leq 2\omega^{\perp}\), then by Theorem 2.3, \(\omega(1,1,\delta/2)=2<\omega\). Setting \(\varepsilon=\alpha=(\omega-2)/2>0\), we have that the running time of the algorithm is bounded from above by
\[n^{2+\varepsilon}\cdot\operatorname{poly}(\log M)=n^{\omega-\alpha}\cdot \operatorname{poly}(\log M)\,.\]
If \(\delta>2\omega^{\perp}\), then by Theorem 2.3,
\[\omega(1,1,\delta/2)\leq 2+(\omega-2)\frac{\delta/2-\omega^{\perp}}{1-\omega^{ \perp}}\,.\]
In this case, setting
\[\alpha=\varepsilon=\frac{(\omega-2)(1-\delta/2)}{2(1-\omega^{\perp})}>0\,,\]
we have that the algorithm runs in time
\[n^{\omega(1,1,\delta/2)+\varepsilon}\cdot\operatorname{poly}(\log M)=n^{ \omega-\alpha}\cdot\operatorname{poly}(\log M)\,.\qed\]
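A quick numeric check of this case analysis (the values of \(\omega\) and \(\omega^{\perp}\) below are assumed stand-ins for Equations (2) and (3), which are not reproduced here):

```python
omega, omega_perp = 2.372, 0.527661        # assumed numeric stand-ins

def runtime_exponent(delta):
    """Return (bound on omega(1,1,delta/2) plus alpha, alpha), as in Corollary 3.4."""
    if delta / 2 <= omega_perp:
        alpha = (omega - 2) / 2
        return 2 + alpha, alpha
    alpha = (omega - 2) * (1 - delta / 2) / (2 * (1 - omega_perp))
    return 2 + (omega - 2) * (delta / 2 - omega_perp) / (1 - omega_perp) + alpha, alpha

for delta in (1.0, 1.5, 1.9):
    e, a = runtime_exponent(delta)
    print(f"delta={delta}: exponent {e:.4f} = omega - {a:.4f}")
```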
Interestingly, Corollary 3.4 reduces the problem of constructing faster-than-\(n^{\omega}\) algorithms for AllZeroes to the following gap problem: Distinguish the case where the product of two matrices has \(\Omega(n^{2-\varepsilon})\) non-zero entries and the case where the product is identically zero. As noted in Section 1.2, this leads to a surprising situation: on the one hand, for every \(\varepsilon>0\), if the product of the two matrices is \(n^{2-\varepsilon}\)-sparse, then there is an \(n^{\omega-\alpha}\)-time deterministic algorithm for some \(\alpha>0\); on the other hand, if the product is \(\Omega(n^{2-\varepsilon})\)-dense, then there is a randomized algorithm that runs in time \(\widetilde{O}(n^{1+\varepsilon})\) and reads only \(\widetilde{O}(n^{1+\varepsilon})\) entries of the input matrices.
### Randomized MMV for Sparse Matrices
In this section, we extend our framework to the setting of randomness-efficient probabilistic algorithms for MMV. We combine our sparsity-based results with [10] to get the best known randomized algorithm for MMV in terms of both running time and the number of random bits used. In Theorem 3.5, we present an algorithm for MMV over finite fields, and in Corollary 3.6 we extend it to the case of matrices over the integers.
**Theorem 3.5** (Fast randomized MMV for sparse matrices, formal).: _Let \(0\leq\delta\leq 2\) be a constant, and \(\mathbb{F}_{q}\) be a finite field. Then for every \(\varepsilon>0\), there is a randomized algorithm for \(\text{MMV}_{\mathbb{F}_{q}}^{n^{\delta}}\) that runs in \(n^{2}\cdot\operatorname{poly}(\log(nq/\varepsilon))\) time, succeeds with probability \(1-\varepsilon\), and uses at most \(\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\) bits of randomness._
Proof.: Let \(k=\lceil n^{\delta/2}/\varepsilon\rceil\). If \(q<k+n\), then we consider \(\mathbb{F}=\mathbb{F}_{r}\), a field extension of \(\mathbb{F}_{q}\), where \(k+n\leq r<(k+n)^{2}\), otherwise we set \(\mathbb{F}=\mathbb{F}_{q}\). We note that \(\mathbb{F}\) can be constructed in time \(q\cdot\operatorname{poly}(\log(kq))=n^{2}\cdot\operatorname{poly}(\log(nq/ \varepsilon))\)[11], all arithmetic operations over \(\mathbb{F}\) can be performed in time \(\operatorname{poly}(\log|\mathbb{F}|)=\operatorname{poly}(\log(nq/\varepsilon))\)[11], and that \(AB=C\) over \(\mathbb{F}\) if and only if \(AB=C\) over \(\mathbb{F}_{q}\).
Let \(S\in\mathbb{F}^{n\times k}\) be an \(n^{\delta/2}\)-regular matrix, and \(\mathbf{s}\in\mathbb{F}^{n}\) be a uniformly random column of \(S\). Our algorithm reports that \(AB=C\) if and only if \(AB\mathbf{s}=C\mathbf{s}\) and \((AB)^{T}\mathbf{s}=C^{T}\mathbf{s}\). In the following, we will show that the running time of the algorithm is \(n^{2}\cdot\operatorname{poly}(\log(nq/\varepsilon))\), the number of random bits used is \(\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\), and finally we will prove the correctness of the algorithm.
From Corollary 2.17, one can compute \(\mathbf{s}\) in deterministic time \(n\cdot\operatorname{poly}(\log(nq/\varepsilon))\). Each of \(AB\mathbf{s}\) and \((AB)^{T}\mathbf{s}\) can be computed by two matrix-vector multiplications in deterministic \(n^{2}\cdot\operatorname{poly}(\log(nq/\varepsilon))\) time.
The described algorithm only uses randomness to generate a uniformly random column index \(1\leq i\leq k\), which requires \(\lceil\log_{2}(k)\rceil=\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\) random bits.
If \(AB=C\), then for every vector \(\mathbf{s}\), we have \(AB\mathbf{s}=C\mathbf{s}\) and \((AB)^{T}\mathbf{s}=C^{T}\mathbf{s}\), and the algorithm correctly concludes that \(AB=C\). For the case when \(AB\neq C\), we want the algorithm to detect this with probability at least \(1-\varepsilon\). To this end, we will show that for every \(n^{\delta/2}\)-regular matrix \(S\in\mathbb{F}^{n\times k}\), either the matrix \((AB-C)S\) or the matrix \((AB-C)^{T}S\) has at least a \(1-\varepsilon\) fraction of non-zero columns.
By Lemma 3.1, if \(\|AB-C\|_{0}\leq n^{\delta}\), then \(AB-C\) contains a non-zero row or column of sparsity at most \(n^{\delta/2}\). Let us first consider the case where \(AB-C\) has a non-zero row \(\mathbf{r}\in\mathbb{F}^{n}\) of sparsity at most \(n^{\delta/2}\). We will show that \(\mathbf{r}S\) has fewer than \(n^{\delta/2}\) zero coordinates. This will finish the proof, as the success probability of our algorithm is at least the probability of sampling a column where \(\mathbf{r}S\) is non-zero, which is at least \(1-n^{\delta/2}/k\geq 1-\varepsilon\).
Assume, towards a contradiction, that \(\mathbf{r}S\) has at least \(n^{\delta/2}\) zero coordinates. Let \(I\subseteq[n],|I|=n^{\delta/2}\) be an arbitrary set containing (the indices of) all non-zero coordinates of \(\mathbf{r}\), and let \(J\subseteq[n],|J|=n^{\delta/2}\) be a set of indices of some \(n^{\delta/2}\) zero coordinates of \(\mathbf{r}S\). Then the matrix \(S\) restricted to the rows \(I\) and columns \(J\) is singular (the linear combination of these rows, given by the non-zero vector \(\mathbf{r}\), is \(\mathbf{0}\)). On the other hand, since \(S\) is \(n^{\delta/2}\)-regular, \(S\) restricted to \(I\) and \(J\) is non-singular, which leads to a contradiction. Similarly, in the case when \(AB-C\) has a non-zero column \(\mathbf{c}\in\mathbb{F}^{n}\) of sparsity at most \(n^{\delta/2}\), our algorithm will detect that \(AB\neq C\) with probability at least \(1-\varepsilon\) by checking \((AB)^{T}\mathbf{s}=C^{T}\mathbf{s}\).
Below we show how to extend the algorithm of Theorem 3.5 to the case of integer matrices.
**Corollary 3.6**.: _Let \(0\leq\delta\leq 2\) be a constant. Then for every \(\varepsilon>0\), there is a randomized algorithm (given in Figure 2) for \(\text{MMV}_{\mathbb{Z}}^{n^{\delta}}\) for matrices with entries from \([-M,M]\) that runs in \(n(n+1/\varepsilon)\cdot\operatorname{poly}(\log(nM/\varepsilon))\) time, succeeds with probability \(1-\varepsilon\), and uses at most \(\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\) bits of randomness. In particular, when \(\varepsilon\geq 1/n\), the algorithm runs in \(n^{2}\cdot\operatorname{poly}(\log n,\log M)\) time._
Figure 2: Randomized Algorithm for \(\text{MMV}_{\mathbb{Z}}^{t}\).

Proof.: Let \(k=\lceil n^{\delta/2}/\varepsilon\rceil\), and let \(p\) be an arbitrary prime satisfying \(k+n\leq p<2(k+n)\). Let \(S\in\mathbb{F}_{p}^{n\times k}\) be an \(n^{\delta/2}\)-regular matrix, and \(\mathbf{s}\in\mathbb{F}_{p}^{n}\) be a uniformly random column of \(S\). Our algorithm reports that \(AB=C\) if and only if \(AB\mathbf{s}=C\mathbf{s}\) and \((AB)^{T}\mathbf{s}=C^{T}\mathbf{s}\), where the multiplication is over \(\mathbb{Z}\).
Using the Sieve of Eratosthenes (see, e.g., [23, Theorem 18.10 (ii)]), one can find a prime \(p\) satisfying \(k+n\leq p<2(k+n)\) in deterministic time \(\widetilde{O}(k+n)\leq(n/\varepsilon)\cdot\operatorname{poly}(\log(n/\varepsilon))\). From Corollary 2.17, \(\mathbf{s}\) can be computed in deterministic time \(n\cdot\operatorname{poly}(\log(n/\varepsilon))\). Finally, \(AB\mathbf{s}\) and \((AB)^{T}\mathbf{s}\) can be computed in time \(n^{2}\cdot\operatorname{poly}(\log(nM/\varepsilon))\). The algorithm uses \(\lceil\log_{2}(k)\rceil=\lceil\delta/2\cdot\log_{2}(n)+\log_{2}(1/\varepsilon)\rceil\) random bits to generate a uniformly random column index \(1\leq i\leq k\).
If \(AB=C\), then for every vector \(\mathbf{s}\), we have \(AB\mathbf{s}=C\mathbf{s}\) and \((AB)^{T}\mathbf{s}=C^{T}\mathbf{s}\), and the algorithm correctly concludes that \(AB=C\). Assume that \(AB\neq C\). Since \(\|AB-C\|_{0}\leq n^{\delta}\), \(AB-C\) contains a non-zero row or column of sparsity at most \(n^{\delta/2}\). We will assume that \(AB-C\) has a non-zero row \(\mathbf{r}\in\mathbb{Z}^{n}\) of sparsity at most \(n^{\delta/2}\) (the other case is analogous to this one). Assume, towards a contradiction, that \(\mathbf{r}S\) has at least \(n^{\delta/2}\) zero coordinates. Let \(I\subseteq[n],|I|=n^{\delta/2}\) be an arbitrary set containing (the indices of) all non-zero coordinates of \(\mathbf{r}\), and let \(J\subseteq[n],|J|=n^{\delta/2}\) be a set of indices of some \(n^{\delta/2}\) zero coordinates of \(\mathbf{r}S\). Then the matrix \(S\) restricted to the rows \(I\) and columns \(J\) is singular over the integers. On the other hand, since \(S\) is \(n^{\delta/2}\)-regular, \(S\) restricted to \(I\) and \(J\) is non-singular over \(\mathbb{F}_{p}\), and, thus, over \(\mathbb{Z}\). This leads to a contradiction. Therefore, \(\mathbf{r}S\) has fewer than \(n^{\delta/2}\) zero coordinates, and \((AB-C)S\) contains at least \(k-n^{\delta/2}\geq(1-\varepsilon)k\) non-zero columns, so the algorithm detects that \(AB\neq C\) with probability at least \(1-\varepsilon\).
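Figure 2's pseudocode was likewise lost in extraction; the sketch below is a hedged reconstruction of this randomized test, with the \(n^{\delta/2}\)-regular matrix instantiated as a Cauchy matrix per Corollary 2.17 (the point choices \(x_{r}=r\), \(y_{i}=n+i-1\) are ours, and naive matrix-vector products stand in for the fast ones).

```python
import math
import random

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, math.isqrt(m) + 1))

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def mmv_randomized(A, B, C, delta, eps):
    """One-sided test for AB = C over Z, under the promise ||AB - C||_0 <= n^delta."""
    n = len(A)
    k = math.ceil(n ** (delta / 2) / eps)
    p = next(m for m in range(k + n, 2 * (k + n) + 1) if is_prime(m))
    i = random.randrange(1, k + 1)            # the only randomness: ~log2(k) bits
    y = n + i - 1                             # y_i; with x_r = r, all distinct mod p
    s = [pow((r - y) % p, p - 2, p) for r in range(n)]   # s = column i of S
    T = lambda M: [list(row) for row in zip(*M)]
    return (matvec(A, matvec(B, s)) == matvec(C, s) and          # A(Bs) vs Cs
            matvec(T(B), matvec(T(A), s)) == matvec(T(C), s))    # (AB)^T s vs C^T s
```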
### A Barrier for Linear Algebraic Approaches to MMV
In this section, we give a barrier for algorithms for the AllZeroes problem that are based on "zero tests." On an input instance \(A,B\), such zero tests compute the matrix product \(L(AB)R\) for fixed \(L\in\mathbb{F}^{n^{\alpha}\times n}\) and \(R\in\mathbb{F}^{n\times n^{\beta}}\) with \(\alpha,\beta\in[0,1]\) and report "\(AB=0\)" if and only if \(L(AB)R=0\). In fact, we consider algorithms based on \(k\) such tests, i.e., algorithms that report "\(AB=0\)" if and only if \(L_{i}(AB)R_{i}=0\) for fixed \(L_{i}\in\mathbb{F}^{n^{\alpha_{i}}\times n}\) and \(R_{i}\in\mathbb{F}^{n\times n^{\beta_{i}}}\) with \(\alpha_{i},\beta_{i}\in[0,1]\) for \(i=1,\dots,k\). This approach captures a large class of algorithms for MMV, including our deterministic algorithm presented in Figure 1.
In Theorem 3.7, we show a barrier to getting a fast algorithm using this approach to MMV. The idea behind the barrier is as follows. Treating the \(n^{2}\) entries \(C_{i,j}\) of \(C=AB\) as variables, the \(i\)th zero test \(L_{i}ABR_{i}\) for fixed \(L_{i}\in\mathbb{F}^{n^{\alpha_{i}}\times n},R_{i}\in\mathbb{F}^{n\times n^{\beta_{i}}}\) corresponds to a system of \(n^{\alpha_{i}+\beta_{i}}\) homogeneous linear equations. So, for \(AB=C=0\) to be the unique solution to this system of equations, we must have \(\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\geq n^{2}\).
**Theorem 3.7**.: _Let \(\mathbb{F}\) be a field, \(k\) be an integer, and for every \(1\leq i\leq k\), \(L_{i}\in\mathbb{F}^{n^{\alpha_{i}}\times n},R_{i}\in\mathbb{F}^{n\times n^{ \beta_{i}}}\) be matrices such that \(\operatorname{rank}(L_{i})=n^{\alpha_{i}}\) and \(\operatorname{rank}(R_{i})=n^{\beta_{i}}\) for \(0\leq\alpha_{i},\beta_{i}\leq 1\). Assume that for all \(A,B\in\mathbb{F}^{n\times n}\), \(AB=0\) if and only if \(L_{i}ABR_{i}=0\) for all \(1\leq i\leq k\). Then_
\[\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\geq n^{2}\,,\text{ and }\] \[\sum_{i=1}^{k}n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))}\geq\sum_{ i=1}^{k}n^{\omega(\alpha_{i},1,\beta_{i})}\geq n^{\omega}\,.\]
Proof.: Let \(C=AB\), and let us view each of the \(n^{2}\) entries \(C_{i,j}\) of \(C\) as a variable. Then each constraint \(L_{i}CR_{i}=0\) induces a homogeneous system of \(n^{\alpha_{i}+\beta_{i}}\) linear equations over these variables, and the constraints together induce a system of \(\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\) such equations. In order for a homogeneous system of linear equations to have \(C=0\) (the all-zeroes solution) as its only solution, the
number of equations needs to be at least \(n^{2}\). Hence, we have that \(\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\geq n^{2}\). Furthermore, by Lemma 2.4, \(\omega(\alpha_{i},1,\beta_{i})\geq\omega-2+\alpha_{i}+\beta_{i}\). This implies that
\[\sum_{i=1}^{k}n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))}\geq\sum_{i=1}^{k}n^{ \omega(\alpha_{i},1,\beta_{i})}\geq\sum_{i=1}^{k}n^{\omega-2+\alpha_{i}+\beta _{i}}=n^{\omega-2}\cdot\sum_{i=1}^{k}n^{\alpha_{i}+\beta_{i}}\geq n^{\omega}\,,\]
which finishes the proof.
From Theorem 3.7, we get the following corollary, which says that any algorithm for MMV using "zero tests" and computing each of the matrix products \(L_{i}ABR_{i}\) independently in \(\Omega(n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))})\) time runs in \(\Omega(n^{\omega})\) time. We note that \(\Omega(n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))})\) lower bounds the minimum worst-case cost of multiplying (1) an \(n^{\alpha_{i}}\times n\) and an \(n\times n\) matrix and (2) an \(n\times n\) matrix and an \(n\times n^{\beta_{i}}\) matrix. Any parenthesization of \(L_{i}ABR_{i}\) (e.g., \(L_{i}(A(BR_{i}))\)) requires computing a matrix product either of form (1) or (2).
**Corollary 3.8**.: _Let \(\mathcal{A}\) be an algorithm for MMV\({}_{\mathbb{F}}\) that behaves in the following way. On input \(A,B\in\mathbb{F}^{n\times n}\), \(\mathcal{A}\) computes matrices \(L_{1},R_{1},\ldots,L_{k},R_{k}\) with \(L_{i}\in\mathbb{F}^{n^{\alpha_{i}}\times n},R_{i}\in\mathbb{F}^{n\times n^{\beta_{i}}}\) for \(0\leq\alpha_{i},\beta_{i}\leq 1\) such that \(AB=0\) if and only if \(L_{i}ABR_{i}=0\) for all \(1\leq i\leq k\). It then computes each product \(L_{i}ABR_{i}\) independently using \(\Omega(n^{\omega(1,1,\min(\alpha_{i},\beta_{i}))})\geq\Omega(n^{\omega(\alpha_ {i},1,\beta_{i})})\) time. Then \(\mathcal{A}\) runs in \(\Omega(n^{\omega})\) time._
We note that Corollary 3.8 does _not_ rule out more clever ways of computing the matrix products \(L_{i}ABR_{i}\), such as exploiting the structure of \(L_{i},R_{i}\) or computing multiple products simultaneously. However, a stronger version of Corollary 3.8 with a more subtle proof does seem to rule out some of these algorithm variants too.
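The counting argument of Theorem 3.7 is easy to demonstrate numerically: for any single test \((L,R)\) with \(n^{\alpha+\beta}<n^{2}\), some non-zero \(C\) with \(LCR=0\) must exist. The sketch below finds one over the reals via the row-major identity \(\operatorname{vec}(LCR)=(L\otimes R^{T})\operatorname{vec}(C)\); the random full-rank \(L,R\) are our own choices.

```python
import numpy as np

n, a, b = 6, 2, 2                        # test shapes: L is a x n, R is n x b
rng = np.random.default_rng(0)
L, R = rng.standard_normal((a, n)), rng.standard_normal((n, b))
K = np.kron(L, R.T)                      # rank(K) <= a*b < n^2, so ker(K) != {0}
_, _, Vt = np.linalg.svd(K)
C = Vt[-1].reshape(n, n)                 # a null-space direction of the test map
print(np.abs(L @ C @ R).max() < 1e-9)    # True: C passes the zero test ...
print(np.abs(C).max() > 1e-3)            # ... yet C is non-zero
```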
## 4 On the non-SETH-hardness of Matrix Multiplication
In this section, we show a barrier for proving a lower bound for Matrix Multiplication (and, thus, for MMV) under SETH. More specifically, we show that such a result would imply either super-linear circuit lower bounds, or an explicit construction of rigid matrices.
### A Nondeterministic Algorithm for Multiplying Non-Rigid Matrices
We will use the following algorithm for multiplying sparse matrices.
**Theorem 4.1** ([25, Theorem 3.1]).: _Let \(\mathbb{F}\) be a field, \(0\leq s\leq 2\), and \(A,B\in\mathbb{F}^{n\times n}\) be \(n^{s}\)-sparse matrices. If \(\omega>2\), then there is a deterministic algorithm that computes \(AB\) using_
\[n^{(2s(\omega-2)+2-\omega\omega^{\perp})/(\omega-1-\omega^{\perp})+o(1)}\]
_arithmetic operations over \(\mathbb{F}\)._
For every \(\gamma\geq 2\), we define the largest matrix sparsity and the largest dimension of rectangular matrices such that the matrix product can be computed in time \(n^{\gamma}\).
**Definition 4.2**.: For every \(\gamma\geq 2\), let
\[r(\gamma) :=\sup\{r>0:\omega(1,1,r)\leq\gamma\}\,,\] \[s(\gamma) :=\sup\{s>0:\text{product of $n^{s}$-sparse matrices can be computed in $n^{\gamma}$ field operations}\}\,.\]
Assuming \(\omega>2\), for all \(\gamma>2\), from Theorem 2.3 and Theorem 4.1 we have that
\[r(\gamma) \geq\frac{(\gamma-2)(1-\omega^{\perp})}{\omega-2}+\omega^{\perp}\,, \tag{5}\] \[s(\gamma) \geq\frac{\gamma(\omega-1-\omega^{\perp})+\omega\omega^{\perp}-2}{ 2(\omega-2)}\,. \tag{6}\]
We now present a simple _non-deterministic_ algorithm that efficiently multiplies non-rigid matrices.
**Lemma 4.3**.: _Let \(\mathbb{F}_{q}\) be a finite field, \(\gamma\geq 2\) be a constant, and let \(A,B\in\mathbb{F}_{q}^{n\times n}\) be two matrices satisfying_
\[\mathcal{R}_{n^{r(\gamma)}}^{\mathbb{F}_{q}}(A)\leq n^{s(\gamma)}\,,\quad \mathcal{R}_{n^{r(\gamma)}}^{\mathbb{F}_{q}}(B)\leq n^{s(\gamma)}\,.\]
_Then for every \(\alpha>0\), there is a non-deterministic algorithm that computes \(AB\) in time \(n^{\gamma+\alpha}\cdot\mathrm{poly}(\log q)\)._
Proof.: First, we non-deterministically guess decompositions of the non-rigid matrices \(A\) and \(B\) into the sum of a low-rank matrix and a sparse matrix, and then guess a rank factorization of the two low-rank matrices. More specifically, we non-deterministically guess a tuple \((L_{A},R_{A},S_{A},L_{B},R_{B},S_{B})\) such that
* \(A=L_{A}\cdot R_{A}+S_{A}\) and \(B=L_{B}\cdot R_{B}+S_{B}\);
* \(L_{A},L_{B}\in\mathbb{F}_{q}^{n\times n^{r(\gamma)}}\), and \(R_{A},R_{B}\in\mathbb{F}_{q}^{n^{r(\gamma)}\times n}\);
* \(S_{A},S_{B}\in\mathbb{F}_{q}^{n\times n}\), \(S_{A}\) and \(S_{B}\) are \(n^{s(\gamma)}\)-sparse.
By the definition of rigidity (Definition 2.23), such decompositions exist for all matrices \(A,B\) satisfying the premise of the lemma. To verify the decomposition, we need to
* compute \(L_{A}\cdot R_{A}\) and \(L_{B}\cdot R_{B}\) in deterministic \(n^{\gamma+\alpha}\cdot\mathrm{poly}(\log q)\) time;
* verify that \(S_{A}\) and \(S_{B}\) are \(n^{s(\gamma)}\)-sparse in deterministic \(n^{2}\cdot\mathrm{poly}(\log q)\) time;
* check that \(A=L_{A}\cdot R_{A}+S_{A}\) and \(B=L_{B}\cdot R_{B}+S_{B}\) in deterministic \(n^{2}\cdot\mathrm{poly}(\log q)\) time.
Now we will present a deterministic algorithm that runs in time \(n^{\gamma+\alpha}\cdot\mathrm{poly}(\log q)\) and computes the product
\[AB=(L_{A}\cdot R_{A}+S_{A})(L_{B}\cdot R_{B}+S_{B})=L_{A}\cdot R_{A}\cdot L_{ B}\cdot R_{B}+L_{A}\cdot R_{A}\cdot S_{B}+S_{A}\cdot L_{B}\cdot R_{B}+S_{A} \cdot S_{B}\,.\]
Recall from Theorem 2.5 that \(\omega(r(\gamma),1,1)=\omega(1,r(\gamma),1)=\omega(1,1,r(\gamma))\). Then the first three terms in the sum above can be computed using rectangular matrix multiplication in time
\[n^{\omega(1,1,r(\gamma))+\alpha}\cdot\mathrm{poly}(\log q)\leq n^{\gamma+ \alpha}\cdot\mathrm{poly}(\log q)\,.\]
Finally, since \(S_{A}\) and \(S_{B}\) are \(n^{s(\gamma)}\)-sparse, the term \(S_{A}\cdot S_{B}\) can be computed in time \(n^{\gamma+\alpha}\cdot\mathrm{poly}(\log q)\). This finishes the proof of the lemma.
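A hedged sketch of the deterministic verification phase of this proof (over \(\mathbb{Z}\) for simplicity, with numpy's generic products standing in for the fast rectangular and sparse multiplications):

```python
import numpy as np

def verify_and_multiply(A, B, LA, RA, SA, LB, RB, SB, s_budget):
    """Check a guessed witness (L_A, R_A, S_A, L_B, R_B, S_B); if valid, return AB."""
    ok = (np.array_equal(A, LA @ RA + SA) and np.array_equal(B, LB @ RB + SB)
          and np.count_nonzero(SA) <= s_budget and np.count_nonzero(SB) <= s_budget)
    if not ok:
        return None                      # reject this non-deterministic guess
    # three rectangular products (LA, RA, LB, RB are thin) plus one sparse product
    return LA @ (RA @ LB) @ RB + LA @ (RA @ SB) + (SA @ LB) @ RB + SA @ SB
```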
### The Main SETH-Hardness Barrier Result
We are now ready to present the main result of this section: a barrier to proving SETH-hardness of Matrix Multiplication (and hence also of Matrix Multiplication Verification). Below, by \(\mathsf{QP}=\mathsf{DTIME}[n^{\mathrm{poly}(\log n)}]\) we denote the class of deterministic algorithms running in quasi-polynomial time.
**Theorem 4.4**.: _Let \(q\) be a prime power. If Matrix Multiplication over \(\mathbb{F}_{q}\) is \(n^{\gamma}\cdot\mathrm{poly}(\log q)\)-SETH-hard for a constant \(\gamma>2\), then one of the following holds:_
* _NSETH is false._
* _There is a_ \(\mathsf{QP}^{\mathsf{NP}}\) _algorithm_ \(\mathcal{A}\) _such that for infinitely many values of_ \(N\)_,_ \(\mathcal{A}(1^{N})\) _outputs a matrix_ \(A\in\mathbb{F}_{q}^{M\times M}\) _satisfying_ \[\mathcal{R}_{M^{r(\gamma^{\prime})}}^{\mathbb{F}_{q}}(A)\geq M^{s(\gamma^{ \prime})}\] _for any_ \(\gamma^{\prime}<\gamma\)_, where_ \(M=N^{\Theta(1)}\)_. Specifically, the running time of the algorithm_ \(\mathcal{A}\) _on input_ \(1^{N}\) _is_ \(N^{O(\log\log N)}\)_._
Proof.: The high-level idea of the proof is the following. Assume there is a fine-grained reduction \(R_{k}\) from \(k\)-SAT to \(\mathrm{MM}_{\mathbb{F}_{q}}\) for every \(k\). For every \(k\)-SAT formula on \(n\) variables, \(R_{k}\) adaptively produces a (possibly exponentially large) sequence of MM instances. If for all large enough \(n\), all the produced matrices are non-rigid, then from Lemma 4.3 we have an efficient non-deterministic algorithm for all encountered instances of MM, and, thus, an efficient non-deterministic algorithm for \(k\)-TAUT. This refutes NSETH. On the other hand, if for infinitely many values of \(n\), at least some of the produced instances of MM are rigid, then we have an algorithm that brute forces all \(k\)-SAT instances, applies \(R_{k}\) to them, and then uses the \(\mathsf{NP}\) oracle to verify if they are rigid. We will show that this algorithm finds rigid matrices \(M\in\mathbb{F}_{q}^{N\times N}\) in time \(N^{O(\log\log N)}\) with an \(\mathsf{NP}\) oracle. Below we formalize this argument.
Assume that \(\mathrm{MM}_{\mathbb{F}_{q}}\) is \(n^{\gamma}\)-SETH-hard (see Definition 2.22). Then there exists a function \(\delta\colon\mathbb{R}_{>0}\to\mathbb{R}_{>0}\) such that for every \(k\in\mathbb{N}\), \((k\text{-SAT},2^{n})\leq_{\delta}(\mathrm{MM}_{\mathbb{F}_{q}},n^{\gamma})\). Let us fix \(\varepsilon_{0}=(\gamma-\gamma^{\prime})/(2\gamma)\in(0,1)\) and \(\delta_{0}=\delta(\varepsilon_{0})\in(0,1)\), where \(\delta\) is the function from the definition of SETH-hardness. For every \(k\), let \(R_{k}\) be the fine-grained reduction from \(k\)-SAT to \(\mathrm{MM}_{\mathbb{F}_{q}}\) guaranteed by the \(n^{\gamma}\)-SETH-hardness of \(\mathrm{MM}_{\mathbb{F}_{q}}\). An \(n^{\gamma(1-\varepsilon_{0})}\)-time algorithm for \(\mathrm{MM}_{\mathbb{F}_{q}}\) would imply (via \(R_{k}\)) a \(2^{n(1-\delta_{0})}\)-time algorithm for \(k\)-SAT. In particular, the reduction \(R_{k}\) runs in time \(O(2^{n(1-\delta_{0})})\).
For every \(k\), let us consider the following algorithm \(\mathcal{A}_{k}\) equipped with an \(\mathsf{NP}\) oracle. On input \(1^{N}\), if \(N\) is not a power of two, the algorithm halts. Otherwise, the algorithm sets \(n=\log_{2}N\) and \(\lambda=\delta_{0}/4\). Let \(c=c(k,\lambda)\) be the constant from the Sparsification Lemma (Theorem 2.20). The algorithm \(\mathcal{A}_{k}\) enumerates all \(k\)-SAT formulas with \(n\) variables and at most \(cn\) clauses, applies the fine-grained reduction \(R_{k}\) to each of them, solves all queried instances of \(\mathrm{MM}_{\mathbb{F}_{q}}\) by the trivial cubic-time algorithm, and this way adaptively obtains all MM instances produced by \(R_{k}\). Let us denote these instances of MM by \((A_{1},B_{1}),\ldots,(A_{T},B_{T})\). For each instance \((A_{i},B_{i})\) where \(A_{i},B_{i}\in\mathbb{F}_{q}^{M\times M}\), if \(M>2^{\delta_{0}n/12}\) the algorithm checks if it satisfies
\[\mathcal{R}_{M^{r(\gamma^{\prime})}}^{\mathbb{F}_{q}}(A_{i}),\mathcal{R}_{M^{ r(\gamma^{\prime})}}^{\mathbb{F}_{q}}(B_{i})\geq M^{s(\gamma^{\prime})}\,.\]
Since the problem of checking rigidity is trivially in \(\mathsf{coNP}\), \(\mathcal{A}_{k}\) checks this using the \(\mathsf{NP}\) oracle. If \(\mathcal{A}_{k}\) finds a rigid matrix, it outputs the first such found matrix, otherwise it outputs nothing.
Now we bound the running time of the algorithms \(\mathcal{A}_{k}\). Enumeration of all \(k\)-SAT instances with \(n\) variables and \(cn\) clauses takes time \(\binom{O(n^{k})}{\leq cn}=2^{O(n\log n)}\). The running time of the Sparsification Lemma is \(2^{\lambda n}=2^{\delta_{0}n/4}\). The running time of each execution of the fine-grained reduction is \(O(2^{n(1-\delta_{0})})\), and the time needed to perform each matrix multiplication is at most \(O(2^{3n(1-\delta_{0})})\). This leads to the total running time of \(2^{O(n\log n)}=N^{O(\log\log N)}\).
If one of the constructed algorithms \(\mathcal{A}_{k}\) outputs matrices on inputs \(1^{N}\) infinitely often, then we are done, as we have a \(\mathsf{QP}^{\mathsf{NP}}\) algorithm outputting rigid matrices infinitely often. Indeed, the algorithms \(\mathcal{A}_{k}\) output only rigid matrices, and the dimension of each such matrix is \(M>2^{\delta_{0}n/12}=N^{\Theta(1)}\).
In the following, we assume that for each \(k\), there exists \(N_{k}\) such that the algorithm \(\mathcal{A}_{k}\) does not output matrices on inputs \(1^{N}\) for all \(N\geq N_{k}\). Equivalently, there exists \(n_{k}\) such that all \(k\)-SAT formulas with \(n\geq n_{k}\) variables and \(cn\) clauses, when fed to the reduction \(R_{k}\), result in instances \((A_{i},B_{i})\) of \(\mathrm{MM}_{\mathbb{F}_{q}}\) with \(A_{i},B_{i}\in\mathbb{F}_{q}^{M\times M}\) either of size \(M\leq 2^{\delta_{0}n/12}\) or satisfying
\[\mathcal{R}_{M^{r(\gamma^{\prime})}}^{\mathbb{F}_{q}}(A_{i})<M^{s(\gamma^{ \prime})}\quad\text{and}\quad\mathcal{R}_{M^{r(\gamma^{\prime})}}^{\mathbb{F}_ {q}}(B_{i})<M^{s(\gamma^{\prime})}. \tag{7}\]
We will use this to design a non-deterministic algorithm that solves \(k\)-TAUT in time \(O(2^{n(1-\delta_{0})})\) for every \(k\), which refutes NSETH.
Given a \(k\)-DNF formula \(\varphi\), we first apply the Sparsification Lemma to (the negation of) \(\varphi\) with \(\lambda=\delta_{0}/4\). This gives us \(2^{\lambda n}\)\(k\)-CNF formulas \(\varphi_{i}\) such that \(\varphi\) is a tautology if and only if each \(\varphi_{i}\) is unsatisfiable. Therefore, solving \(k\)-TAUT for the negation of each \(\varphi_{i}\) is sufficient for solving the original instance \(\varphi\). Now we apply the reduction \(R_{k}\) to each \(\varphi_{i}\). Recall that the fine-grained reduction runs in time \(2^{(1-\delta_{0})n}\), and, in particular, creates at most \(2^{n(1-\delta_{0})}\) instances of MM, each of size at most \(2^{n(1-\delta_{0})}\). Therefore, the total number \(T\) of instances of \(\mathrm{MM}_{\mathbb{F}_{q}}\) we obtain is at most \(T\leq 2^{\lambda n}\cdot 2^{n(1-\delta_{0})}=2^{(1-3\delta_{0}/4)n}\). We solve each instance of size \(M\leq 2^{\delta_{0}n/12}\) by a trivial cubic-time algorithm. The overall running time to solve all such small instances is at most \(T\cdot 2^{3\delta_{0}n/12}\leq 2^{n(1-\delta_{0}/2)}\).
Now we use the assumption that the algorithm \(\mathcal{A}_{k}\) does not find rigid matrices for all but finitely many inputs. Therefore, for every \(n\geq n_{k}\), every instance \((A_{i},B_{i})\), \(A_{i},B_{i}\in\mathbb{F}^{M\times M}\) where \(M>2^{\delta_{0}n/12}\) satisfies Equation (7). We apply Lemma 4.3 with \(\alpha=(\gamma-\gamma^{\prime})/2\) to compute \(A_{i}\cdot B_{i}\) in non-deterministic time \(M^{\gamma^{\prime}+\alpha}=M^{(\gamma+\gamma^{\prime})/2}=M^{\gamma(1-\varepsilon_{0})}\). By the guarantee of the SETH-hardness reduction \(R_{k}\), we have that a non-deterministic algorithm solving \(\mathrm{MM}_{\mathbb{F}_{q}}\) in time \(M^{\gamma(1-\varepsilon_{0})}\) implies a non-deterministic algorithm for \(k\)-TAUT with running time \(2^{n(1-\delta_{0})}\). This refutes NSETH, and finishes the proof of the theorem.
### Comparison with Known Rigidity Results
While the rigidity parameters that come out of \(n^{\gamma}\)-SETH-hardness of matrix multiplication via Theorem 4.4 are not strong enough for Valiant's program for proving circuit lower bounds, they would improve on all currently known constructions of rigid matrices. The main problem in this area is to find an _explicit_ matrix (i.e., a matrix from a complexity class where we cannot prove circuit lower bounds: \(\mathsf{P},\mathsf{NP}\), or even \(\mathsf{E}^{\mathsf{NP}}\)) of high rigidity for linear rank \(r=\Theta(n)\).
The best known rigidity lower bound for matrices constructible in polynomial time (i.e., from the class \(\mathsf{P}\)) is \(\mathcal{R}_{r}^{\mathbb{F}}(A)\geq\Omega\left(\frac{n^{2}}{r}\cdot\log\frac{n} {r}\right)\)[10, 11, 12], and it falls short of achieving the rigidity parameters needed for Valiant's program (that requires rigidity \(\Omega(n^{1+\varepsilon})\) for \(r=\Theta(n)\) for a
constant \(\varepsilon>0\)). Goldreich and Tal [14] give a matrix \(A\in\mathbb{F}^{n\times n}\) constructible in time \(2^{O(n)}\) (i.e., in the class E) of rigidity \(\mathcal{R}_{r}^{\mathbb{F}}(A)\geq\Omega\left(\frac{n^{3}}{r^{2}\log(n)}\right)\) for any \(r\geq\sqrt{n}\), which improves on the previous constructions for \(r=o\left(\frac{n}{\log(n)\log\log(n)}\right)\). Alman and Chen, and Bhangale et al. [1, 1] give a matrix \(A\in\mathbb{F}_{2}^{n\times n}\) achieving very high rigidity \(\mathcal{R}_{r}^{\mathbb{F}_{2}}(A)\geq\Omega(n^{2})\) for sub-polynomial rank \(r=2^{\varepsilon\log(n)/\log\log(n)}\) in \(\mathsf{P}^{\mathsf{NP}}\).
We now compare the rigidity bounds implied by Theorem 4.4 from \(n^{\gamma}\)-SETH-hardness for various values of \(\gamma>2\) to the known lower bounds on rigidity. For this, we use the bounds from Equations (5) and (6) and the values of \(\omega\) and \(\omega^{\perp}\) from Equations (2) and (3). The rigidity bound implied by Theorem 4.4 from \(n^{\gamma}\)-SETH-hardness for \(\gamma\geq 2.37\) would give us a matrix \(A\) of rigidity \(\mathcal{R}_{r}^{\mathbb{F}}(A)\geq n^{1.68}\) for rank as high as \(r=n^{0.997}\). The rigidity bound obtained from \(\gamma\geq 2.24\) would improve upon the best known construction in E [14] for every \(r\geq n^{0.751}\). The rigidity bound obtained from any \(\gamma\geq 2.17\) would already improve upon the best known constructions in P [13, 12, 14] for every \(r\geq n^{0.63}\). Finally, the bound obtained from any \(\gamma\geq 2\) would give us rigidity for higher rank than the known \(\mathsf{P}^{\mathsf{NP}}\) constructions [1, 1].
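The rank and rigidity exponents quoted above can be recomputed from Equations (5) and (6); as before, the numeric values of \(\omega\) and \(\omega^{\perp}\) below are assumed stand-ins for Equations (2) and (3).

```python
omega, omega_perp = 2.372, 0.527661      # assumed numeric stand-ins

def r_gamma(g):                          # Equation (5)
    return (g - 2) * (1 - omega_perp) / (omega - 2) + omega_perp

def s_gamma(g):                          # Equation (6)
    return (g * (omega - 1 - omega_perp) + omega * omega_perp - 2) / (2 * (omega - 2))

for g in (2.37, 2.24, 2.17):
    print(f"gamma={g}: rank exponent {r_gamma(g):.3f}, rigidity exponent {s_gamma(g):.3f}")
```

For \(\gamma=2.37\) this indeed gives rank exponent \(\approx 0.997\) and rigidity exponent \(\approx 1.68\), matching the numbers above.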
## 5 Reductions Between Variants of MMV
In this section, we describe variants of MMV and study their relative difficulty compared to (standard) MMV and each other under deterministic reductions using \(O(n^{2})\) ring operations. (For clarity and conciseness, in this section we assume that basic arithmetic operations over arbitrary rings take constant time, and so refer to these reductions simply as \(O(n^{2})\)-time deterministic reductions.) See Figure 3 for a summary of these variants and reductions between them.
### Problems Equivalent to MMV
In this section we present two problems (special cases of MMV) that are in fact equivalent to (standard) MMV under deterministic \(O(n^{2})\)-time reductions. These problems are: (1) the Inverse Verification Problem, and (2) the Symmetric MMV problem.
Figure 3: A diagram of reductions among MMV and related problems on \(n\times n\) matrices. Arrows represent deterministic \(O(n^{2})\)-time reductions (and double-headed arrows denote equivalences under such reductions). Red arrows indicate new (non-trivial) reductions.
#### 5.1.1 Inverse Verification
It is known that matrix multiplication is \(O(n^{2})\)-time reducible to _matrix inversion_[1, Proposition 16.6], and in fact that the problems are equivalent under \(O(n^{2})\)-time reductions. Here we use a very similar reduction to relate the verification analogs of these problems. We first define the Inverse Verification Problem.
**Definition 5.1** (Inverse-Verification\({}_{R}\)).: The Inverse Verification Problem over a ring \(R\) (Inverse-Verification\({}_{R}\)) is defined as follows. Given matrices \(A,B\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) as input, decide whether \(B=A^{-1}\).
We next give the reduction.
**Theorem 5.2**.: _For any ring \(R\), there is an \(O(n^{2})\)-time reduction from AllZeroes\({}_{R}\) on \(n\times n\) matrices to Inverse-Verification\({}_{R}\) on \(3n\times 3n\) matrices._
Proof.: Let \(A,B\in R^{n\times n}\) be the input instance of AllZeroes\({}_{R}\), and define the block matrices
\[A^{\prime}=\begin{bmatrix}I&A&0\\ 0&I&B\\ 0&0&I\end{bmatrix},\quad\ B^{\prime}=\begin{bmatrix}I&-A&0\\ 0&I&-B\\ 0&0&I\end{bmatrix}\.\]
It is straightforward to check that
\[A^{\prime}B^{\prime}=\begin{bmatrix}I&0&-AB\\ 0&I&0\\ 0&0&I\end{bmatrix}\,\]
and so \(A^{\prime}B^{\prime}=I_{3n}\) if and only if \(AB=0\).
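A quick numeric spot-check of this block construction (a sketch over \(\mathbb{Z}\), using numpy):

```python
import numpy as np

def inverse_verification_instance(A, B):
    n = len(A)
    I, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
    Ap = np.block([[I, A, Z], [Z, I, B], [Z, Z, I]])
    Bp = np.block([[I, -A, Z], [Z, I, -B], [Z, Z, I]])
    return Ap, Bp

rng = np.random.default_rng(1)
A, B = rng.integers(-3, 4, (4, 4)), rng.integers(-3, 4, (4, 4))
for B_test in (B, 0 * B):                 # a (generic) no-instance and a yes-instance
    Ap, Bp = inverse_verification_instance(A, B_test)
    assert np.array_equal(Ap @ Bp, np.eye(12, dtype=int)) == \
        np.array_equal(A @ B_test, np.zeros((4, 4), dtype=int))
print("A'B' = I exactly when AB = 0")
```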
#### 5.1.2 Symmetric MMV
Inverse-Verification is not the only variant (special case) of MMV that is equivalent to MMV. Suppose that the input matrices \(A\) and \(B\) in an MMV instance are symmetric. One might hope that this would make MMV easier; however, we next show that this is not the case.
**Definition 5.3** (Symmetric-MMV\({}_{R}\)).: The Symmetric Matrix Multiplication Verification Problem over a ring \(R\) (Symmetric-MMV\({}_{R}\)) is defined as follows. Given symmetric matrices \(A,B\in R^{n\times n}\) and a matrix \(C\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) as input, decide whether \(AB=C\).
Note that this definition does not require that \(C\) be a symmetric matrix.
**Theorem 5.4**.: _For any ring \(R\), there is an \(O(n^{2})\)-time reduction from MMV\({}_{R}\) on \(n\times n\) matrices to Symmetric-MMV\({}_{R}\) on \(3n\times 3n\) matrices._
Proof.: Let \(A,B,C\in R^{n\times n}\) be the input instance of MMV\({}_{R}\), and define matrices \(A^{\prime},B^{\prime},C^{\prime}\) as follows:
\[A^{\prime}=\begin{bmatrix}I&0&A\\ 0&I&-A\\ A^{T}&-A^{T}&I\end{bmatrix},\quad B^{\prime}=\begin{bmatrix}I&0&B^{T}\\ 0&I&B^{T}\\ B&B&I\end{bmatrix},\quad C^{\prime}=\begin{bmatrix}I+C&C&B^{T}+A\\ -C&I-C&B^{T}-A\\ A^{T}+B&-A^{T}+B&I\end{bmatrix}\.\]
It is easy to check that \(A^{\prime}\) and \(B^{\prime}\) are both symmetric, and, because adding and taking the transpose of \(n\times n\) matrices takes \(O(n^{2})\) time, \(A^{\prime},B^{\prime},C^{\prime}\) are all computable in \(O(n^{2})\) time.
Moreover, it is easy to check that
\[A^{\prime}B^{\prime}=\begin{bmatrix}I+AB&AB&B^{T}+A\\ -AB&I-AB&B^{T}-A\\ A^{T}+B&-A^{T}+B&I\end{bmatrix}\,\]
and hence that \(A^{\prime}B^{\prime}=C^{\prime}\) if and only if \(AB=C\).
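The analogous spot-check for the symmetric embedding (again a sketch over \(\mathbb{Z}\)):

```python
import numpy as np

def symmetric_instance(A, B, C):
    n = len(A)
    I, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
    Ap = np.block([[I, Z, A], [Z, I, -A], [A.T, -A.T, I]])
    Bp = np.block([[I, Z, B.T], [Z, I, B.T], [B, B, I]])
    Cp = np.block([[I + C, C, B.T + A], [-C, I - C, B.T - A], [A.T + B, -A.T + B, I]])
    return Ap, Bp, Cp

rng = np.random.default_rng(2)
A, B = rng.integers(-2, 3, (3, 3)), rng.integers(-2, 3, (3, 3))
for C in (A @ B, A @ B + 1):              # a yes-instance and a no-instance
    Ap, Bp, Cp = symmetric_instance(A, B, C)
    assert np.array_equal(Ap, Ap.T) and np.array_equal(Bp, Bp.T)
    assert np.array_equal(Ap @ Bp, Cp) == np.array_equal(A @ B, C)
print("A', B' symmetric, and A'B' = C' exactly when AB = C")
```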
### Problems no Harder than MMV
Next, in this section, we present two problems that are \(O(n^{2})\)-time reducible to MMV, but that are not obviously equivalent to MMV and are not obviously solvable in less time than MMV. The problems are: (1) the Monochromatic All-Pairs Orthogonal Vectors Problem, and (2) the Strong Symmetric MMV Problem. We propose these problems (as special cases of MMV) as good targets for making algorithmic progress on MMV.
#### 5.2.1 Monochromatic All Pairs Orthogonal Vectors
**Definition 5.5** (Monochromatic-All-Pairs-\(\text{OV}_{R}\)).: The Monochromatic All Pairs Orthogonal Vectors Problem over a ring \(R\) (Monochromatic-All-Pairs-\(\text{OV}_{R}\)) is defined as follows. Given vectors \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n}\in R^{n}\) for some \(n\in\mathbb{Z}^{+}\), decide whether for all \(i,j\in[n],i\neq j\), \(\langle\boldsymbol{a}_{i},\boldsymbol{a}_{j}\rangle=0\).
We next show that Monochromatic-All-Pairs-OV reduces to \(\text{MMV}_{R}\).
**Lemma 5.6**.: _There is an \(O(n^{2})\)-time reduction from Monochromatic-All-Pairs-\(\text{OV}_{R}\) on \(n\) vectors of length \(n\) to \(\text{MMV}_{R}\) on \(n\times n\) matrices._
Proof.: Let \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n}\in R^{n}\) be the input instance of Monochromatic-All-Pairs-\(\text{OV}\), and define \(A:=(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n})\in R^{n\times n}\). Additionally, compute
\[C=\text{diag}(\langle\boldsymbol{a}_{1},\boldsymbol{a}_{1}\rangle,\langle \boldsymbol{a}_{2},\boldsymbol{a}_{2}\rangle,\ldots,\langle\boldsymbol{a}_{n },\boldsymbol{a}_{n}\rangle)=\begin{bmatrix}\langle\boldsymbol{a}_{1}, \boldsymbol{a}_{1}\rangle&0&\cdots&0\\ 0&\langle\boldsymbol{a}_{2},\boldsymbol{a}_{2}\rangle&\cdots&0\\ \vdots&\vdots&\ddots&0\\ 0&0&\cdots&\langle\boldsymbol{a}_{n},\boldsymbol{a}_{n}\rangle\end{bmatrix}\,\]
and output the \(\text{MMV}_{R}\) instance \(A^{T},A,C\). It is straightforward to check that \(A^{T}A=C\) if and only if for all \(i,j\in[n],i\neq j\), \(\langle\boldsymbol{a}_{i},\boldsymbol{a}_{j}\rangle=0\). Moreover, computing each of \(A\), \(A^{T}\), and \(C\) requires \(O(n^{2})\) time, as needed.
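Spelled out in code (a sketch over \(\mathbb{Z}\)):

```python
import numpy as np

def ov_to_mmv(vectors):
    """Map a Monochromatic-All-Pairs-OV instance to an MMV instance (A^T, A, C)."""
    A = np.column_stack(vectors)                  # A = (a_1, ..., a_n)
    C = np.diag([int(a @ a) for a in vectors])    # diagonal of self inner products
    return A.T, A, C

vs = [np.array([1, 0, 0]), np.array([0, 2, 0]), np.array([0, 0, 3])]
At, A, C = ov_to_mmv(vs)
print(np.array_equal(At @ A, C))          # True: the vectors are pairwise orthogonal
```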
We note that, interestingly, while it is relatively straightforward to reduce Monochromatic-All-Pairs-\(\text{OV}_{R}\) to \(\text{MMV}_{R}\) in \(O(n^{2})\) time, it is not clear that such a reduction in the other direction exists. In contrast, the bichromatic analogue of Monochromatic-All-Pairs-\(\text{OV}\) (in which given \(\boldsymbol{a}_{1},\ldots,\boldsymbol{a}_{n},\boldsymbol{b}_{1},\ldots, \boldsymbol{b}_{n}\) the goal is to decide whether \(\langle\boldsymbol{a}_{i},\boldsymbol{b}_{j}\rangle=0\) for all \(1\leq i,j\leq n\)) is equivalent to AllZeroes.
#### 5.2.2 Strong Symmetric MMV
For our second problem that is no harder than MMV, we consider the special case of Symmetric-MMV when \(C\) must also be symmetric. We call this problem Strong-Symmetric-MMV and define it as follows.
**Definition 5.7** (Strong-Symmetric-\(\text{MMV}_{R}\)).: The Strong Symmetric Matrix Multiplication Verification Problem over a ring \(R\) (Strong-Symmetric-\(\text{MMV}_{R}\)) is defined as follows. Given symmetric matrices \(A,B,C\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) as input, decide whether \(AB=C\).
We note that the reduction in Theorem 5.4 does _not_ work as a reduction to Strong-Symmetric-MMV. On the other hand, it is not clear how to solve Strong-Symmetric-MMV more efficiently than MMV. However, finding faster algorithms for Strong-Symmetric-MMV seems like a good starting point for finding faster algorithms for general MMV.
We make several structural observations to this end. First, we note that the product of two symmetric matrices is itself symmetric if and only if the two matrices commute. That is, for symmetric matrices \(A\) and \(B\), \(AB=(AB)^{T}\) if and only if \(AB=BA\). Second, since symmetric matrices over \(\mathbb{R}\) are always diagonalizable, we know that two diagonalizable matrices commute if and only if they are simultaneously diagonalizable (see [12, Theorem 1.3.21]).10
Footnote 10: We say that matrices \(A,B\in\mathbb{F}^{n\times n}\) are _simultaneously diagonalizable_ if there exists some matrix \(P\in\mathbb{F}^{n\times n}\) such that \(PAP^{-1}\) and \(PBP^{-1}\) are both diagonal matrices.
While it seems like it may be possible to exploit these properties to find a faster algorithm, there are some caveats. The added restriction that \(C\) is symmetric still does not guarantee that \(AB\) is itself symmetric, so the above properties (\(A\) and \(B\) commuting and being simultaneously diagonalizable) need not hold, and we cannot assume them. Conversely, even if \(AB\) is symmetric, so that \(A\) and \(B\) commute and are simultaneously diagonalizable, this does not guarantee that \(AB=C\). So, although these observations about symmetric matrices \(A\) and \(B\) may be useful, they do not obviously lead to a faster algorithm for Strong-Symmetric-MMV.
### Problems at Least as Hard as MMV but no Harder than MM
Finally, we present two problems to which MMV is deterministically \(O(n^{2})\)-time reducible, and that are deterministically \(O(n^{2})\)-time reducible to MM. These problems are: (1) the \(k\)-MMV problem, and (2) the Matrix Product Sparsity Problem (MPS). We also present the \(k\)-AllZeroes problem, and show that it is equivalent to \(k\)-MMV.
#### 5.3.1 Generalized AllZeroes Equivalence
So far, we have only discussed verifying the product of _two_ matrices. But, it is also natural to ask about the complexity of verifying the product of \(k>2\) matrices. Accordingly, we define the following problems.
**Definition 5.8** (k-\(\text{MMV}_{R}\)).: Let \(k\geq 2\) be a fixed integer. The \(k\)-Matrix Multiplication Verification Problem over a ring \(R\) (k-\(\text{MMV}_{R}\)) is defined as follows. Given matrices \(A_{1},\ldots,A_{k},C\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) as input, decide whether \(\prod_{i=1}^{k}A_{i}=C\).
**Definition 5.9** (k-AllZeroes\({}_{R}\)).: Let \(k\geq 2\) be a fixed integer. The \(k\)-All Zeroes Problem over a ring \(R\) (k-AllZeroes\({}_{R}\)) is defined as follows. Given matrices \(A_{1},\ldots,A_{k}\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) as input, decide whether \(\prod_{i=1}^{k}A_{i}=0\).
We note that \(2\text{-MMV}_{R}\) and \(2\text{-AllZeroes}_{R}\) are "standard" \(\text{MMV}_{R}\) and \(\text{AllZeroes}_{R}\), respectively.
It turns out that many algorithms (including ours) for MMV also work for \(k\text{-MMV}\), and in fact we can generalize certain results about the complexity of \(\text{MMV}\). Additionally, it is easy to reduce MMV to \(k\text{-MMV}\) for any \(k\geq 2\), but the reduction in the opposite direction is unclear. We first show that \(k\text{-MMV}\) and \(k\text{-AllZeroes}\) are equivalent.
**Theorem 5.10**.: _Let \(k\) be some fixed positive integer, and let \(R\) be a ring. There is an \(O(n^{2})\)-time reduction from \(k\text{-MMV}_{R}\) on \(n\times n\) matrices to \(k\text{-AllZeroes}_{R}\) on \(2n\times 2n\) matrices._
Proof.: Let \(A_{1},\ldots,A_{k},C\in R^{n\times n}\) be the input instance of \(k\text{-MMV}_{R}\). We output
\[A_{1}^{\prime} :=\begin{bmatrix}A_{1}&I\\ 0&0\end{bmatrix}\,,\] \[A_{i}^{\prime} :=\begin{bmatrix}A_{i}&0\\ 0&I\end{bmatrix}\text{ for }1<i<k,\text{ and }\] \[A_{k}^{\prime} :=\begin{bmatrix}A_{k}&0\\ -C&0\end{bmatrix}\]
as our \(k\text{-AllZeroes}_{R}\) instance.
Note that
\[\prod_{i=1}^{k}A_{i}^{\prime}=\begin{bmatrix}A_{1}&I\\ 0&0\end{bmatrix}\cdot\prod_{i=2}^{k-1}\begin{bmatrix}A_{i}&0\\ 0&I\end{bmatrix}\cdot\begin{bmatrix}A_{k}&0\\ -C&0\end{bmatrix}=\begin{bmatrix}\prod_{i=1}^{k}A_{i}-C&0\\ 0&0\end{bmatrix}\,,\]
and that this product equals the all-zeroes matrix if and only if \(\prod_{i=1}^{k}A_{i}=C\).
We also note that the reduction in the other direction (from \(k\text{-AllZeroes}_{R}\) to \(k\text{-MMV}_{R}\)) follows immediately by setting \(C=0\).
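A numeric spot-check of the block construction above (sketch over \(\mathbb{Z}\), with \(k=4\)):

```python
import numpy as np

def k_allzeroes_instance(As, C):
    n = len(C)
    I, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
    first = np.block([[As[0], I], [Z, Z]])
    mids = [np.block([[Ai, Z], [Z, I]]) for Ai in As[1:-1]]
    last = np.block([[As[-1], Z], [-C, Z]])
    return [first, *mids, last]

rng = np.random.default_rng(3)
As = [rng.integers(-2, 3, (3, 3)) for _ in range(4)]
C = As[0] @ As[1] @ As[2] @ As[3]
prod = np.linalg.multi_dot(k_allzeroes_instance(As, C))
print(np.array_equal(prod, np.zeros((6, 6), dtype=int)))   # True iff the product equals C
```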
#### 5.3.2 Relation to Orthogonal Vectors and Matrix-Product-Sparsity
We noted previously how both the monochromatic and bichromatic versions of All-Pairs-OV reduce to MMV, and how the bichromatic version is equivalent to AllZeroes. However, essentially all variations of Orthogonal Vectors can be generalized by the (monochromatic or bichromatic) _counting version_ of the Orthogonal Vectors Problem (#OV), in which the goal is to decide whether the number of orthogonal pairs is less than or equal to an input parameter \(t\). It is natural then to study how #OV is related to MMV. To do so, we introduce an equivalent matrix formulation of #OV that we call the _Matrix Product Sparsity Problem_, the goal of which is to decide whether \(\|AB\|_{0}\leq t\) for some input matrices \(A,B\) and number \(t\).
**Definition 5.11** (\(\text{MPS}_{R}\)).: The Matrix Product Sparsity Problem over a ring \(R\) (\(\text{MPS}_{R}\)) is defined as follows. Given matrices \(A,B\in R^{n\times n}\) for some \(n\in\mathbb{Z}^{+}\) and \(t\in\mathbb{Z}^{+}\) as input, decide whether \(\|AB\|_{0}\leq t\).
It immediately follows that MPS is intermediate in difficulty between MMV and MM.
**Lemma 5.12**.: _Let \(R\) be a ring. There is an \(O(n^{2})\)-time reduction from \(\text{MMV}_{R}\) on \(n\times n\) matrices to \(\text{MPS}_{R}\) on \(n\times n\) matrices, and from \(\text{MPS}_{R}\) on \(n\times n\) matrices to \(\text{MM}_{R}\) on \(n\times n\) matrices._
Proof.: The reduction from \(\text{MMV}_{R}\) goes through \(\text{AllZeroes}_{R}\), to which \(\text{MMV}_{R}\) is equivalent. Indeed, \(\text{AllZeroes}_{R}\) is the special case of \(\text{MPS}_{R}\) when \(t=0\). For the reduction from \(\text{MPS}_{R}\) to \(\text{MM}_{R}\), given an instance \(A,B,t\) of \(\text{MPS}_{R}\) as input, compute \(AB\) and then count the number of non-zero entries in \(AB\) in \(O(n^{2})\) time.
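As a quick illustration, both directions of Lemma 5.12 fit in a few lines; the NumPy sketch below (our own, with names of our choosing) treats AllZeroes as the \(t=0\) case of MPS, and decides MPS with one matrix product followed by an \(O(n^{2})\)-time count of non-zero entries.

```python
import numpy as np

def mps(A, B, t):
    """Decide ||AB||_0 <= t: one multiplication plus an O(n^2) count."""
    return np.count_nonzero(A @ B) <= t

def all_zeroes(A, B):
    """AllZeroes as the special case of MPS with t = 0."""
    return mps(A, B, 0)
```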
|
2301.00136 | Power of Decision Trees with Monotone Queries | In this paper, we initiate the study of the computational power of adaptive and
non-adaptive monotone decision trees - decision trees where each query is a
monotone function on the input bits. In the most general setting, the monotone
decision tree height (or size) can be viewed as a measure of non-monotonicity
of a given Boolean function. We also study the restriction of the model by
restricting (in terms of circuit complexity) the monotone functions that can be
queried at each node. This naturally leads to complexity classes of the form
DT(mon-C) for any circuit complexity class C, where the height of the tree is
O(log n), and the query functions can be computed by monotone circuits in the
class C. In the above context, we prove the following characterizations and
bounds.
For any Boolean function f, we show that the minimum monotone decision tree
height can be exactly characterized (both in the adaptive and non-adaptive
versions of the model) in terms of its alternation (alt(f) is defined as the
maximum number of times that the function value changes, in any chain in the
Boolean lattice). We also characterize the non-adaptive decision tree height
with a natural generalization of certification complexity of a function.
Similarly, we determine the complexity of non-deterministic and randomized
variants of monotone decision trees in terms of alt(f).
We show that DT(mon-C) = C when C contains monotone circuits for the
threshold functions (e.g., if C = TC0). For C = AC0, we are able to show
that any function in AC0 can be computed by a sub-linear height monotone
decision tree with queries having monotone AC0 circuits. To understand the
logarithmic height case in the case of AC0, i.e., DT(mon-AC0), we show that
functions in DT(mon-AC0) have AC0 circuits with few negation gates. | Prashanth Amireddy, Sai Jayasurya, Jayalal Sarma | 2022-12-31T06:35:17Z | http://arxiv.org/abs/2301.00136v1 | # Power of Decision Trees with Monotone Queries
###### Abstract
In this paper, we initiate the study of the computational power of adaptive and non-adaptive monotone decision trees - decision trees where each query is a monotone function on the input bits. In the most general setting, the monotone decision tree height (or size) can be viewed as a _measure of non-monotonicity_ of a given Boolean function. We also study the restriction of the model by restricting (in terms of circuit complexity) the monotone functions that can be queried at each node. This naturally leads to complexity classes of the form \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) for any circuit complexity class \(\mathcal{C}\), where the height of the tree is \(\mathcal{O}(\log n)\), and the query functions can be computed by monotone circuits in the class \(\mathcal{C}\). In the above context, we prove the following characterizations and bounds.
* For any Boolean function \(f\), we show that the minimum monotone decision tree height can be exactly characterized (both in the adaptive and non-adaptive versions of the model) in terms of its _alternation_ (\(\mathsf{alt}(f)\) is defined as the maximum number of times that the function value changes, in any chain in the Boolean lattice). We also characterize the non-adaptive decision tree height with a natural generalization of certification complexity of a function. Similarly, we determine the complexity of non-deterministic and randomized variants of monotone decision trees in terms of \(\mathsf{alt}(f)\).
* We show that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathcal{C}\) when \(\mathcal{C}\) contains monotone circuits for the threshold functions (e.g., if \(\mathcal{C}=\mathsf{TC}^{0}\)). For \(\mathcal{C}=\mathsf{AC}^{0}\), we are able to show that any function in \(\mathsf{AC}^{0}\) can be computed by a sub-linear height monotone decision tree with queries having monotone \(\mathsf{AC}^{0}\) circuits.
* To understand the logarithmic height case in the case of \(\mathsf{AC}^{0}\), i.e., \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\), we show that for any \(f\) (on \(n\) bits) in \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\), and for any positive constant \(\epsilon\leq 1\), there is an \(\mathsf{AC}^{0}\) circuit for \(f\) with \(\mathcal{O}(n^{\epsilon})\) negation gates. En route to our main proofs, we study the monotone variant of the decision list model, prove corresponding characterizations in terms of \(\mathsf{alt}(f)\), and also derive as a consequence that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathsf{DL}(\textit{mon-}\mathcal{C})\) if \(\mathcal{C}\) has appropriate closure properties (where \(\mathsf{DL}(\textit{mon-}\mathcal{C})\) is defined similarly to \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) but for _decision lists_).
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 A Normal Form for MDLs
* 3 Our Tool: Monotone Decomposition of Boolean Functions
* 3.1 Uniqueness of Alternation Decomposition in a Special Case
* 3.2 Constraints on Monotone Components for \(\mathsf{TC}^{0}\) and beyond
* 4 Deterministic Adaptive Monotone Decision Trees
* 4.1 Monotone Decision Trees and Monotone Decision Lists
* 4.2 Characterizing Adaptive Decision Tree Height
* 4.3 Constructing Adaptive MDTs from Negation Limited Circuits
* 5 Deterministic Non-Adaptive Monotone Decision Trees
* 5.1 Monotone Certificate Complexity
* 6 Non-deterministic Monotone Decision Trees
* 6.1 Two equivalent definitions
* 6.2 Characterization using Alternation
* 7 Randomized Monotone Decision Trees
* 8 Monotone Decision Trees with Query Restrictions
* 8.1 Height Constraints on \(\mathsf{DT}(\textit{mon-C})\)
* 8.2 Deterministic MDTs with Query Restrictions: \(\mathsf{DT}(\textit{mon-C})\) vs \(\mathcal{C}\)
* 8.3 Monotone Decision Trees and \(\mathsf{AC}^{0}\)
* 8.4 Randomized MDTs with Query Restrictions
* 9 Discussion and Open Problems
* A Appendix
* A.1 Proof of Monotone Decomposition Lemma (Lemma 3.1)
* A.2 \(\mathsf{AC}^{0}\) Inverter Construction for Sorted Inputs - Adaptation from [18]
## 1 Introduction
The _decision tree_ model is a fundamental abstraction that captures computation appearing in various scenarios, ranging from query based decision making procedures to learning algorithms for
Boolean functions. The model represents the algorithmic steps to compute a Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) as a sequence of branching operations based on queries to the input bits, where the branching depends on the result of each query. It is quite natural to view the branching as a rooted binary tree where the leaves of the tree are labeled with \(0\) or \(1\) to represent the value of the function if the computation reaches that leaf.
The simplest form studied is when the queries are directly to bits of the input [9, 15] - and hence the nodes of a decision tree (except for leaves) are labeled with input variables which it queries. For a Boolean function \(f\), the deterministic decision tree complexity, \(\mathsf{DT}(f)\), is the minimum height of any decision tree computing \(f\). By _height_, we always refer to the maximum number of internal nodes in a path from root to a leaf. The size of the decision tree, which is defined as the number of leaves in the tree is an independently interesting measure of complexity of \(f\), and indeed, since the tree is binary, the size cannot be more than exponential in \(\mathsf{DT}(f)\). Generalization of the model of decision trees in the algorithmic setting has been studied - namely randomized and quantum decision trees (see [9]). Decision trees can be adaptive and non-adaptive depending on whether, in the algorithm, the next query depends on the Boolean result of the previous queries or not. In the interpretation of the tree, this translates to whether the tree queries the same variable at all nodes in the same level.
The (adaptive) decision tree height, \(\mathsf{DT}(f)\), is related to many fundamental complexity measures of Boolean functions. It is known to be polynomially related to the degree of \(f\) over \(\mathbb{R}\), block sensitivity, and certificate complexity (see the survey [9]), and, with the recent resolution of the sensitivity conjecture [14], even to the sensitivity of the Boolean function \(f\). Non-adaptive decision trees are not as powerful.
An important way of generalizing the decision tree model is by allowing stronger queries than the individual bit queries. One of the well-studied models in this direction is that of parity decision trees where each query is a parity of a subset of input bits [16]. Each node in the tree is associated with a subset \(S\subseteq[n]\)1 and the query to the input at the node is the function \(\oplus_{i\in S}x_{i}\), where \(x_{i}\) stands for the \(i^{th}\) bit of \(x\). The model of parity decision trees received a lot of attention due to its connection to a special case of log-rank conjecture known as the XOR-log-rank conjecture [24]. The conjecture, in particular, implies that the non-adaptive (\(\mathsf{DT}_{\oplus}^{\mathsf{na}}(f)\)) and adaptive (\(\mathsf{DT}_{\oplus}(f)\)) parity decision complexity measures of functions are not polynomially related in general2.
Footnote 1: We denote the set \(\{1,2,\ldots,n\}\) by \([n]\).
Footnote 2: If \(\mathsf{supp}(f)=\{S\subseteq[n]\mid\hat{f}(S)\neq 0\}\), \(\mathsf{sps}(f)=|\mathsf{supp}(f)|\) and \(\mathsf{fdim}(f)=\dim(\mathsf{supp}(f))\), then by [24], \(\log\mathsf{sps}(f)/2\leq\mathsf{DT}_{\oplus}(f)\leq\mathsf{fdim}(f)=\mathsf{ DT}_{\oplus}^{\mathsf{na}}(f)\)[12, 19]. The XOR-logrank conjecture [24] states that \(\mathsf{DT}_{\oplus}(f)\leq\mathsf{poly}\left(\log\mathsf{sps}(f)\right)\), and \(\exists f\) for which \(\mathsf{fdim}(f)\) and \(\log(\mathsf{sps}(f))\) are exponentially far apart.
Other well-studied generalizations of the standard decision tree model include _linear decision trees_[23, 10, 21] (where each node queries a linear function of the form \(\sum_{i}\alpha_{i}x_{i}+\beta>0\)) and _algebraic decision trees_[22, 4, 5] (where each node queries the sign of a polynomial evaluation of degree at most \(d\) in terms of the input variables). Polynomial size linear decision trees can compute the knapsack problem, which is \(\mathsf{NP}\)-complete, and the above studies prove exponential size lower bounds for explicit languages. Ben-Asher and Newman [3] studied decision trees where conjunctions and disjunctions of variables are allowed as queries at the internal nodes, and showed lower bounds on the height of such decision trees required to compute the threshold functions.
**Our results:**
We initiate the study of a new generalization of the decision tree model based on allowing more general queries. The most general version of our model allows the algorithm to query arbitrary _monotone_ functions on the input3. We define the deterministic monotone decision tree complexity of a function \(f\), denoted by \(\mathsf{DT}_{m}(f)\) to be the minimum height of any decision tree with monotone queries at each node, that computes \(f\). When the decision tree is non-adaptive (i.e., when query functions do not depend on the result of previous queries) we denote it by \(\mathsf{DT}_{m}^{na}(f)\).
Footnote 3: Indeed, this generalized model is still universal since in normal decision trees, the queries are monotone functions.
\(\mathsf{DT}_{m}\) **and \(\mathsf{DT}_{m}^{na}\) as measures of non-monotonicity:** Monotone decision tree complexity measures can also be interpreted as measures of non-monotonicity of the function \(f\). Our first main result is an exact characterization of these measures in terms of a well-studied measure of non-monotonicity called _alternation_: it connects the monotone decision tree height and the alternation of the function in both the adaptive and the non-adaptive settings. The two measures turn out to be exponentially far apart, similar to what is conjectured in the case of _parity decision trees_.
**Theorem 1.1**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}(f)=\lceil\log(\mathsf{alt}(f)+1)\rceil\), and \(\mathsf{DT}_{m}^{na}(f)=\mathsf{alt}(f)\)._
En route to proving the above theorem, we also relate a similar generalization of a well-studied computational model called decision lists (see Section 2 for a definition). If \(\mathsf{DL}_{m}(f)\) stands for the minimum length of any monotone decision list computing a Boolean function \(f\), then we show that, \(\mathsf{DL}_{m}(f)=\mathsf{alt}(f)+1\). We also provide a natural generalization of certificate complexity of a Boolean function, denoted by \(\mathsf{C}_{m}\) (and its non-adaptive version denoted by \(\mathsf{C}_{m}^{na}\)) and show that for every function \(f\), \(\mathsf{C}_{m}^{na}(f)=\mathsf{DT}_{m}^{na}(f)\) (Proposition 5.2).
**Non-deterministic and randomized monotone decision trees:** We study non-deterministic and randomized monotone decision trees (see Sections 6 and 7) and consider variants of the definitions, and show equivalences and bounds. In particular, we show constant upper bounds for the height of non-deterministic monotone decision trees (Theorem 6.1) and show characterizations for the height of the randomized version in terms of deterministic monotone decision tree complexity, and thus alternation (Theorem 7.1).
**Decision trees with restricted (monotone) queries:** While the above models provide a measure of non-monotonicity, one of the main drawbacks of the above decision tree model is that, the computational model is not succinctly representable. It is natural to see if we can restrict the query functions to circuit complexity classes which allow succinct representation for functions. An immediate direction is to understand the power of the model if the query functions are restricted to circuit complexity classes; studied in Section 8. More formally, we define \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) to be the
class of functions that can be computed by monotone decision trees of height \(\mathcal{O}(\log n)\) where each query function has a monotone circuit in \(\mathcal{C}\), or equivalently all the queries belong to _mon-\(\mathcal{C}\)_.
To justify the bound of \(\mathcal{O}(\log n)\) on the height of monotone decision trees, we show that if we allow the upper bound on height to be asymptotically different from \(\Theta(\log n)\), then the class of functions computed by the model will be different from \(\mathcal{C}\)4. More precisely, if \(\mathsf{DT}^{d(n)}\) denotes the class of functions computed by monotone decision trees of height at most \(d(n)\) (thus \(\mathsf{DT}(\textit{mon-}\mathcal{C})\equiv\mathsf{DT}^{\mathcal{O}(\log n)}( \textit{mon-}\mathcal{C})\)), we show that, for any \(g(n)=o(\log n)\), and \(h(n)=\omega(\log n)\), \(\mathsf{DT}^{g(n)}(\textit{mon-}\mathcal{C})\subsetneq\mathcal{C}\) and \(\mathsf{DT}^{h(n)}(\textit{mon-}\mathcal{C})\nsubseteq\mathcal{C}\). This justifies the question of \(\mathsf{DT}^{\mathcal{O}(\log n)}(\textit{mon-}\mathcal{C})\) vs \(\mathcal{C}\), which we answer (in some cases) in the following theorem.
Footnote 4: We assume that in \(\mathcal{C}\), all the circuits are polynomial sized, and that there is at least one function with \(\Omega(n)\) alternation. This is true for the complexity classes \(\mathsf{AC}^{0},\mathsf{TC}^{0},\mathsf{NC}^{1}\) etc.
**Theorem 1.2**.: _For any circuit complexity class \(\mathcal{C}\) such that \(\textit{mon-}\mathsf{TC}^{0}\subseteq\textit{mon-}\mathcal{C}\), \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathcal{C}\)._
Hence, in particular, \(\mathsf{DT}(\textit{mon-}\mathsf{TC}^{0})=\mathsf{TC}^{0}\). The situation when \(\mathcal{C}\) does not contain \(\mathsf{TC}^{0}\) is less understood. We start by arguing that all functions in \(\mathsf{AC}^{0}\) can be computed by monotone decision trees in sub-linear height. More specifically, for any constant \(r\), \(\mathsf{AC}^{0}\subseteq\mathsf{DT}^{d(n)}(\textit{mon-}\mathsf{AC}^{0})\) when \(d(n)=\Omega\left(\frac{n}{\log^{r}n}\right)\) (Theorem 8.1). It is natural to ask whether the sub-linear height can be improved further. In particular, whether \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\) is equal to \(\mathsf{AC}^{0}\) or not. Towards this, by using a technique from [18], we first show a negation limited circuit for functions in \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\):
**Theorem 1.3**.: _If a Boolean function \(f\) on \(n\) variables is in \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\), then for any positive constant \(\epsilon\leq 1\), there is an \(\mathsf{AC}^{0}\) circuit for \(f\) with \(\mathcal{O}(n^{\epsilon})\) negation gates._
**Remark 1.1**.: _By Theorem 4.3 of [18], we know that the negation function \(\mathsf{Neg}:\{0,1\}^{n}\to\{0,1\}^{n}\) defined as \(\mathsf{Neg}(x)=x\oplus 1^{n}\) cannot be computed by circuits of constant depth and \(\mathcal{O}(\sqrt{n})\) negations. We note that this does not immediately show that \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\neq\mathsf{AC}^{0}\) as the output of \(\mathsf{Neg}\) is not single-bit._
In tight contrast to Theorem 1.3, it can be derived using [18] that if \(f\in\mathsf{AC}^{0}\) with \(\mathsf{alt}(f)=\Omega(n)\), then any \(\mathsf{AC}^{0}\) circuit computing it must have at least \(\Omega(n^{\epsilon})\) negations for some constant \(\epsilon>0\) (see Theorem 8.2). Thus, an asymptotic improvement to this, with respect to the number of negations, would imply \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\neq\mathsf{AC}^{0}\).
En route these main results, we also note that the analogously defined class of functions \(\mathsf{DL}(\textit{mon-}\mathcal{C})\) for decision lists (defined in Section 2) is exactly equal to \(\mathsf{DT}(\textit{mon-}\mathcal{C})\). Defining \(\mathsf{RDT}(\textit{mon-}\mathcal{C})\) similar to \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) but for randomized decision trees, we show \(\mathsf{RDT}(\textit{mon-}\mathcal{C})=\mathsf{DT}(\textit{mon-}\mathcal{C})= \mathsf{DL}(\textit{mon-}\mathcal{C})=\mathcal{C}\) if \(\textit{mon-}\mathsf{TC}^{0}\subseteq\textit{mon-}\mathcal{C}\), and \(\mathsf{DL}(\textit{mon-}\mathsf{AC}^{0})=\mathsf{DT}(\textit{mon-}\mathsf{AC}^ {0})\subseteq\mathsf{RDT}(\textit{mon-}\mathsf{AC}^{0})\subseteq\mathsf{AC}^{0}\).
## 2 Preliminaries
In this section, we define basic terms and notations along with the main monotone decision tree complexity measures. The definitions of non-deterministic, randomized MDTs, and other modifications are deferred to the corresponding sections. Unless mentioned otherwise, Boolean functions discussed in this paper are from \(\{0,1\}^{n}\) to \(\{0,1\}\). For standard definitions of Boolean circuits and related complexity classes, we refer the reader to [15].
**Monotonicity and alternation:** For \(x\neq y\in\{0,1\}^{n}\), we say \(x\prec y\) if \(\forall i\in[n]\), \(x_{i}\leq y_{i}\). A _chain_\(\mathcal{X}\) on \(\{0,1\}^{n}\) is a sequence \(\langle x^{(1)},x^{(2)},\ldots,x^{(\ell-1)},x^{(\ell)}\rangle\) such that \(\forall i\in[\ell],x^{(i)}\in\{0,1\}^{n}\) and \(x^{(1)}\prec x^{(2)}\prec\ldots\prec x^{(\ell)}\). The alternation of a function \(f\) for a chain \(\mathcal{X}\), denoted as \(alt(f,\mathcal{X})\) is the number of bit flips (or alternations) in the sequence \(\langle f(x^{(1)}),f(x^{(2)}),\ldots f(x^{(\ell)})\rangle\). We define the _alternation_ of \(f\) as, \(\mathsf{alt}(f):=\max_{\text{ chain }\mathcal{X}}alt(f,\mathcal{X})\).
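For intuition, \(\mathsf{alt}(f)\) can be computed exactly for small \(n\) by dynamic programming over the hypercube; the brute-force sketch below (our own helper, not from the paper; inputs are encoded as bitmasks) sets \(A[x]\) to the maximum number of alternations of \(f\) along any chain from \(0^{n}\) to \(x\), so that \(\mathsf{alt}(f)=A[1^{n}]\), since any chain refines to a maximal one without losing alternations.

```python
from itertools import combinations

def alternation(f, n):
    """alt(f) for f: {0,1}^n -> {0,1}, with inputs encoded as bitmasks."""
    A = {0: 0}
    for w in range(1, n + 1):                 # increasing Hamming weight
        for ones in combinations(range(n), w):
            x = sum(1 << i for i in ones)
            # a best chain to x extends a best chain to some predecessor
            A[x] = max(A[x ^ (1 << i)] + (f(x) != f(x ^ (1 << i)))
                       for i in ones)
    return A[(1 << n) - 1]

# Parity alternates at every step of every maximal chain: alt = n.
assert alternation(lambda x: bin(x).count("1") % 2, 4) == 4
```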
We say that a Boolean function is _monotone_ if for all \(x,y\in\{0,1\}^{n}\), \(x\prec y\Rightarrow f(x)\leq f(y)\). We say that a Boolean circuit is monotone if all the gates in it compute monotone functions over the respective inputs. For any circuit complexity class \(\mathcal{C}\), we define _mon-\(\mathcal{C}\subseteq\mathcal{C}\)_ as the class of functions which can be computed by using monotone circuits in \(\mathcal{C}\).
**Threshold functions:** For \(x\in\{0,1\}^{n}\), we denote the number of \(1\)'s in \(x\) by \(\mathsf{wt}(x)\). We define the (\(k\)-th) threshold function as, \(\mathsf{Th}_{k}(x)=1\) if \(\mathsf{wt}(x)\geq k\) and \(\mathsf{Th}_{k}(x)=0\) otherwise.
**Monotone decision trees and lists:** We now present our generalizations of the decision tree (and list) model. Further into the paper, we introduce and study more variants by allowing non-adaptivity, randomness, restricted queries etc.
**Definition 2.1** (**Monotone Decision Tree)**.: A _monotone decision tree_\(T\) is a rooted directed binary tree. Each internal node \(v\) is labeled by a monotone function \(f_{v}:\{0,1\}^{n}\rightarrow\{0,1\}\), and each leaf is labeled by a \(0\) or a \(1\). Each internal node has two outgoing edges, one labeled by \(0\) and another by \(1\). A computation of \(T\) on input \(x\in\{0,1\}^{n}\) is the path from the root to one of the leaves that, at each internal vertex \(v\), follows the edge whose label equals the value of \(f_{v}(x)\). The label of the leaf that is reached by the path is the output of the computation. A monotone decision tree \(T\) computes a function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) if and only if on each input \(x\in\{0,1\}^{n}\) the output of \(T\) is equal to \(f(x)\).
The monotone decision tree complexity of \(f\) is the minimum height5 of such a tree computing \(f\). We denote this value by \(\mathsf{DT}_{m}(f)\).
Footnote 5: _Height_ always refers to the max. no. of internal nodes in path from root to any leaf.
**Definition 2.2** (**Monotone Decision List)**.: The _monotone decision list_ model, denoted by \(L=(f_{1},c_{1})(f_{2},c_{2})\ldots(f_{k},c_{k})\), is a series of tuples \((f_{i},c_{i})\) where each \(f_{i}\) is a monotone function on \(n\) variables, and each \(c_{i}\) is a Boolean constant \(0\) or \(1\). Here, each \((f_{i},c_{i})\) is called a node; \(f_{i}\) is the query function of that node and \(c_{i}\) is the value of the node. The last query \(f_{k}\) may often be assumed to be the constant function \(\mathbf{1}\) w.l.o.g. An input \(x\in\{0,1\}^{n}\) is said to _activate6_ the node \((f_{i},c_{i})\) if \(f_{i}(x)=1\) and \(\forall j<i,f_{j}(x)=0\). Here \(L\) is said to represent/compute the following Boolean function \(f_{L}\) defined as: \(f_{L}(x)=c_{i},\text{ where }i\in[k]\) is the unique index such that \(x\) activates the \(i^{th}\) node of \(L\).
Footnote 6: Any input activates exactly one node by this definition.
The monotone decision list complexity of a Boolean function \(f\), denote by \(\mathsf{DL}_{m}(f)\), is the minimum size (i.e, number of nodes) of a monotone decision list computing it.
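A minimal evaluation sketch (our own, with illustrative queries) makes the semantics concrete: scan the nodes in order and return the value of the activated node. The example list computes the two-bit XOR using the two monotone threshold queries \(\mathsf{Th}_{2}\) and \(\mathsf{Th}_{1}\), matching \(\mathsf{alt}(\mathrm{XOR})+1=3\) nodes.

```python
def eval_mdl(L, x):
    """Return the value of the first node whose query passes on x."""
    for f_i, c_i in L:
        if f_i(x):
            return c_i
    raise ValueError("no node activated; make the last query constant 1")

L = [(lambda x: x[0] and x[1], 0),     # Th_2 on two bits
     (lambda x: x[0] or x[1], 1),      # Th_1 on two bits
     (lambda x: True, 0)]              # constant-1 fallback
assert [eval_mdl(L, v) for v in [(0,0), (0,1), (1,0), (1,1)]] == [0, 1, 1, 0]
```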
A version of decision list that has been considered in the literature is when we allow the query functions to be a simple AND of variables; called _monotone term decision lists_[13]. When the query functions are allowed to be general (not necessarily monotone) terms, then they are called _term decision lists_[8] and the class of functions computed by TDLs of size at most \(\mathsf{poly}(n)\) is denoted by \(\mathsf{TDL}\).
### 2.1 A Normal Form for MDLs
We show, by standard arguments, that monotone decision lists can be assumed to have certain properties when the queries are allowed from any reasonably rich class of Boolean functions - in particular, the set of functions from _mon-\(\mathsf{AC}^{0}\)_, _mon-\(\mathsf{TC}^{0}\)_, _mon-\(\mathsf{NC}^{1}\)_ etc.
Property 1 - Alternating constants: We can convert any decision list \(L=(f_{1},c_{1})(f_{2},c_{2})\ldots(f_{k},c_{k})\) computing a Boolean function \(f\) on \(n\) variables to an \(\widetilde{L}=(\widetilde{f}_{1},\widetilde{c_{1}})(\widetilde{f}_{2}, \widetilde{c_{2}})\ldots(\widetilde{f}_{k},\widetilde{c_{k}})\) computing the same function, where the constants \(\widetilde{c_{i}}\) alternate between \(0\) and \(1\).
The idea is to simply club the contiguous nodes with the same \(c_{i}\) into a single node using an OR operation over the corresponding queries (observed, e.g., in Theorem 3.1 in [2]).
Suppose a maximal series of nodes \((f_{i},c_{i}),(f_{i+1},c_{i+1}),\ldots(f_{j},c_{j})\) have the same constant value (i.e., \(c_{i}=c_{i+1}=\cdots=c_{j}\)). We can substitute a node \((f_{i}\lor f_{i+1}\lor\cdots\lor f_{j},c_{i})\) in place of this entire series of nodes. On applying this simplification to all the maximal contiguous runs of nodes with identical constants, we finally get an equivalent monotone decision list with alternating constants.
To observe that this transformation does not affect the output, we argue that the following simplification holds. If \((f_{1},c)(f_{2},c)\) are two consecutive nodes in a decision list, then both of them can be replaced by the single node \((f_{1}\lor f_{2},c)\) to get an equivalent decision list. For this, we note that if on an input \(x\), both \(f_{1}\) and \(f_{2}\) fail (evaluate to \(0\)), then neither of the original two nodes would have been activated, nor is the new node \((f_{1}\lor f_{2},c)\), so the replacement does not alter the output returned by the decision list. On the other hand, say the node \((f_{1},c)\) is the activated node. This means all the queries before \(f_{1}\) would have evaluated to \(0\). Since these nodes remain unaltered, and \(f_{1}\lor f_{2}\) passes on \(x\), the node \((f_{1}\lor f_{2},c)\) is the activated one, and therefore returns \(c\), which is the same value returned by the original decision list. The same argument holds when \((f_{2},c)\) is the node that \(x\) activates in the original decision list.
Property 2 - Forward firing: By forward firing, we mean that on any input \(x\in\{0,1\}^{n}\), if a certain query of a decision list passes (i.e., evaluates to \(1\)), then so do all the queries that follow it.
We claim that the decision list \(\widetilde{L}=(g_{1},c_{1})(g_{2},c_{2})\ldots(g_{k},c_{k})\), where \(g_{i}=\bigvee_{j=1}^{i}f_{j}\), represents \(f\) and has the forward firing property. First, note that since \(g_{i+1}\equiv g_{i}\lor f_{i+1}\), we immediately have that \(g_{i}\Rightarrow g_{i+1}\) holds true. Now to show that \(\widetilde{L}\) is equivalent to \(L\), suppose that on an input \(x\), the \(t^{th}\) node is activated in \(L\). Then note that \(g_{t}(x)=\bigvee_{j=1}^{t}f_{j}(x)=1\), since \(f_{t}(x)=1\) by the definition of \(t\). For \(i<t\), we have \(g_{i}(x)=\bigvee_{j=1}^{i}f_{j}(x)=0\), since each \(f_{j}\) fails for \(j<t\), as \(x\) fails all of the first \(t-1\) queries of \(L\). Therefore, \(x\) indeed activates the \(t^{th}\) node of \(\widetilde{L}\) too, hence \(\widetilde{L}\) on input \(x\) outputs \(c_{t}=f(x)\).
It is easy to note that the procedure for ensuring Property 2 does not disturb Property 1 if the decision list already has it. That is, there exists a decision list with both properties. The above normal forms are invoked in contexts where the \(f_{i}\)'s come from a circuit complexity class that is closed under taking the OR of polynomially many bits; when \(k\) is polynomial in \(n\), the new query functions then also belong to that class (they admit monotone circuits in that class).
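Both transformations are mechanical; the sketch below (our own rendering of this subsection, with illustrative names) merges contiguous equal-constant nodes with an OR and then prefix-ORs the queries, and checks on a toy list that neither step changes the computed function.

```python
def alternating_constants(L):
    """Property 1: OR together contiguous nodes with the same constant."""
    out = []
    for f, c in L:
        if out and out[-1][1] == c:
            g = out[-1][0]
            out[-1] = (lambda x, g=g, f=f: g(x) or f(x), c)
        else:
            out.append((f, c))
    return out

def forward_firing(L):
    """Property 2: replace f_i by g_i = f_1 or ... or f_i."""
    prefix, out = [], []
    for f, c in L:
        prefix = prefix + [f]                 # fresh list per closure
        out.append((lambda x, fs=prefix: any(h(x) for h in fs), c))
    return out

ev = lambda M, x: next(c for f, c in M if f(x))   # MDL semantics
L = [(lambda x: x[0] and x[1], 0),
     (lambda x: x[0], 0),                     # same constant as the previous node
     (lambda x: x[0] or x[1], 1),
     (lambda x: True, 0)]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert ev(L, x) == ev(alternating_constants(L), x) == ev(forward_firing(L), x)
```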
Recalling the definition of _alternation_ of a function from Section 2, we state the following characterization of Boolean functions originally proved in [6].
**Lemma 2.1** (**Characterization of Alternation [6]**).: _For any \(f:\{0,1\}^{n}\to\{0,1\}\) there exists \(k=\mathsf{alt}(f)\) monotone functions \(f_{1},\ldots,f_{k}\) each from \(\{0,1\}^{n}\) to \(\{0,1\}\) such that_
\[f(x)=\begin{cases}\oplus_{i=1}^{k}f_{i}(x)&\text{, if }f(0^{n})=0\\ \lnot\oplus_{i=1}^{k}f_{i}(x)&\text{, if }f(0^{n})=1.\end{cases}\]
## 3 Our Tool: Monotone Decomposition of Boolean Functions
Motivated by the characterization stated in previous section we define a monotone decomposition of a Boolean function as follows. This notion will be helpful while analyzing the monotone decision tree complexity measures.
**Definition 3.1** (**Monotone Decomposition of a Boolean function)**.: For any Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\), a _monotone decomposition_ of \(f\) is a minimal set of monotone functions \(M=\{f_{1},f_{2},\ldots f_{k}\}\) such that \(f=\oplus_{i\in[k]}f_{i}\) - here \(k\) is said to be the length of the decomposition. We call each \(f_{i}\) to be a _monotone component_ in the decomposition.
We consider two variants of monotone decompositions obtained by imposing additional constraints:
Implication property: There exists an ordering of \(M\) such that \(\forall i\in[k-1]\), \(f_{i}\Rightarrow f_{i+1}\) holds. In this
case, the functions are called _boundary functions_ of the decomposition of \(f\). We call monotone decompositions that satisfy this property _boundary decompositions_ of \(f\).
Optimality Property: The set \(M\) is also a minimum-size monotone decomposition of \(f\). That is, there is no smaller set of monotone functions whose parity equals the given function \(f\). We call monotone decompositions that satisfy this property _alternation decompositions_ of the function \(f\).
The proof of the decomposition of Boolean functions into a parity of monotone functions in [6] actually implies a monotone decomposition of length \(\mathsf{alt}(f)\) (or \(\mathsf{alt}(f)+1\) if \(f(0^{n})=1\)) which has the optimality and implication properties. Their proof also implies that if the optimality property holds for a monotone decomposition, then the length of the decomposition must necessarily be equal to \(\mathsf{alt}(f)\) or \(\mathsf{alt}(f)+1\). For this reason, we call decompositions with the optimality property _alternation decompositions_. We now state the following lemma (already proved in [6]) bringing out the details to substantiate the extra properties that we need - we present its proof in Appendix A.1.
**Lemma 3.1** (**Monotone Decomposition Lemma**).: _For any Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) there is a monotone decomposition with implication and optimality properties of length \(\mathsf{alt}(f)\) (if \(f(0^{n})=0\)) and length \(\mathsf{alt}(f)+1\) (if \(f(0^{n})=1\))._
We will often use the fact that a monotone decomposition with the implication property corresponds to a monotone decision list:
**Proposition 3.1**.: _Let \(k\) be an even number7 and \(f_{1}\Rightarrow\cdots\Rightarrow f_{k}\) be Boolean functions. The following functions are equivalent to one another:_
Footnote 7: This is a minor constraint as we can prepend or append constant functions to a monotone decomposition or an MDL.
* \(f_{1}\oplus f_{2}\oplus\cdots\oplus f_{k}\)_,_
* \((f_{1},0)(f_{2},1)\ldots(f_{k},1)(\textbf{1},0)\)_,_
* \(\overline{f_{1}}f_{2}\vee\overline{f_{3}}f_{4}\vee\cdots\vee\overline{f_{k-1 }}f_{k}\)_,_
* \((\textbf{0}\vee\overline{f_{1}})\wedge(f_{2}\vee\overline{f_{3}})\cdots \wedge(f_{k-2}\vee\overline{f_{k-1}})\wedge(f_{k}\vee\overline{\textbf{1}})\)_._
Proof.: Because of the implication property of \(f_{i}\)'s, for any input \(x\), the sequence of bits \(s:=\langle f_{1}(x),f_{2}(x),\ldots,f_{k}(x)\rangle\) is sorted (from \(0\) to \(1\)). All the above four functions compute whether the number of \(1\)'s in the sequence \(s\) is odd. To see this, first check that all the functions evaluate to \(0\) if \(s=1^{k}\). As we keep flipping the last \(1\)-bit of \(s\) to \(0\), the function value (of each of the above four functions) alternates between \(0\) and \(1\).
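Since the implications force the evaluation sequence to be sorted, Proposition 3.1 can be checked by brute force over all sorted sequences; the sketch below (our own check, for even \(k\)) confirms that the XOR, decision-list, DNF, and CNF forms all compute the parity of the number of \(1\)'s in \(s\).

```python
def check_prop_3_1(k):
    assert k % 2 == 0
    for ones in range(k + 1):
        s = [0] * (k - ones) + [1] * ones     # sorted, as the implications force
        xor = ones % 2
        i = next((j for j, b in enumerate(s) if b), None)
        dl = 0 if i is None else i % 2        # list (f_1,0)(f_2,1)...(f_k,1)(1,0)
        dnf = any(not s[2*t] and s[2*t + 1] for t in range(k // 2))
        cnf = (not s[0]) and bool(s[-1]) and all(s[2*t - 1] or not s[2*t]
                                                 for t in range(1, k // 2))
        assert xor == dl == dnf == cnf

check_prop_3_1(6)
```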
### 3.1 Uniqueness of Alternation Decomposition in a Special Case
For a given function, there could be multiple monotone decompositions with implication and optimality properties. For example, consider the function \(f=(x=0^{n/2}1^{n/2})\) for any even \(n>2\). The function value is \(1\) if and only if the input is \(0^{n/2}1^{n/2}\). Clearly, the alternation of this function is \(2\). We will give two different decompositions for this function. One is \(f(x)=\mathsf{Th}_{n/2}(x)\oplus(\mathsf{Th}_{n/2}(x)\wedge(x\neq 0^{n/2}1^{n/2}))\). Another decomposition is \(f(x)=\mathsf{Th}_{n/2+1}(x)\oplus(\mathsf{Th}_{n/2+1}(x)\vee(x=0^{n/2}1^{n/2}))\). It is easy to verify that these two monotone decompositions satisfy the additional properties. However, we do have a unique monotone decomposition with implication and optimality properties in a special case:
**Proposition 3.2**.: _If a Boolean function \(f\) exhibits uniform alternation for all chains of maximal length in the Boolean hypercube, then \(f\) has a unique alternation decomposition with implication property._
Proof.: Suppose that there are two alternation decompositions for \(f\), both having implication property. Denote them by \(\{f_{1},f_{2},\ldots f_{k}\}\) and \(\{f^{\prime}_{1},f^{\prime}_{2},\ldots f^{\prime}_{k}\}\) such that \(f_{1}\Rightarrow f_{2}\Rightarrow\ldots f_{k-1}\Rightarrow f_{k}\) and \(f^{\prime}_{1}\Rightarrow f^{\prime}_{2}\Rightarrow\ldots f^{\prime}_{k-1} \Rightarrow f^{\prime}_{k}\) and \(k=\mathsf{alt}(f)\) (assuming \(f(0^{n})=0\)). We will argue by contradiction that the set of functions must be the same i.e., \(f_{i}=f^{\prime}_{i}\) for all \(i\). Consider the largest \(i\) for which there exists \(x\in\{0,1\}^{n}\) such that \(f_{i}(x)\neq f^{\prime}_{i}(x)\). Without loss of generality, suppose \(f_{i}(x)=1\) and \(f^{\prime}_{i}(x)=0\). Fix such an \(x\) and a chain \(0^{n}=x^{(0)}\prec x^{(1)}\prec x^{(2)}\prec\cdots\prec x^{(n)}=1^{n}\) containing \(x\) such that all the inputs \(y\) before \(x\) in the above chain satisfy \(f_{i}(y)=0\) (so that \(f_{i-1}(y)=0\) as well). By using the optimality property, we have \(f=f_{1}\oplus f_{2}\cdots\oplus f_{k}=f^{\prime}_{1}\oplus f^{\prime}_{2} \cdots\oplus f^{\prime}_{k}\).
If \(i=1\), then the two decompositions do not agree on the input \(x\) as \(f_{i}=f^{\prime}_{i}\) for \(i>1\), which is a contradiction.
If \(i>1\), we must necessarily have that \(f_{i-1}(x)=1\), as otherwise \(1=f_{1}\oplus\cdots\oplus f_{i}=f^{\prime}_{1}\oplus\cdots\oplus f^{\prime}_{i}=0\) (as \(i\) is the largest index such that \(f_{i}\neq f^{\prime}_{i}\) and due to implication property). Therefore, \(f_{i}\) and \(f_{i-1}\) both change their evaluation from \(0\) to \(1\) on changing the input to \(x\) from the input that is just before \(x\) in the chain we considered. This contradicts the fact that this chain has \(k\) alternations.
### 3.2 Constraints on Monotone Components for \(\mathsf{TC}^{0}\) and beyond
We continue the study in this section by imposing complexity constraints on the function \(f\). A natural question to ask is if the monotone components of \(f\) in its monotone decomposition are necessarily harder than \(f\) in terms of circuit complexity classes. We first answer this question for classes that contain monotone circuits for the threshold functions. In this case, we show that we can always find a monotone decomposition where the component functions are in _mon-\(\mathcal{C}\)_.
**Lemma 3.2**.: _If \(\text{mon-}\mathsf{TC}^{0}\subseteq\text{mon-}\mathcal{C}\), then for any \(f\) computed by a circuit in the class \(\mathcal{C}\), there is a monotone decomposition \(f_{1}\oplus f_{2}\oplus\ldots\oplus f_{2n+1}\) with implication property such that each \(f_{i}\) is in mon-\(\mathcal{C}\)._
Proof.: Recall that \(\mathsf{Th}_{k}\) denotes the function \(\mathsf{Th}_{k}(x)=1\) iff the number of \(1\)'s in \(x\) (denoted by \(\mathsf{wt}(x)\)) is at least \(k\). We directly write the decomposition. For each \(1\leq i\leq 2n+1\), we set \(f_{i}(x)=\mathsf{Th}_{k+1}(x)\vee(\mathsf{Th}_{k}(x)\wedge f(x))\) when \(i=2k+1\), and \(f_{i}(x)=\mathsf{Th}_{k}(x)\) when \(i=2k\).
**Correctness:** Let \(w=\mathsf{wt}(x)\). For any \(x\in\{0,1\}^{n}\):

\[\text{for }i<w:\quad\mathsf{Th}_{i}(x)=1\ \text{ and }\ \mathsf{Th}_{i+1}(x)\vee(\mathsf{Th}_{i}(x)\wedge f(x))=1;\]
\[\text{for }i>w:\quad\mathsf{Th}_{i}(x)=0\ \text{ and }\ \mathsf{Th}_{i+1}(x)\vee(\mathsf{Th}_{i}(x)\wedge f(x))=0;\]
\[\text{for }i=w:\quad\mathsf{Th}_{i}(x)=1\ \text{ and }\ \mathsf{Th}_{i+1}(x)\vee(\mathsf{Th}_{i}(x)\wedge f(x))=f(x).\]
Using this we can compute the boundary functions as follows:
\[\forall i,1\leq i\leq 2w:\ f_{i}(x) = 1\text{ and }\forall i,2w+2\leq i\leq 2n+1:\ f_{i}(x)=0\] \[f_{2w+1}(x) = \mathsf{Th}_{w+1}(x)\vee(\mathsf{Th}_{w}(x)\wedge f(x))=f(x)\]
Observing the above evaluations, note that the implication property holds for \(f_{i}\)'s as \(f_{2n+1}\Rightarrow f_{2n}\Rightarrow\cdots\Rightarrow f_{2}\Rightarrow f_{1}\). The expression \(f_{1}(x)\oplus f_{2}(x)\cdots\oplus f_{2n+1}(x)\) evaluates to \(1\oplus 1\ldots 1\oplus f(x)\oplus 0\cdots\oplus 0\) where number of \(1\)'s before \(f(x)\) in the above expression is \(2w\) (which is even). Thus, on a given input \(x\), the decomposition evaluates to \(f(x)\).
**Complexity bound for \(f_{i}\):** Now we will prove that the query functions actually are in _mon-\(\mathcal{C}\)_. The \(f_{i}\)'s for even \(i\), being threshold functions, satisfy this property. We now show that \(f_{2k+1}=\mathsf{Th}_{k+1}\vee(\mathsf{Th}_{k}\wedge f)\) has a monotone circuit in \(\mathcal{C}\) for \(0\leq k\leq n\).
Let \(C\) be a circuit in \(\mathcal{C}\) computing \(f\). We first push down all the negation gates to the input variables in \(C\), by using the fact that \(\neg(\mathsf{Th}_{\ell}(a_{1},a_{2},\ldots a_{m}))=\mathsf{Th}_{m-\ell+1}( \neg a_{1},\neg a_{2},\ldots\neg a_{m})\). This can be done with only a polynomial increase in the size of \(C\), and the same depth. In the resulting circuit, further remove the negations at the variables as follows: if \(\overline{x_{j}}\) appears, replace it with \(\mathsf{Th}_{k}(\{x_{1},x_{2},\ldots x_{n}\}\setminus\{x_{j}\})\). Call the final circuit \(C^{\prime}\). Note that \(C^{\prime}\) does not have any negation gates, and therefore is a monotone circuit in \(\mathcal{C}\). We shall argue that the circuit \(C^{\prime\prime}=\mathsf{Th}_{k+1}\vee(\mathsf{Th}_{k}\wedge C^{\prime})\) computes \(f_{2k+1}\). It can be seen that \(C^{\prime\prime}\) outputs \(0\) for inputs of weight less than \(k\) and outputs \(1\) for inputs of weight more than \(k\), which is exactly what \(f_{2k+1}\) evaluates to on these inputs. Therefore, it suffices to show that \(C^{\prime\prime}\) correctly outputs \(f_{2k+1}(x)=f(x)\) on inputs of weight exactly equal to \(k\). To see this, note that the final transformation \(\overline{x_{j}}=\mathsf{Th}_{k}(\{x_{1},x_{2},\ldots,x_{n}\}\setminus\{x_{j}\})\) holds when \(\mathsf{wt}(x)=k\).
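The decomposition of Lemma 3.2 is easy to test directly; the sketch below (our own, bitmask encoding) builds the \(2n+1\) components \(f_{2k}=\mathsf{Th}_{k}\) and \(f_{2k+1}=\mathsf{Th}_{k+1}\vee(\mathsf{Th}_{k}\wedge f)\) and verifies that their XOR recovers \(f\); each component is monotone because it can differ from a threshold only on a single weight level.

```python
def decompose(f, n):
    """The 2n+1 monotone components of Lemma 3.2 for f on n bits."""
    th = lambda k: (lambda x: bin(x).count("1") >= k)
    comps = []
    for i in range(1, 2 * n + 2):
        k = i // 2
        if i % 2 == 0:
            comps.append(th(k))                              # f_{2k}
        else:
            comps.append(lambda x, k=k: th(k + 1)(x)
                         or (th(k)(x) and f(x)))             # f_{2k+1}
    return comps

f = lambda x: x in (0b010, 0b101)          # an arbitrary 3-bit function
comps = decompose(f, 3)
assert all(sum(c(x) for c in comps) % 2 == f(x) for x in range(8))
```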
By the above lemma, for any \(f\in\mathsf{TC}^{0}\) we have given a decomposition into \(2n+1\) monotone \(\mathsf{TC}^{0}\) functions. However, the monotone decomposition can be far from having the optimality property. If the function has uniform alternation among all chains of length \(n+1\), then the length can be improved to \(\mathsf{alt}(f)\) while keeping the complexity of the monotone components within \(\mathsf{TC}^{0}\), as shown below.
**Monotone decomposition for 'special' functions in \(\mathsf{TC}^{0}\):**
**Proposition 3.3**.: _If \(f\in\mathsf{TC}^{0}\) has uniform alternation across all chains of length \(n+1\), say \(\mathsf{alt}(f)=k\), then there is a monotone decomposition of \(f\) of length \(k\) or \(k+1\) where all the components are in \(\mathsf{TC}^{0}\)._
Proof.: It is worthwhile to recall that the decomposition is unique when all the chains of length \(n+1\) have same alternation (Proposition 3.2). We will give the proof only when \(f(0^{n})=0\) (decomposition into \(k\) monotone \(\mathsf{TC}^{0}\) components). In the other case, we may just take the negation of the decomposition of the complement function (which would still be in \(\mathsf{TC}^{0}\) and have uniform alternation).
Let \(f\equiv f_{1}\oplus f_{2}\oplus\cdots\oplus f_{k}\) where we have \(k=\mathsf{alt}(f)\) and \(\forall i\in[k-1]:\ f_{i}\Rightarrow f_{i+1}\). We exploit the following structure of \(f_{i}\) to obtain a \(\mathsf{TC}^{0}\) circuit computing it. This observation comes from the uniform alternation feature of \(f\).
**Observation 3.1**.: _For any \(1\leq i\leq k\), \(f_{i}(x)=1\) iff there is a chain from \(0^{n}\) to \(x\) with at least \(k-i+1\) alternations._
Since all chains of length \(\mathsf{wt}(x)+1\) from \(0^{n}\) to \(x\) have same alternations, we can take any arbitrary such chain to decide the value of \(f_{i}(x)\). We will be considering the chain of inputs obtained by repeatedly making the leftmost \(1\) to \(0\).
For input \(x\in\{0,1\}^{n}\), we define \(n\) new strings \(y^{(i)}\) for \(1\leq i\leq n\) inductively. The string \(y^{(1)}\) is obtained by making the leftmost \(1\) of \(x\) into a \(0\); and similarly for any other \(i>1\), we obtain \(y^{(i)}\) by making the leftmost \(1\) of \(y^{(i-1)}\) into a \(0\). If the number of \(1\)'s in \(x\) happens to be less than \(i\), then we define \(y^{(i)}\) to be \(0^{n}\). We will now argue that the \(y^{(i)}\)'s can be computed in \(\mathsf{TC}^{0}\). Since Bitcount\(\in\mathsf{TC}^{0}\), we can obtain all the prefix sums \(p^{(i)}=\sum_{j=1}^{i}x_{j}\) (each represented in binary using \(\mathcal{O}(\log n)\) bits) in \(\mathsf{TC}^{0}\). Then, the \(j^{th}\) bit of \(y^{(i)}\) is \(y^{(i)}_{j}:=x_{j}\wedge(p^{(j)}>i)\). This works because at the positions of the first \(i\) \(1\)'s of \(x\), the prefix sum \(p^{(j)}\) is at most \(i\), and it is greater than \(i\) at every later \(1\). Also, this can be implemented in \(\mathsf{TC}^{0}\) as we know that the relation \(>\) is in \(\mathsf{AC}^{0}\).
Once we have obtained the \(y^{(i)}\)'s, we construct the Boolean sequence \(s\) of length \(n+1\), \(s:=f(x)\cdot f(y^{(1)})\cdot f(y^{(2)})\cdots f(y^{(n)})\). This also can be done in \(\mathsf{TC}^{0}\) as we know \(f\in\mathsf{TC}^{0}\) and each of the \(y^{(i)}\)'s can be evaluated in parallel in \(\mathsf{TC}^{0}\). Notice that the sequence \(\langle y^{(n)},y^{(n-1)},\ldots y^{(1)},x\rangle\) repeats the input \(0^{n}\) up to a point and forms a chain from \(0^{n}\) to \(x\) thereafter. Hence by Observation 3.1 and the fact that \(f(0^{n})=0\), we note that \(f_{i}(x)=1\) iff \(alternations(s)\geq k-i+1\). Now to count the number of alternations in the sequence \(s\), we compare every two successive bits of \(s\) and check whether the number of times it changes is at least \(k-i+1\). That is, \(f_{i}\equiv\mathsf{Th}_{k-i+1}(s_{1}\oplus s_{2},s_{2}\oplus s_{3},\ldots,s_{n }\oplus s_{n+1})\). Clearly this can also be done in \(\mathsf{TC}^{0}\) as bit-parity and threshold gates are in \(\mathsf{TC}^{0}\).
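The chain used in the proof is easy to simulate; the sketch below (our own, bitmask encoding) walks from \(x\) down to \(0^{n}\) by clearing the leftmost \(1\), counts the alternations, and applies Observation 3.1. It assumes \(f\) has uniform alternation \(k\) over maximal chains with \(f(0^{n})=0\); parity is such a function, and its boundary functions turn out to be the thresholds \(\mathsf{Th}_{n-i+1}\).

```python
def boundary(f, k, i, x):
    """f_i(x): does the leftmost-1 chain 0^n -> x alternate >= k-i+1 times?"""
    vals, y = [], x
    while True:
        vals.append(f(y))
        if y == 0:
            break
        y ^= 1 << (y.bit_length() - 1)    # clear the leftmost (most significant) 1
    vals.reverse()                         # ordered from 0^n up to x
    return sum(a != b for a, b in zip(vals, vals[1:])) >= k - i + 1

n, par = 4, (lambda x: bin(x).count("1") % 2)   # alt(par) = n, uniformly
for x in range(1 << n):
    assert sum(boundary(par, n, i, x) for i in range(1, n + 1)) % 2 == par(x)
```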
It has to be noted that we only gave a decomposition with monotone functions in \(\mathsf{TC}^{0}\). We do not know whether these functions have monotone \(\mathsf{TC}^{0}\) circuits. The other limitation of course, is that it only works for functions with uniform alternation.
## 4 Deterministic Adaptive Monotone Decision Trees
The model that we study first is the deterministic adaptive monotone decision tree, along with the associated complexity measures \(\mathsf{DT}_{m}\) and \(\mathsf{DL}_{m}\) defined in the Introduction.
### Monotone Decision Trees and Monotone Decision Lists
In this section, we relate complexity of monotone decision trees and monotone decision lists. In particular, we prove the following theorem:
**Theorem 4.1**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}(f)=\lceil\log\mathsf{DL}_{m}(f)\rceil\)8._
Footnote 8: Using the same proof, we also get for any circuit complexity class \(\mathcal{C}\) with appropriate closure properties that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathsf{DL}(\textit{mon-}\mathcal{C})\) – this will be used in Section 8.
Proof.: (\(\geq\)): To show \(\mathsf{DL}_{m}(f)\leq 2^{\mathsf{DT}_{m}(f)}\), note that it suffices to show that from a monotone decision tree we can construct a monotone decision list of the same size. As the number of leaves in the optimal tree is upper bounded by \(2^{\mathsf{DT}_{m}(f)}\), the inequality then follows. This relation itself follows from a known construction given by Blum [7]. Since we need it to work in the context of monotone queries as well, we provide a self-contained argument in the following lemma.
**Lemma 4.1**.: _Let \(T\) be a monotone decision tree of size \(k\) computing a function \(f\) on \(n\) variables. Then there is a monotone decision list of size \(k\) computing \(f\)._
Proof.: For each leaf \(\ell\) in \(T\), denote the path from source/root to \(\ell\) as a string \(s_{\ell}\in\{0,1\}^{*}\) defined as \(s_{\ell}[i]=1\) iff the \(i^{th}\) edge in the path is positive labeled.
Let \(S=s_{1},s_{2},\ldots,s_{k}\) be the ordering of all such strings lexicographically, from larger to smaller. This is equivalent to ordering the leaves of the tree from right to left (under the convention that the left-edges correspond to \(0\) and the right-edges to \(1\) at each query in the decision tree).
We claim that the following monotone decision list \(L\) computes \(f\):
\[L=(t_{1},label(\ell_{1}))(t_{2},label(\ell_{2}))\ldots(t_{k},label(\ell_{k})),\]
where \(label(\ell_{i})\in\{0,1\}\) denotes the label of the leaf \(\ell_{i}\) and \(t_{i}\) is the function obtained by the conjunction of all _passed queries_ along the path from root of \(T\) to \(\ell_{i}\) (see Figure 2).
To show this, let an input \(x\) activate the \(i^{th}\) node of \(L\). We have \(t_{i}(x)=1\). Now it suffices to prove that the computation by \(T\) on \(x\) reaches the leaf \(\ell_{i}\) and hence outputs the same value \(label(\ell_{i})\).
To prove this by contradiction, suppose not; that is, let the leaf reached in the decision tree on input \(x\) be \(\ell_{j}\neq\ell_{i}\). Let the first position in which corresponding paths (strings) \(s_{j}\) and \(s_{i}\) differ be at the \(k\)-th index. Such a position exists as otherwise one string is a prefix of the other, which is impossible as these binary strings 'encode' the paths from root to leaves in the tree.
Case(1) - \(s_{j}[k]=0\) and \(s_{i}[k]=1\): Since the strings are same till the \((k-1)^{th}\) position, so are the corresponding paths in the decision tree. Let the least-common-ancestor node of \(\ell_{i}\) and \(\ell_{j}\) be labeled by a (monotone) query \(f_{1}\) (w.l.o.g). Notice that since \(s_{i}[k]=1\) it makes \(f_{1}\) a passed query along the path to \(\ell_{i}\) and so \(f_{1}\in t_{i}\) (i.e., \(t_{i}\) is an AND of \(f_{1}\) and other functions, by the definition of \(t_{i}\)). Since we have that \(t_{i}(x)=1\), it implies \(f_{1}(x)=1\) which means \(s_{j}[k]=1\), a contradiction. Case(2) - \(s_{j}[k]=1\) and \(s_{i}[k]=0\): This means \(s_{j}>s_{i}\). Note that since the computation in the decision tree reaches \(\ell_{j}\) on \(x\) and \(t_{j}\) is a conjunction of passed queries in the path to \(\ell_{j}\), we must have \(t_{j}(x)=1\). And since \(s_{j}>s_{i}\), it will happen that the query \(t_{j}\) appears before \(t_{i}\) in \(L\). Hence, the node \((t_{i},label(\ell_{i}))\) would not be activated on \(x\), contradicting the definition of \(i\) itself.
Hence, the proof of the lemma follows.
(\(\leq\)): To now show that \(\mathsf{DT}_{m}(f)\) is at most \(\lceil\log\mathsf{DL}_{m}(f)\rceil\), we construct a monotone decision tree of height \(\lceil\log k\rceil\) given an equivalent monotone decision list of length \(k\). This is proved in the following lemma.
**Lemma 4.2**.: _Let \(L=(f_{1},c_{1})(f_{2},c_{2})\ldots(f_{k-1},c_{k-1})(\textbf{1},c_{k})\) be a monotone decision list for the Boolean function \(f\) on \(n\) variables. Then there is a monotone decision tree \(T\) of height \(\lceil\log k\rceil\) computing \(f\)._
Proof.: Without loss of generality, we assume that the decision list \(L\) is in the normal form (Subsection 2.1). On an input \(x\in\{0,1\}^{n}\), there exists exactly one node where the corresponding query and all the queries to its right pass, and all the queries to its left fail. A natural approach to search for this switch point (the activated node) algorithmically by querying the corresponding functions, is to do a binary search and return the \(c_{i}\) corresponding to the index resulting from the search. We shall use this idea to construct a decision tree \(T\) for \(f\).
At the root of \(T\), we query \(f_{\lceil(1+k)/2\rceil}\). If it is zero (go left in the tree), it means the activated node is to the right of it, otherwise (go right in the tree) to its left (or itself). This way we can give a recursive construction for \(T\) (Use the right half of \(L\) on the left branch and the left half of \(L\) on the right branch). To obtain the labels for the leaves of \(T\), we write down \(c_{1},c_{2}\ldots c_{k}\) from the right to left.
As an example, if \(L=(f_{1},c_{1})(f_{2},c_{2})(f_{3},c_{3})(f_{4},c_{4})(f_{5},c_{5})(f_{6},c_{6} )(f_{7},c_{7})(1,c_{8})\), the constructed tree \(T\) would be as shown in Figure 1. It can be seen that the height of \(T\) would be \(\lceil\log k\rceil\), since at each query the nodes of the residual decision list are distributed nearly equally between the left and right sides. Finally, we note that the queries in \(T\) are identical to the queries in \(L\), and hence are monotone.
Figure 1: MDT corresponding to the MDL \((f_{1},c_{1})(f_{2},c_{2})\ldots(f_{7},c_{7})(1,c_{8})\)
Figure 2: MDL corresponding to the above MDT is \((f_{1}.f_{3}.f_{7},c_{8})(f_{1}.f_{3},c_{7})(f_{1}.f_{6},c_{6})(f_{1},c_{5}) \ldots(1,c_{1})\)
This completes the proof of the relation between monotone decision list complexity and monotone decision tree complexity.
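The binary search at the heart of Lemma 4.2 can also be stated directly on a normal-form (forward-firing) list: the pass/fail pattern of the queries on any fixed input is sorted, so the activated node is found with \(\lceil\log k\rceil\) monotone queries. A sketch (our own, with illustrative queries) follows.

```python
def eval_by_binary_search(L, x):
    """Locate the activated node of a forward-firing MDL on input x."""
    lo, hi = 0, len(L) - 1                 # the last query is constant 1
    while lo < hi:
        mid = (lo + hi) // 2
        if L[mid][0](x):                   # query passes: switch point <= mid
            hi = mid
        else:                              # query fails: switch point > mid
            lo = mid + 1
    return L[lo][1]

# Two-bit XOR as the forward-firing list (Th_2, 0)(Th_1, 1)(1, 0).
L = [(lambda x: x[0] and x[1], 0),
     (lambda x: x[0] or x[1], 1),
     (lambda x: True, 0)]
assert [eval_by_binary_search(L, v)
        for v in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```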
### Characterizing Adaptive Decision Tree Height
We will now prove the main theorem of this section which characterizes \(\mathsf{DT}_{m}(f)\) (and en route \(\mathsf{DL}_{m}(f)\) too) in terms of \(\mathsf{alt}(f)\).
**Theorem 4.2**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}(f)=\lceil\log(\mathsf{alt}(f)+1)\rceil\)._
Proof.: It suffices to show that \(\mathsf{DL}_{m}(f)=\mathsf{alt}(f)+1\) because of Theorem 4.1.
\(\mathsf{DL}_{m}(f)\leq\mathsf{alt}(f)+1\): First, suppose \(f(0^{n})=0\). Then by Lemma 3.1 there are \(k=\mathsf{alt}(f)\) many monotone functions such that \(f=f_{1}\oplus f_{2}\oplus\cdots\oplus f_{k}\), and \(\forall i,f_{i}\Rightarrow f_{i+1}\). It can be shown easily that the monotone decision list \((f_{1},0)(f_{2},1)(f_{3},0)(f_{4},1)\ldots(f_{k},1)(\mathbf{1},0)\) or \((f_{1},1)(f_{2},0)(f_{3},1)(f_{4},0)\ldots(f_{k},1)(\mathbf{1},0)\) computes \(f\), when \(k\) is even or odd respectively. On the other hand, if \(f(0^{n})=1\), we have \(f=f_{1}\oplus f_{2}\oplus\cdots\oplus f_{k}\oplus 1\), which gives the monotone decision lists \((f_{1},1)(f_{2},0)\ldots(\mathbf{1},1)\) or \((f_{1},0)(f_{2},1)\ldots(\mathbf{1},1)\) computing \(f\) depending on whether \(k\) is even or odd respectively.

\(\mathsf{DL}_{m}(f)\geq\mathsf{alt}(f)+1\): We claim that if a Boolean function \(f\) on \(n\) variables can be computed by a monotone decision list \(L=(f_{1},c_{1})(f_{2},c_{2})\ldots(f_{\ell},c_{\ell})\) of length \(\ell\), then \(\mathsf{alt}(f)\leq\ell-1\). To show this, it suffices to argue that for any chain \(x^{(1)}\prec x^{(2)}\prec\ldots\prec x^{(s)}\) in the Boolean hypercube, where \(1\leq s\leq n+1\), the number of alternations of the function \(f\) along the chain is at most \(\ell-1\). Consider the sequence \(S\) of length \(s\) where for \(1\leq i\leq s\), the integer \(S[i]\) is the index of the node activated on inputting \(x^{(i)}\) to \(L\). Hence, \(1\leq S[i]\leq\ell\) for every \(i\). By definition of the activated node, observe that for any \(1\leq i<s\), \(f_{S[i]}(x^{(i)})=1\), which implies \(f_{S[i]}(x^{(i+1)})=1\) too, since \(x^{(i)}\prec x^{(i+1)}\). This implies that the node that \(x^{(i+1)}\) activates cannot be after \(f_{S[i]}\). That is, \(S[i+1]\leq S[i]\) for all \(1\leq i<s\). If two consecutive elements in the chain activate the same node, \(L\) outputs the same value on these assignments and hence there is no alternation at that point of the chain. Thus, the number of alternations of the function \(f\) along this chain is upper bounded by the number of pairs \((S[i],S[i+1])\) such that \(S[i]\neq S[i+1]\). Since \(1\leq S[i]\leq\ell\) and \(S[i]\geq S[i+1]\) for all \(i\), we get \(\mathsf{alt}(f)\leq\ell-1\).
### 4.3 Constructing Adaptive MDTs from Negation Limited Circuits
The above theorem provides a characterization of monotone decision tree height in terms of the alternation \(\mathsf{alt}(f)\) of the Boolean function. A classical result by Markov [17] implies that any Boolean function can
be computed by Boolean circuits that use at most \(\lceil\log(\mathsf{alt}(f)+1)\rceil\) many negation gates. Since the number of negation gates in the circuit can be logarithmically smaller, the following result is interesting.
**Theorem 4.3**.: _Let \(f\) be a Boolean function computed by a circuit \(C\) using \(k\) negations. Then there is a monotone decision tree of height \(k+1\) computing \(f\)._
Proof.: Call the bottom-most negation gate in \(C\) as \(g\). The input to \(g\) would be computed by a monotone circuit (call it \(C_{+}\)). The first query in our MDT shall be the function computed by \(C_{+}\). Let \(C_{0}\) and \(C_{1}\) be the circuits obtained upon replacing9\(g\) by the constants \(0\) and \(1\) respectively. Note that these circuits have \(k-1\) negation gates. On the \(C_{+}(x)=0\) branch of the decision tree, we will use \(C_{1}\) and on the other side we use \(C_{0}\) instead of \(C\) to obtain the queries at the second level of the decision tree. We keep applying the same principle as we did to \(C\) until there are no negations left, when we can directly query the entire function to obtain the return value. In any branch of the tree, at each query, the height of the tree increases by \(1\) while the number of negations in the residual circuit decreases by \(1\). Therefore, the height of the constructed monotone decision tree is \(k+1\).
Footnote 9: using \(0\) and \(1\) (respectively) whenever the output of \(g\) is required
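The construction in Theorem 4.3 can be simulated on circuits given as expression trees; the sketch below (our own AST encoding: `('var', i)`, `('const', b)`, `('not', c)`, `('and', c1, c2)`, `('or', c1, c2)`) repeatedly queries the monotone circuit feeding a bottom-most negation and substitutes the complementary constant, so a circuit with \(k\) negations is evaluated with \(k+1\) monotone queries.

```python
def ev(c, x):
    """Evaluate an AST circuit on input x (a tuple of bits)."""
    op = c[0]
    if op == 'var':   return x[c[1]]
    if op == 'const': return c[1]
    if op == 'not':   return not ev(c[1], x)
    a, b = ev(c[1], x), ev(c[2], x)
    return (a and b) if op == 'and' else (a or b)

def has_neg(c):
    if c[0] == 'not':
        return True
    return c[0] in ('and', 'or') and (has_neg(c[1]) or has_neg(c[2]))

def bottom_neg(c):
    """Some negation gate whose input subcircuit is negation-free."""
    if c[0] == 'not':
        return bottom_neg(c[1]) if has_neg(c[1]) else c
    return bottom_neg(c[1]) if has_neg(c[1]) else bottom_neg(c[2])

def subst(c, target, repl):
    """Replace every occurrence of the gate (all get the same answer)."""
    if c == target:
        return repl
    if c[0] in ('not', 'and', 'or'):
        return (c[0],) + tuple(subst(s, target, repl) for s in c[1:])
    return c

def eval_as_mdt(c, x):
    """Each loop iteration is one monotone query, as in Theorem 4.3."""
    while has_neg(c):
        g = bottom_neg(c)                  # gate ('not', monotone subcircuit)
        ans = ev(g[1], x)                  # query the monotone input of g
        c = subst(c, g, ('const', not ans))
    return bool(ev(c, x))                  # one final monotone query

# XOR(x0, x1) with a single negation: (x0 or x1) and not (x0 and x1).
xor = ('and', ('or', ('var', 0), ('var', 1)),
              ('not', ('and', ('var', 0), ('var', 1))))
assert [eval_as_mdt(xor, v) for v in
        [(0, 0), (0, 1), (1, 0), (1, 1)]] == [False, True, True, False]
```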
## 5 Deterministic Non-Adaptive Monotone Decision Trees
We establish a relation between the non-adaptive monotone decision tree and alternation:
**Theorem 5.1**.: _For any Boolean function, \(\mathsf{alt}(f)=k\) if and only if \(f\) can be computed by a non-adaptive monotone decision tree of height \(k\)._
Proof.: To outline the idea used here - for the forward implication we use Lemma 3.1 to design the decision tree. For the reverse implication, since the tree is non-adaptive, the query function at each level of the decision tree will be the same. Using this fact, we argue that any chain must have alternation at most \(k\) with respect to \(f\). To be more detailed,
\((\Rightarrow)\): Suppose \(\mathsf{alt}(f)=k\). Applying Lemma 3.1, we describe a non-adaptive monotone decision tree of height \(k\) as follows. At level \(i\) (the root being level \(1\)), all nodes query the function \(f_{i}\). For labeling the leaves, simply label each leaf by the parity of the results of the queries along the path from the root of the tree to that leaf (complemented, in the case \(f(0^{n})=1\)). This way, the tree is non-adaptive by definition, and on each path the values of all the \(f_{i}(x)\)'s are known, so the value of \(f(x)\) can be computed as described above.
\((\Leftarrow)\): Let \(f\) be computed by decision tree \(T\) of height \(k\). Since \(T\) is non-adaptive, the function queried at a certain height is independent of the earlier queries, let the function queried at height \(i\) be \(f_{i}\). As \(T\) is non-adaptive it is also a complete binary tree. Represent each leaf of the binary tree by a string \(s=s_{1}\ldots s_{k}\in\{0,1\}^{k}\). The string \(s\) can be thought of as the results of the queries
to \(f_{i}\). That is, to reach a given leaf (represented by \(s\)) of \(T\), the query functions \(f_{1},f_{2},\ldots f_{k}\) must evaluate to \(s_{1},s_{2},\ldots s_{k}\) respectively.
Consider any chain of inputs \(\mathcal{X}=x^{(1)}\prec x^{(2)}\prec\ldots\prec x^{(\ell)}\) to \(f\), and let the corresponding leaves reached by \(T\) be represented by the binary strings \(s^{(1)},s^{(2)},\ldots,s^{(\ell)}\) respectively. We will argue that the alternation along this chain is bounded by \(k\). Consider what happens when we move along the chain (say from \(x^{(j)}\) to \(x^{(j+1)}\)). Since the \(f_{i}\)'s are monotone functions and \(x^{(j)}\prec x^{(j+1)}\), we have \(f_{i}(x^{(j)})\leq f_{i}(x^{(j+1)})\) for all \(i\). Thus, \(s^{(j)}\prec s^{(j+1)}\) or \(s^{(j)}=s^{(j+1)}\). As the value of \(f\) is completely determined by the values of the \(f_{i}\)'s, if \(s^{(j)}=s^{(j+1)}\) then \(f(x^{(j)})=f(x^{(j+1)})\). Hence, the number of alternations of \(f\) along \(\mathcal{X}\) is at most the number of strict increases in the sequence \(s^{(1)}\preceq s^{(2)}\preceq\cdots\preceq s^{(\ell)}\), which is at most the length of each string, i.e., \(k\).
The above theorem along with Theorem 4.2 finishes the proof of Theorem 1.1 from the introduction.
**Theorem 1.1**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}(f)=\lceil\log(\mathsf{alt}(f)+1)\rceil\), and \(\mathsf{DT}_{m}^{na}(f)=\mathsf{alt}(f)\)._
### Monotone Certificate Complexity
We now discuss a characterization of non-adaptive monotone decision tree complexity through a generalization of certificate complexity of the function.
**Definition 5.1** (**Monotone certificate complexity)**.: For an input \(x\in\{0,1\}^{n}\) of a Boolean function \(f\), we call a set \(S_{x}=\{f_{1},f_{2}\ldots f_{k}\}\) of monotone Boolean functions on \(n\) variables as a _monotone certificate_ (set) if for any input \(y\in\{0,1\}^{n}\), we have that \([\forall_{i=1}^{k}\ f_{i}(y)=f_{i}(x)]\Rightarrow[f(y)=f(x)]\). The monotone certificate complexity of \(x\), denoted \(\mathsf{C}_{m}(f,x)\) is defined as the minimum size \(|S_{x}|\) of a monotone certificate \(S_{x}\) of \(x\). The monotone certificate complexity of the function \(f\) itself is defined as \(\mathsf{C}_{m}(f):=\max_{x}\{\mathsf{C}_{m}(f,x)\}\).
Interestingly, there is a constant upper bound on the size of a monotone certificate set for any function \(f\): we show that \(\mathsf{C}_{m}(f)\) is at most \(2\).
**Proposition 5.1**.: \(\mathsf{C}_{m}(f)\leq 2\)_._
Proof.: For any input \(x\), let \(I_{x}\) be the set of all variables set to \(1\) in \(x\), and \(J_{x}\) be the set of remaining indices. The set \(S_{x}=\{f_{1}:=\bigwedge I_{x},f_{2}:=\bigvee J_{x}\}\) is a valid monotone certificate for an input \(x\). To show that this is indeed a certificate, we argue that \([(f_{1}(y)=f_{1}(x))\wedge(f_{2}(y)=f_{2}(x))]\Rightarrow[f(y)=f(x)]\). First, note that \(f_{1}(x)=1\) and \(f_{2}(x)=0\) by definition of \(f_{i}\)'s. Now suppose the LHS of the above targeted implication is true. That is, \(f_{1}(y)=1\) and \(f_{2}(y)=0\). Hence, \(\bigwedge I_{x}(y)=1\) and \(\bigvee J_{x}(y)=0\), which means that \(I_{y}\subseteq I_{x}\) and \(J_{y}\subseteq J_{x}\). But note that \((I_{y},J_{y})\) (and \((I_{x},J_{x})\)) is a partition of the total set of variables. Therefore, it must be the case that \(I_{x}=I_{y}\) and \(J_{x}=J_{y}\), meaning \(x=y\). Then the RHS of the desired implication follows trivially.
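As a quick sanity check, the short Python sketch below (our own encoding, with inputs as bitmasks) verifies that the pair from the proof pins down \(x\) exactly, and hence certifies any \(f\) at \(x\).

```python
n = 4

def cert_pair(x):
    """The proof's certificate: AND over the 1-positions of x, OR over its 0-positions."""
    f1 = lambda y: int(all((y >> i) & 1 for i in range(n) if (x >> i) & 1))
    f2 = lambda y: int(any((y >> i) & 1 for i in range(n) if not (x >> i) & 1))
    return f1, f2

for x in range(1 << n):
    f1, f2 = cert_pair(x)
    same = [y for y in range(1 << n) if f1(y) == f1(x) and f2(y) == f2(x)]
    assert same == [x]   # the two answers identify x itself, independent of f
```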
If the monotone certificate sets are constrained to be the same for all inputs, we call such a measure as the non-adaptive monotone certificate complexity of the function \(f\), denoted by \(\mathsf{C}_{m}^{na}(f)\). By a simple argument, we have:
**Proposition 5.2**.: \(\mathsf{C}_{m}^{na}(f)=\mathsf{DT}_{m}^{na}(f)=\mathsf{alt}(f)\)_._
Proof.: (\(\leq\)) Let \(h=\mathsf{DT}_{m}^{na}(f)\) denote the minimum height of a non-adaptive monotone decision tree (call it \(T\)) computing the Boolean function \(f\) on \(n\) variables. Let the query functions from root to any leaf be \(f_{1},f_{2}\ldots f_{h}\) in order, where each \(f_{i}\) is a monotone function on \(n\) variables. We claim that \(S:=\{f_{1},f_{2},\ldots f_{h}\}\) is a monotone certificate for any input. Then immediately, \(\mathsf{C}_{m}^{na}(f)\leq|S|=h=\mathsf{DT}_{m}^{na}(f)\). To observe that \(S\) indeed is a certificate set, consider two inputs \(x\) and \(y\) for which all the functions in \(S\) evaluate identically. Then in the tree \(T\), both these inputs reach the same leaf, and hence return the same value. We obtain \([\forall_{i=1}^{h}\ f_{i}(y)=f_{i}(x)]\Rightarrow[f(y)=f(x)]\), and therefore \(S\) is a monotone certificate for any input.
**(\(\geq\))** Given that \(S=\{f_{1},f_{2}\ldots f_{k}\}\) is a monotone certificate set, our goal is to design a non-adaptive monotone decision tree \(T\) for \(f\) of height \(k\). The idea is to use the same functions that are in \(S\) as queries in \(T\). We fix an arbitrary ordering of \(S\) and query the functions in the same order in all the paths. To get the labels of the leaves of \(T\), we fix the label of a leaf \(\ell\) as \(f(x_{\ell})\), where \(x_{\ell}\) is any arbitrary input that reaches \(\ell\) in its computation by \(T\). If no such \(x_{\ell}\) exists, then \(\ell\) may be labeled arbitrarily. Let some input \(x\) reach a leaf \(\ell\) in \(T\). As \(x_{\ell}\) also reaches \(\ell\), both \(x\) and \(x_{\ell}\) must evaluate identically on all the functions in \(S\). By the certificate property of \(S\), this implies that \(f(x)=f(x_{\ell})\), which is exactly what is returned by \(T\) on input \(x\).
## 6 Non-deterministic Monotone Decision Trees
Inspired by the definitions of a non-deterministic decision tree and certificate complexity of a Boolean function, we study non-deterministic variants of monotone decision trees as well. We define a non-deterministic monotone decision tree as a tree where there can be single or multiple outgoing edges at each internal node, and each edge in the tree is labeled by a monotone function or the negation of a monotone function, and the leaves are labeled 0 or 1. An input is said to be accepted if there is at least one path from the root to a leaf labeled 1 along which all the functions appearing as labels on the edges evaluate to 1.
### Two equivalent definitions
We define two variants of a non-deterministic monotone decision tree. The first variant is the one that we will use in the later part of this section.
* Model1: A tree where there can be any number of outgoing edges at each internal node, and each edge is labeled by a monotone or negation of a monotone function. The leaves are labeled with 0's and 1's.
An input \(x\) to the tree is said to be accepted (same as outputting 1 on \(x\)) iff there exists a path from the root of the tree to any leaf labeled 1 along which all the edges are _active_. Here we say an edge labeled \(f_{i}\) is _active_ on \(x\) when \(f_{i}(x)=1\).
* Model2: A tree where there can be any number of outgoing edges at each internal node, and each edge is labeled 0 or 1. The leaves are labeled with 0's and 1's as usual. In addition to these labels, the internal nodes are also labeled, with monotone functions. An input \(x\) to the tree is said to be accepted (same as outputting 1 on \(x\)) iff there exists a path from root of the tree to any leaf labeled 1 along which all the edges are _active_. Here, for an edge \(e\) outgoing from a node labeled \(f_{i}\), we say \(e\) is _active_ on input \(x\), if the label of \(e\) is equal to the value \(f_{i}(x)\).
We say a tree \(T\) (could be of type Model1 or Model2) is said to compute a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) iff for all \(x\in\{0,1\}^{n}\), \(T\) outputs \(f(x)\) on \(x\).
The models defined above can be shown to be equivalent. That is, we show that if a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) can be computed by a tree of type Model2, then there is also a tree of type Model1 of the same height (up to a constant factor) computing \(f\); and vice-versa.
To prove the first part, let \(T\) be a Model2 tree computing \(f\). We simply re-label \(T\) as follows, to obtain a Model1 tree still computing \(f\). At each internal node labeled \(f_{i}\); if an outgoing edge is labeled 0, replace the label with \(\overline{f_{i}}\), if an outgoing edge is labeled 1, replace the label with \(f_{i}\). Finally, remove all the labelings at the internal nodes. Retain the labels of the leaves. Thus, we get a tree \(T^{\prime}\). Note that an edge in \(T\) is active if and only if the corresponding edge in \(T^{\prime}\) is active for the same input. Therefore, any path that is active in \(T\) remains active in \(T^{\prime}\), meaning \(T\equiv T^{\prime}\equiv f\).
The other direction is proved as the following lemma.
**Lemma 6.1**.: _If a Boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) is computed by a tree \(T\) of type Model1, there is also a tree \(T^{\prime}\) of type Model2 of height twice that of \(T\), computing \(f\)._
Proof.: To construct \(T^{\prime}\), we perform the following operation over all the edges of \(T\) from the top to bottom. Let \(e\) be an outgoing edge in \(T\), from node \(i\) labeled \(f_{i}\) (could be a monotone or negation of a monotone function) to a node/leaf called \(j\). A new node called \(k\) is introduced in \(T^{\prime}\) between the nodes \(i\) and \(j\), and the edges \(i\to k\) and \(k\to j\) are added. The label at the node \(i\) is changed to the constant function \(\mathbf{1}\) from \(f_{i}\), the \(i\to k\) edge is labeled 1, and \(k\) is labeled with the function \(f_{i}\). If \(f_{i}\) is monotone, \(k\to j\) edge is labeled 1, Otherwise, \(k\to j\) edge is labeled 0. By this labeling principle, notice that for any input \(x\), the edge \(i\to k\) is always active and the edge \(k\to j\) is active if and only if \(x\) activates the edge \(e\) in \(T\). Therefore, after the complete construction of \(T^{\prime}\), there is an active path to a leaf if and only if there is an active path to that same (corresponding) leaf in \(T\). Since, the existence of active path to a leaf with label 1 is the criterion of acceptance in either tree, both trees output the same value for the same input, and hence \(T^{\prime}\) computes \(f\). Finally, we
notice that the height of \(T^{\prime}\) is twice the height of \(T\), as each edge in \(T\) produces two edges in \(T^{\prime}\) by our design.
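The relabeling in both directions is mechanical; the Python sketch below (our own tuple encodings for the two models) checks on a toy tree that the Lemma 6.1 transformation preserves acceptance.

```python
from itertools import product

# Model1 node: ('leaf', b) or ('n1', [(g, neg, child), ...]); an edge fires on x iff g(x) != neg.
# Model2 node: ('leaf', b) or ('n2', g, [(bit, child), ...]); an edge fires iff g(x) == bit.
def acc1(t, x):
    if t[0] == 'leaf':
        return t[1] == 1
    return any(g(x) != neg and acc1(c, x) for g, neg, c in t[1])

def acc2(t, x):
    if t[0] == 'leaf':
        return t[1] == 1
    _, g, edges = t
    return any(g(x) == b and acc2(c, x) for b, c in edges)

def to_model2(t):
    """Lemma 6.1: each Model1 edge becomes const-1 node --1--> node labeled g --(0 if negated else 1)-->."""
    if t[0] == 'leaf':
        return t
    edges = [(1, ('n2', g, [(0 if neg else 1, to_model2(c))])) for g, neg, c in t[1]]
    return ('n2', lambda x: 1, edges)

# toy Model1 tree for f = ~x0 x1: a negated edge followed by a monotone edge
T1 = ('n1', [(lambda x: x[0], True, ('n1', [(lambda x: x[1], False, ('leaf', 1))]))])
for x in product((0, 1), repeat=2):
    assert acc1(T1, x) == acc2(to_model2(T1), x) == (x[0] == 0 and x[1] == 1)
```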
### Characterization using Alternation
Analogous to the normal monotone decision trees, we prove a bound on the height and the size as the complexity measures of the non-deterministic monotone decision tree (height denoted by \(\mathsf{DT}_{m}^{n}(f)\)) in the following Theorem.
**Theorem 6.1**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}^{n}(f)\leq 2\) and size of the optimal non-deterministic monotone decision tree is \(\lceil\frac{\mathsf{alt}(f)}{2}\rceil\)._
Proof.: We know that any Boolean function \(f\) has a monotone decomposition of length at most \(k:=\mathsf{alt}(f)+1\) (Lemma 3.1), so let \(f=f_{1}\oplus f_{2}\oplus\cdots\oplus f_{k}\). Due to the implication property of the \(f_{i}\)'s, we have the following equivalent evaluation for \(f\), namely \(f=\overline{f_{1}}f_{2}\vee\overline{f_{3}}f_{4}\vee\ldots\vee\overline{f_{k-1}}f_{k}\)10 (Proposition 3.1). We now construct a non-deterministic monotone decision tree \(T\) of height \(2\) whose size equals the number of "terms" in the above representation of \(f\), which is \(s:=\lceil k/2\rceil\). The root of \(T\) has \(s\) outgoing edges with labels \(\overline{f_{1}},\overline{f_{3}},\ldots,\overline{f_{k-1}}\), one label per edge. Then to each of these children of the root we attach one child, with edge label \(f_{i+1}\) corresponding to the edge \(\overline{f_{i}}\). Finally, all the leaves of the tree are labeled \(1\). There is an active path from the root to a leaf if and only if \(f\) evaluates to \(1\) on that input, thereby showing that \(T\) computes \(f\).
Footnote 10: If \(k\) is odd, we can just take \(f_{1}\) to be the constant function \(\mathbf{0}\).
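Continuing the Python sketch given after Theorem 5.1 (reusing `f`, `n`, `k`, and `fs` from there), the check below verifies the equivalence, asserted via Proposition 3.1, between the \(\oplus\)-form and the \(\vee\)-of-pairs form that the height-\(2\) tree implements. The reordering and padding steps are our own bookkeeping for that sketch's particular decomposition.

```python
# Reorder so the implication chain g_1 => g_2 => ... holds ([a >= k] => ... => [a >= 1]),
# absorb the f(0^n) term into a constant-1 query, and pad with constant 0 if the length is odd.
gs = list(reversed(fs))
if f(0):
    gs.append(lambda x: 1)
if len(gs) % 2:
    gs = [lambda x: 0] + gs

for x in range(1 << n):
    xor_form = sum(g(x) for g in gs) % 2
    or_form = any((1 - gs[i](x)) & gs[i + 1](x) for i in range(0, len(gs), 2))
    assert f(x) == xor_form == int(or_form)
# the OR-of-pairs form is exactly the height-2 tree: root edges ~g_1, ~g_3, ...;
# second-level edges g_2, g_4, ...; every leaf labeled 1.
```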
## 7 Randomized Monotone Decision Trees
We also study randomized monotone decision trees. In this model, in addition to monotone query nodes, random bit choices are also allowed at the internal nodes of the tree; each random-choice node has two outgoing edges to children, one labeled \(0\) and the other labeled \(1\). We say the tree computes a Boolean function \(f\) if for any input \(x\), the probability (over the choice of the settings for the random bit choices in the tree) of the computation reaching a leaf with label \(f(x)\) is at least \(2/3\). By \(\mathsf{DT}_{m}^{r}(f)\), we denote the minimum height of an RMDT computing a Boolean function \(f\). The following theorem implies that randomization does not help when the monotone queries are unrestricted.
**Theorem 7.1**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}^{r}(f)=\mathsf{DT}_{m}(f)=\lceil\log(\mathsf{alt}(f)+1)\rceil\)._
Proof.: Since any monotone decision tree is trivially also a RMDT, we directly have \(\mathsf{DT}_{m}^{r}(f)\leq\mathsf{DT}_{m}(f)\). To prove the reverse inequality, we will construct a circuit for \(f\) with at most \(\mathsf{DT}_{m}^{r}(f)-1\) negation gates. Then by using Theorem 4.3, we get a (deterministic) MDT of height \(\mathsf{DT}_{m}^{r}(f)\) computing \(f\).
Let \(T\) be an RMDT of minimum height \(h=\mathsf{DT}_{m}^{r}(f)\), computing \(f\). Let the number of leaves in \(T\) labeled with \(1\) and \(0\) be \(k\) and \(k^{\prime}\) respectively. We have \(k+k^{\prime}=(\text{total no. of leaves in }T)\leq 2^{h}\). We will give the proof only when \(k\leq k^{\prime}\). Otherwise, we can consider the RMDT obtained by flipping all the leaf labels of \(T\), which would then compute \(\overline{f}\). As \(\mathsf{DT}_{m}(f)=\mathsf{DT}_{m}(\overline{f})\) and \(\mathsf{DT}_{m}^{r}(f)=\mathsf{DT}_{m}^{r}(\overline{f})\)11, the required result follows. So, we may assume \(k\leq k^{\prime}\).
Footnote 11: On flipping the leaf labels in a MDT or RMDT, it computes the complement function.
We will first derive a characterization for the function \(f\), which will be helpful in other proofs too.
Consider a fixed input \(x\) to \(T\). By the definition of RMDT, \(f(x)=1\) iff the probability of the computation of \(T\) (on \(x\)) reaching a \(1\)-labeled leaf is at least half 12. Let us now express this probability in terms of the structure of \(T\).
Footnote 12: Although a stronger bound of \(2/3\) is implied, the bound of \(1/2\) will be simpler to work with.
Let \(\ell_{1},\ell_{2},\ldots\ell_{k}\) be all the \(1\)-labeled leaves, \(r_{1},r_{2},\ldots r_{k}\) be the number of random nodes from root to the corresponding leaf; and let \(c_{i}:=(\bigwedge f_{pi})\wedge(\bigwedge\overline{f_{qi}})\) denote the _characteristic function_ corresponding to \(\ell_{i}\), where the \(f_{pi}\)'s are the monotone queries to be passed and \(f_{qi}\)'s to be failed in the root to \(\ell_{i}\) path. First we calculate the probability that a specific leaf \(\ell_{i}\) is reached upon the computation. Once we get this value, since in any given circumstance, the computation reaches a unique leaf, the above events for various \(i\)'s are mutually exclusive, meaning the desired probability is simply the sum of probabilities that the computation reaches a particular \(\ell_{i}\).
Now, coming to the probability of reaching a particular \(\ell_{i}\): it is zero when \(c_{i}(x)=0\). Indeed, when \(c_{i}(x)=0\), by definition at least one of the monotone functions that is supposed to pass has failed, or at least one that is supposed to fail has passed; either way the computation cannot reach \(\ell_{i}\). Now for the case \(c_{i}(x)=1\), the probability depends solely on \(r_{i}\): as all the intermediate monotone queries support the computation to reach \(\ell_{i}\) (i.e., \(c_{i}(x)=1\)), for each of the random nodes in the path there is exactly one result that will keep the computation on the right path to \(\ell_{i}\). Hence the probability is then equal to \((1/2)^{r_{i}}\). The probabilities in both these cases can be compacted as \((1/2)^{r_{i}}c_{i}(x)\): this is \(0\) when \(c_{i}(x)=0\), and \((1/2)^{r_{i}}\) when \(c_{i}(x)=1\).
Going back to the overall probability, it is equal to the sum \(\sum_{i}(1/2)^{r_{i}}c_{i}(x)\). We thus have the characterization:
\[f(x)=1\ \ \text{iff}\ \ \sum_{i=1}^{k}2^{h-r_{i}}c_{i}(x)\geq 2^{h-1}. \tag{1}\]
Notice that to compute any \(c_{i}\), at most one negation is needed, since \(c_{i}=(\bigwedge f_{pi})\wedge(\neg(\bigvee f_{qi}))\). Now, to construct a circuit for \(f\) with \(h-1\) negations as stated in the beginning, we observe that: since the threshold function is monotone, if we can obtain all the \(c_{i}\)'s using a (multi-output) circuit using at most \(h-1\) negations, we are done. We make use of Fischer's construction [11] for this, where a multi-output circuit using \(\lceil\log(m+1)\rceil\) negations is designed to compute the inverting
function \(I(z_{1},z_{2},\ldots,z_{m})=(\overline{z_{1}},\overline{z_{2}}\ldots,\overline{z_ {m}})\). Taking \(m=k\) and \(z_{i}=\bigvee f_{qi}\); the number of negations used in the circuit would be \(\mathsf{neg}:=\lceil\log(m+1)\rceil=\lceil\log(k+1)\rceil\).
If \(k<k^{\prime}\), then \(\mathsf{neg}=\lceil\log(k+1)\rceil\leq\lceil\log(2^{h-1}-1+1)\rceil=h-1\).
Otherwise, \(k=k^{\prime}=2^{h-1}\). Suppose the right-most leaf in \(T\) is labeled \(1\). Then note that there is no negation in \(c_{k}\) and so, we do not have to negate \(z_{k}\); making the number of inputs used in the Fischer's construction only \(k-1=2^{h-1}-1\). Hence, \(\mathsf{neg}=\lceil\log(2^{h-1}-1+1)\rceil=h-1\). Now, if the right-most leaf in \(T\) is instead labeled \(0\), then we flip all the leaf labels in \(T\) to obtain a RMDT for \(\overline{f}\), which then falls into the above case (of the right-most label being \(1\)).
By Theorem 4.3, we can now, using this circuit, obtain an MDT of height \(h-1+1=h=\mathsf{DT}_{m}^{r}(f)\) computing \(f\). Hence, \(\mathsf{DT}_{m}^{r}(f)=\mathsf{DT}_{m}(f)\).
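The characterization in Equation (1) can also be checked numerically. The Python sketch below uses our own toy tree and encoding, and the weaker \(1/2\)-threshold acceptance criterion from Footnote 12, to compare exact leaf-reaching probabilities against the threshold form.

```python
from itertools import product

# Toy RMDT: ('leaf', b) | ('rand', t0, t1) | ('q', g, t0, t1) with a monotone query g.
T = ('rand',
     ('q', lambda x: x[0],        ('leaf', 0), ('leaf', 1)),
     ('q', lambda x: x[0] & x[1], ('leaf', 0), ('leaf', 1)))
h = 2  # height of T

def p_one(t, x):
    """Exact probability of reaching a 1-labeled leaf on input x."""
    if t[0] == 'leaf':
        return t[1]
    if t[0] == 'rand':
        return 0.5 * p_one(t[1], x) + 0.5 * p_one(t[2], x)
    _, g, t0, t1 = t
    return p_one(t1 if g(x) else t0, x)

# The two 1-labeled leaves each have r_i = 1 random node on their paths, with
# characteristic functions c_1 = x0 and c_2 = x0 & x1, so Equation (1) reads
# 2^(h-1) x0 + 2^(h-1) (x0 & x1) >= 2^(h-1); both sides agree with f(x) = x0.
for x in product((0, 1), repeat=2):
    lhs = 2 ** (h - 1) * x[0] + 2 ** (h - 1) * (x[0] & x[1])
    assert (p_one(T, x) >= 0.5) == (lhs >= 2 ** (h - 1)) == (x[0] == 1)
```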
**A variant of Randomized Monotone Decision Tree Model:** We also study a more powerful variant of the randomized model where each node is allowed to have a multi-set of \(w\) monotone functions associated with it (which we call the query set) and on an input \(x\) to the decision tree, at each node, one of the query functions is chosen uniformly at random from the corresponding query set. Again, we say that the tree computes a Boolean function \(f\) if for any input \(x\), the probability of the computation reaching a leaf with label \(f(x)\) is at least \(2/3\). We denote by \(\mathsf{DT}_{m}^{R,w}(f)\), the minimum height of such a randomized decision tree that computes \(f\). It can be observed that any RMDT can be implemented in this model as well (query sets are of size \(w\)): a monotone query \(f_{i}\) can be replaced with the query set \(\{f_{i},\ldots,f_{i}(w\text{ times})\}\), and a node with a random bit choice by the query set \(\{\mathbf{0},\ldots,\mathbf{0}(w/2\text{ times}),\mathbf{1},\ldots,\mathbf{1}(w /2\text{ times})\}\). This gives \(\mathsf{DT}_{m}^{R,w}(f)\leq\mathsf{DT}_{m}^{r}(f)\) for any even \(w\geq 2\). Even if \(w\) is odd, the relation still holds up to a constant factor. For the other direction, we show the following:
**Theorem 7.2**.: _For any Boolean function \(f\), \(\mathsf{DT}_{m}^{r}(f)\leq(1+k).\mathsf{DT}_{m}^{R,w}(f)\), if \(w=2^{k}\) for an integer \(k\)._
Proof.: Let \(T\) be the tree corresponding to the stronger model with query sets of size \(w=2^{k}\) of minimum height computing \(f\). To transform \(T\) into an RMDT, we perform the following transformation at each internal node from bottom to top.
Let \(u\) be an internal node of \(T\) with left branch going to the sub-tree \(u_{L}\), right branch to \(u_{R}\), and with possible query function at \(u\) being from \(\{q_{1},q_{2},\ldots q_{w}\}\). Consider a complete binary tree \(B\) with nodes in the first \(k\) levels making random queries to \(0\) or \(1\), and the \(2^{k}=w\) nodes in the bottom-most level making corresponding queries to the \(q_{i}\)'s.
We replace the subtree of \(T\) rooted at \(u\) with the following tree: \(B\) is introduced at the top (in place of \(u\)) and each of the bottom-most nodes of \(B\) are branched into (copies of) the sub-trees \(u_{L}\) on left and \(u_{R}\) on right. Clearly, upon applying this procedure for all the internal nodes of \(T\) bottom-up, we get an RMDT.
To justify that this transformation still computes \(f\), it is sufficient to argue that for an input \(x\), due to the transformation corresponding to a node \(u\), the probabilities that \(u_{L}\) and \(u_{R}\) are
taken by the computation remain the same after the transformation. This is indeed true as these quantities are equal to \(|\{q_{i}\ |\ q_{i}(x)=0\}|/w\) and \(|\{q_{i}\ |\ q_{i}(x)=1\}|/w\) respectively, before and after the transformation.
Since each original node is replaced with a complete binary tree \(B\), the height increases by a factor equal to the height of \(B\) (including bottom-most level), i.e, \(1+k\). Hence, \(\mathsf{DT}_{m}^{r}(f)\leq(1+k).\mathsf{DT}_{m}^{R,w}(f)\).
## 8 Monotone Decision Trees with Query Restrictions
In this section, we study the power of monotone decision trees under restricted monotone query functions. We define \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) as the set of Boolean functions (rather, Boolean function families) that admit decision trees of height \(\mathcal{O}(\log n)\), where \(n\) is the number of variables and all the query functions are from _mon-\(\mathcal{C}\)_. We could instead consider polynomial size decision trees, but both formulations turn out to be equivalent for interesting classes \(\mathcal{C}\). Similarly, we define \(\mathsf{DL}(\textit{mon-}\mathcal{C})\) as the set of functions computed by MDLs of polynomial size where the queries are in _mon-\(\mathcal{C}\)_.
We first justify our reason to consider the height bound as \(\mathcal{O}(\log n)\). We show that for any \(h=\omega(\log n)\), there is a function \(f\) on \(n\) variables that has a decision tree of height \(h\) with query functions computed by monotone polynomial sized circuits, but \(f\) cannot be computed by a polynomial size circuit. In contrast, for any \(h=o(\log n)\), if a function \(f\in\mathcal{C}\) on \(n\) variables has alternation \(\Omega(n)\), then \(f\) does not have a decision tree of height \(h\), with query functions computable by monotone circuits in \(\mathcal{C}\). Hence, the question of whether \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) is equal to \(\mathcal{C}\) is well-motivated only when the height is \(\mathcal{O}(\log n)\). With this background, we will then study \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) as defined above.
### Height Constraints on \(\mathsf{DT}(\textit{mon-}\mathcal{C})\)
**Proposition 8.1**.: _For any \(h=\omega(\log n)\), there is a Boolean function \(f\) on \(n\) variables that has a decision tree of height \(h\) with query functions computed by monotone polynomial sized circuits, but \(f\) cannot be computed by polynomial size circuits._
Proof.: We will actually show the existence of a function that has a 'simple decision tree' (all queries are to input variables) of height \(h=\omega(\log n)\) but no polynomial sized circuit. By Shannon's counting argument (see [15]) we know that there exists a function on \(h\) variables which requires a circuit of size \(\Omega(2^{h}/h)\) to compute it. Our function \(f\) would be the same function but with an additional \((n-h)\) many dummy variables. Since \(f\) depends on only \(h\) variables, we can obtain a (non-adaptive) decision tree where the queries are the bits that the function depends on. Its height clearly is \(h\). By definition, any circuit computing \(f\) has size \(\Omega(2^{h}/h)\geq c_{1}.(2^{h}/h)\). 13
Footnote 13: \(c_{1},c_{2},c_{3}\) are some fixed positive constants and the inequalities are asymptotic i.e, for sufficiently large \(n\).
For the sake of contradiction, suppose that \(f\) does have a polynomial sized circuit. It means \(c_{1}.(2^{h}/h)\leq n^{c_{2}}\). Taking logarithms, we get \(c_{3}+h-\log h\leq c_{2}\log n\). Since \(\log h\leq h/2\) for sufficiently large \(h\), we have \(h\leq 2(h-\log h)\leq 2(c_{2}\log n-c_{3})=\mathcal{O}(\log n)\), which contradicts \(h=\omega(\log n)\). Therefore, \(f\) cannot be computed by a polynomial size circuit family.
**Proposition 8.2**.: _Let \(\mathcal{C}\) be any circuit complexity class which contains a function \(f\) with \(\operatorname{\mathsf{alt}}(f)=\Omega(n)\). For any \(h=o(\log n)\), there is a function \(f\in\mathcal{C}\) on \(n\) variables that does not have a decision tree of height \(h\), with query functions computable by monotone circuits in \(\mathcal{C}\)._
Proof.: Let \(f\in\mathcal{C}\) be a function such that \(\operatorname{\mathsf{alt}}(f)=\Omega(n)\). For the sake of contradiction, assume that \(f\) does have a monotone decision tree of height \(h=o(\log n)\). We know that \(\operatorname{\mathsf{alt}}(f)\leq 2^{h}\) where \(h\) is the height of the decision tree (recall Theorem 4.2). Then, \(\operatorname{\mathsf{alt}}(f)\leq 2^{o(\log n)}=o(n)\), contradicting \(\operatorname{\mathsf{alt}}(f)=\Omega(n)\).
### Deterministic MDTs with Query Restrictions: \(\mathsf{DT}(\textit{mon-}\mathcal{C})\) vs \(\mathcal{C}\)
As mentioned in the introduction, we ask: how much can monotone decision tree computation, with query functions computable by monotone circuits in the class \(\mathcal{C}\), simulate general computation in the class \(\mathcal{C}\)? In this direction, we first show that \(\mathsf{DT}(\textit{mon-}\mathcal{C})\subseteq\mathcal{C}\) when \(\mathcal{C}\) has reasonable closure properties.
**Lemma 8.1**.: _For a circuit complexity class \(\mathcal{C}\) closed under polynomially many \(\neg,\wedge,\vee\) operations, \(\mathsf{DT}(\textit{mon-}\mathcal{C})\subseteq\mathcal{C}\)._
Proof.: Since the proof of Theorem 4.1 also establishes that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathsf{DL}(\textit{mon-}\mathcal{C})\) (log-height MDTs and poly-size MDLs are inter-convertible), it suffices to show that \(\mathsf{DL}(\textit{mon-}\mathcal{C})\subseteq\mathcal{C}\). Let a Boolean function \(f\) belong to \(\mathsf{DL}(\textit{mon-}\mathcal{C})\) via the decision list \(L=(f_{1},c_{1})(f_{2},c_{2})\ldots(f_{k},c_{k})\) where \(k=\mathsf{poly}(n)\), and each query function \(f_{i}\) has a (monotone) circuit \(C_{i}\) from the class \(\mathcal{C}\). Using the normal form for decision lists for circuit classes with the above property, we will assume that the \(c_{i}\)'s are alternating, and that the query functions \(f_{i}\) are forward firing, i.e., \(f_{1}\Rightarrow f_{2}\Rightarrow\cdots\Rightarrow f_{k}\). As we can always prepend a \((\mathbf{0},0)\) node at the beginning or append a \((\mathbf{1},1)\) node at the end of \(L\) while still maintaining the normal form, w.l.o.g. we may assume that \(c_{1}=0\) and \(c_{k}=1\) (hence \(k\) is even). Due to the alternating constants property, this means \(c_{2i}=1\) and \(c_{2i-1}=0\).
We know from Proposition 3.1 that the Boolean function \(g:=\overline{f_{1}}f_{2}\vee\overline{f_{3}}f_{4}\vee\cdots\vee\overline{f_{k-1}}f_{k}\) is equivalent to \(f\). Finally, we note that because of the closure properties of \(\mathcal{C}\) and \(k=\mathsf{poly}(n)\), the expression \(g\) can be used to obtain a circuit in \(\mathcal{C}\) computing \(f\).
If the class \(\mathcal{C}\) is rich enough to include monotone circuits for the threshold functions, for example say the class \(\mathsf{TC}^{0}\) itself, then we can actually prove equality: note that the Monotone Decomposition given in Lemma 3.2 can be easily transformed into an MDL with the same functions as queries using Proposition 3.1. Thus, we get \(\mathcal{C}\subseteq\mathsf{DL}(\textit{mon-}\mathcal{C})\), which when combined with the fact that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathsf{DL}(\textit{mon-}\mathcal{C})\) and Lemma 8.1 completes the proof of Theorem 1.2.
**Theorem 1.2**.: _For any circuit complexity class \(\mathcal{C}\) such that \(\text{mon-}\mathsf{TC}^{0}\subseteq\text{mon-}\mathcal{C}\), \(\mathsf{DT}(\text{mon-}\mathcal{C})=\mathcal{C}\)._
### Monotone Decision Trees and \(\mathsf{AC}^{0}\)
We attempt to address the question \(\mathsf{DT}(\text{mon-}\mathsf{AC}^{0})\) vs \(\mathsf{AC}^{0}\). We know that \(\mathsf{DT}(\text{mon-}\mathsf{AC}^{0})=\mathsf{DL}(\text{mon-}\mathsf{AC}^{0})\) is contained in \(\mathsf{AC}^{0}\) by Lemma 8.1. An interesting challenge is to prove or disprove the reverse containment. As a warm-up, we show that \(\mathsf{DT}(\text{mon-}\mathsf{AC}^{0})\) is more powerful than polynomial sized term decision lists (which is a strict subset of \(\mathsf{AC}^{0}\)).
**Proposition 8.3**.: \(\mathsf{DT}(\text{mon-}\mathsf{AC}^{0})\nsubseteq\mathsf{TDL}\)_._
Proof.: We show that \(\mathsf{DL}(\text{mon-}\mathsf{AC}^{0})\nsubseteq\mathsf{TDL}\). Using the construction in Lemma 8.1 we know that all functions in \(\mathsf{TDL}\) have polynomial sized circuits of depth \(4\), since the query functions are depth \(2\) monotone circuits. So, if we can show the existence of a function \(f\) that is in \(\mathsf{DL}(\text{mon-}\mathsf{AC}^{0})\) but has no polynomial sized circuit of depth \(4\), we are done. It is known that for any depth \(d\), there is a monotone function \(f\) which can be computed by a monotone circuit of depth \(d\) but cannot be computed by any polynomial size (even non-monotone) circuits of depth \(d-1\) (see [20]). As the decision list \(L=(f,1)(\mathbf{1},0)\) computes \(f\) and the query function \(f\) has a monotone \(\mathsf{AC}^{0}\) circuit, we have \(f\in\mathsf{DL}(\text{mon-}\mathsf{AC}^{0})\). And \(f\notin\mathsf{TDL}\), because all functions in \(\mathsf{TDL}\) have \(\mathsf{AC}^{0}\) circuits of depth \(4\) and \(f\), by choice, has no polynomial sized circuit of depth \(4\).
Moving towards comparing the class with \(\mathsf{AC}^{0}\), we first apply Propositions 8.214 and 8.1 to \(\mathsf{AC}^{0}\), and conclude that: for any \(g(n)=o(\log n)\) and \(h(n)=\omega(\log n)\), \(\mathsf{DT}^{g(n)}(\text{mon-}\mathsf{AC}^{0})\subsetneq\mathsf{AC}^{0}\) and \(\mathsf{DT}^{h(n)}(\text{mon-}\mathsf{AC}^{0})\nsubseteq\mathsf{AC}^{0}\). In contrast to this, we show that the whole of \(\mathsf{AC}^{0}\) can be computed by monotone decision trees with some sub-linear height. By using a theorem due to Santha and Wilson (see Theorem 4.1 of [18]), which reduces the number of negations in the circuit to \(\frac{n}{\log^{r}n}\), and then applying Theorem 4.3, we show:
Footnote 14: To apply Prop. 8.2, take \(f(x)=x_{1}\overline{x_{2}}\lor x_{3}\overline{x_{4}}\cdots\lor x_{n-1} \overline{x_{n}}\).
**Theorem 8.1**.: _For any constant \(r\), \(\mathsf{AC}^{0}\subseteq\mathsf{DT}^{d(n)}(\text{mon-}\mathsf{AC}^{0})\) where \(d(n)=\Omega\left(\frac{n}{\log^{r}n}\right)\)._
Proof.: We use the following theorem due to Santha and Wilson (see Theorem 4.1 of [18]): for any constant \(r\), every \(\mathsf{AC}^{0}\) circuit family can be translated to a constant depth polynomial size circuit that uses at most \(\mathcal{O}\left(\frac{n}{\log^{r}n}\right)\) negations. We can directly combine this with Theorem 4.3 to get a decision tree of height at most \(\mathcal{O}\left(\frac{n}{\log^{r}n}\right)\), by observing that the queries used in the proof of Theorem 4.3 are monotone sub-circuits15 of the original circuit, and hence, in this case, computable in monotone \(\mathsf{AC}^{0}\).
Footnote 15: By sub-circuit, we mean the circuit obtained by fixing the values of some intermediate gates to \(0\) or \(1\).
**A negation-limited computation of \(\mathsf{DT}(\textit{mon-AC}^{0})\):** We show that the functions in \(\mathsf{DT}(\textit{mon-AC}^{0})\) have \(\mathsf{AC}^{0}\) circuits with 'limited' negation gates.
**Theorem 1.3**.: _If a Boolean function \(f\) on \(n\) variables is in \(\mathsf{DT}(\textit{mon-AC}^{0})\), then for any positive constant \(\epsilon\leq 1\), there is an \(\mathsf{AC}^{0}\) circuit for \(f\) with \(\mathcal{O}(n^{\epsilon})\) negation gates._
Proof.: As \(f\in\mathsf{DL}(\textit{mon-AC}^{0})\), by assuming that the MDL is in normal form and applying Proposition 3.1, we can write \(f=\overline{f_{1}}f_{2}\vee\overline{f_{3}}f_{4}\vee\ldots\overline{f_{\ell-1}}f_{\ell}\), where \(\ell=\mathcal{O}(n^{k})\) for some constant \(k\). In addition, all the \(f_{i}\)'s have monotone \(\mathsf{AC}^{0}\) circuits and \(\forall i\in[\ell-1],f_{i}\Rightarrow f_{i+1}\). Thus, it suffices to produce \(\overline{f_{i}}\) for every odd \(i\in[\ell]\), from \(f_{1},\ldots,f_{\ell}\), using a constant depth polynomial size circuit that uses \(\mathcal{O}(n^{\epsilon})\) negations. Indeed, the trivial circuit uses \(\ell=\mathcal{O}(n^{k})\) negations.
The main observation is that the bits (the outputs of \(f_{i}\) where \(i\) is odd) we need to invert are already in sorted order, since \(\forall i,f_{i}\Rightarrow f_{i+1}\). Let this bit-string be \(s=0^{j}1^{m-j}\), where \(m:=\lceil\ell/2\rceil\). We need to output \(\overline{s}=1^{j}0^{m-j}\).
As proved in Theorem 3.6 of [18], this can be implemented using an iterative construction which uses only \(\mathcal{O}(n^{\epsilon})\) negations. In the proof of Theorem 3.6 in [18], the authors also observe that this part of their construction uses only polynomially many \(\neg\), and (unbounded fan-in) \(\wedge\) and \(\vee\) gates. Hence the final circuit is within \(\mathsf{AC}^{0}\). We present the construction in our context in Appendix A.2 for reference.
In contrast, we note that certain \(\mathsf{AC}^{0}\) circuits require a lot of negations.
**Theorem 8.2** (Theorem 3.2 of [18]).: _For every \(f\in\mathsf{AC}^{0}\) with \(\mathsf{alt}(f)=\Omega(n)\), any \(\mathsf{AC}^{0}\) circuit computing \(f\) will have at least \(\Omega(n^{\epsilon})\) negation gates for some positive constant \(\epsilon\) (that can depend on the circuit)._
Proof.: This follows directly from [18] where the authors show that16, if any Boolean function \(f\) of alternation \(k\) is computed by a circuit \(C\) of depth \(d\), then the number of negations in \(C\) is at least \(d(k+1)^{1/d}-d\). This is \(\Omega(n^{\epsilon})\) as \(d\) is constant and \(k=\Omega(n)\).
Footnote 16: Although this was stated for multi-output functions in [18], it holds for single-output functions as well.
Thus, if Theorem 8.2 can be improved asymptotically for some \(f\in\mathsf{AC}^{0}\) and fixed \(\epsilon\), then we can show that \(\mathsf{DT}(\textit{mon-AC}^{0})\) is strictly contained in \(\mathsf{AC}^{0}\).
**A candidate function for \(\mathsf{DT}(\textit{mon-AC}^{0})\) vs \(\mathsf{AC}^{0}\) question:** We now show that there is a simple function that can be computed by depth two \(\mathsf{AC}^{0}\) circuits, which if shown to be in \(\mathsf{DT}(\textit{mon-AC}^{0})\) will imply that \(\mathsf{DT}(\textit{mon-AC}^{0})=\mathsf{AC}^{0}\). This in particular gives a potential candidate function for the separation of the two classes.
**Theorem 8.3**.: _If the family of functions \(f^{n}(x)=\overline{x_{1}}x_{2}\vee\overline{x_{3}}x_{4}\vee\ldots\overline{x _{n-1}}x_{n}\) is in \(\mathsf{DT}(\textit{mon-AC}^{0})\), then \(\mathsf{DT}(\textit{mon-AC}^{0})=\mathsf{AC}^{0}\)._
Proof.: Suppose that the function \(f^{n}\) is in \(\mathsf{DT}(\textit{mon-AC}^{0})=\mathsf{DL}(\textit{mon-AC}^{0})\). Assuming the MDL is in normal form, by Proposition 3.1, \(f^{n}\) can be expressed in various forms as follows:
\[f^{n} =(f_{1}^{n},0)(f_{2}^{n},1)\ldots(f_{p(n)}^{n},1)(f_{p(n)+1}^{n},0)\] \[=f_{1}^{n}\oplus\cdots\oplus f_{p(n)+1}^{n}\] \[=\overline{f_{1}^{n}}f_{2}^{n}\vee\overline{f_{3}^{n}}f_{4}^{n} \vee\cdots\vee\overline{f_{p(n)-1}^{n}}f_{p(n)}^{n}\] \[=(f_{2}^{n}\vee\overline{f_{3}^{n}})\wedge(f_{4}^{n}\vee\overline {f_{5}^{n}})\wedge\cdots\wedge(f_{p(n)}^{n}\vee\overline{f_{p(n)+1}^{n}}), \tag{2}\]
where for all \(i\), \(f_{i}^{n}\in\textit{mon-AC}^{0}\), and \(f_{i}^{n}\Rightarrow f_{i+1}^{n}\), and \(p(n)\) is even and some polynomial in \(n\). Suppose \(p(n)=n^{c_{1}}\) and each \(f_{i}^{n}\) has a monotone \(\mathsf{AC}^{0}\) circuit of depth \(d\) and size at most \(n^{c_{1}}\) for some constants \(d\geq 1,c_{1}\geq 2\).
Now we will show that \(\mathsf{AC}^{0}\subseteq\mathsf{DT}(\textit{mon-AC}^{0})\). Consider an arbitrary function \(g\in\mathsf{AC}^{0}\) computed by a De-Morgan formula17\(G\) of depth at most \(e\) and size at most \(n^{c_{2}}\) for some constants \(e\) and \(c_{2}\geq 2\). We will show that \(g\) has an MDL \(L_{g}\) of size (in its normal form) at most \(n^{(c_{1}c_{2})^{2e}}\) in which all the queries have monotone circuits of depth at most \(de\) and size at most \(n^{(c_{1}c_{2})^{2e}}\). Then, since \(e,c_{1},c_{2}\) are constants, the size of the monotone circuits and the number of queries are polynomial in \(n\), so \(g\in\mathsf{DL}(\textit{mon-AC}^{0})\).
Footnote 17: \(\mathsf{AC}^{0}\) is equivalent to polynomial size constant depth De-Morgan formulas i.e., Boolean formulas in which the negations are only at input variables. W.l.o.g., we also assume that the AND and OR gates are in alternate levels of the formula.
We show the existence of \(L_{g}\) by induction on \(e\).
**Base case: \(e=1\).** Depth-\(1\) formulas are just an OR or an AND of literals. If \(g=(\bigvee_{i}x_{i})\vee(\bigvee_{j}\overline{x_{j}})\) where the \(x_{i}\)'s and \(x_{j}\)'s are variables, then \(L_{g}=(\bigvee_{i}x_{i},1)(\bigwedge_{j}x_{j},0)(\mathbf{1},1)\) works. The case of an AND root gate is handled similarly. The depth, size, and number of queries in \(L_{g}\) are therefore bounded as expected (assuming \(n\) is not too small).
**Induction step: \(e\geq 2\).** Again, we assume that the root gate is an OR gate: \(g=\bigvee_{i=1}^{s}h_{i}\), where each \(h_{i}\) has an \(\mathsf{AC}^{0}\) formula of depth at most \(e-1\) and size at most \(n^{c_{2}}\), which means by the induction hypothesis, that it has an MDL \(L_{h_{i}}\) of size at most \(n^{(c_{1}c_{2})^{2e-2}}\) such that all its queries have monotone circuits of depth at most \(d(e-1)\) and size at most \(t:=n^{(c_{1}c_{2})^{2e-2}}\) - let us say that \(L_{h_{i}}=\overline{f_{i,1}}f_{i,2}\vee\overline{f_{i,3}}f_{i,4}\vee\cdots \vee\overline{f_{i,t-1}}f_{i,t}\). Then, we have \(g=\bigvee_{i=1}^{s}h_{i}=\bigvee_{i=1}^{s}(\overline{f_{i,1}}f_{i,2}\vee \overline{f_{i,3}}f_{i,4}\vee\cdots\vee\overline{f_{i,t-1}}f_{i,t})\). The trick now is to notice that this expression for \(g\) looks exactly like the Boolean function \(f^{n}\), except the variables are replaced by some monotone functions \(f_{i,j}\)'s, each of which has monotone circuits of depth at most \(d(e-1)\) and size at most \(t\). That is, \(g=f^{st}(f_{1,1},\ldots,f_{1,t},f_{2,1},\ldots,f_{2,t},\ldots,f_{s,1},\ldots,f_ {s,t})\). Now, using the MDL (family) we have for \(f\) from Equation (2), we get an MDL \(L_{g}\) for \(g\) by substituting the variables \(x_{i}\)'s with functions \(f_{i,j}\)'s. The number of queries is \(p(st)=(st)^{c_{1}}\), each query has a monotone formula of depth at most \(d+d(e-1)=de\) and size at most \((st)^{c_{1}}+st.t\leq(st)^{c_{1}+2}\leq(n^{c_{2}}n^{(c_{1}c_{2})^{2e-2}})^{c_{1 }+2}\leq n^{(c_{2}+(c_{1}c_{2})^{2e-2})(c_{1}+2)}\leq n^{(c_{1}c_{2})^{2e}}\) (using \(e,c_{1},c_{2}\geq 2\)).
A similar construction works if the root gate is an AND gate, in which case we can make use of the "CNF form" for \(f\) instead of the "DNF form".
### Randomized MDTs with Query Restrictions
Similar to the deterministic case, when the height is restricted to \(\mathcal{O}(\log n)\), we can define \(\mathsf{RDT}(\textit{mon-}\mathcal{C})\) for a circuit complexity class \(\mathcal{C}\). By using threshold gates to compute the probability bounds, we show that \(\mathsf{RDT}(\textit{mon-}\mathcal{C})=\mathcal{C}\) if \(\textit{mon-}\mathsf{TC}^{0}\subseteq\textit{mon-}\mathcal{C}\). By using a carefully constructed normal form for randomized monotone decision trees we then show that \(\mathsf{RDT}(\textit{mon-}\mathsf{AC}^{0})\subseteq\mathsf{AC}^{0}\).
**Theorem 8.4**.: _For any circuit complexity class \(\mathcal{C}\) such that \(\textit{mon-}\mathsf{TC}^{0}\subseteq\textit{mon-}\mathcal{C}\) and closed under polynomial \(\vee,\wedge,\neg\), we have \(\mathsf{RDT}(\textit{mon-}\mathcal{C})=\mathcal{C}\)._
Proof.: To show \(\mathsf{RDT}(\textit{mon-}\mathcal{C})\subseteq\mathcal{C}\), let \(f:\{0,1\}^{n}\to\{0,1\}\) be any function in \(\mathsf{RDT}(\textit{mon-}\mathcal{C})\). By definition, there is an RMDT called \(T\) with height \(h=\mathcal{O}(\log n)\). Let \(\ell_{1},\ell_{2},\ldots\ell_{k}\) be all the \(1\)-labeled leaves; \(r_{1},r_{2},\ldots r_{k}\) be the number of random nodes from root to the corresponding leaf; and let \(c_{i}:=\bigwedge f_{pi}\bigwedge\overline{f_{qi}}\) denote the characteristic function corresponding to \(\ell_{i}\), where the \(f_{pi}\)'s are the monotone queries to be passed and \(f_{qi}\)'s to be failed in the root-\(\ell_{i}\) path. Notice that since the queries have circuits in \(\mathcal{C}\), so do the \(c_{i}\) functions. By the characterization in the proof of Theorem 7.1 (Equation (1)), we have \(f(x)=1\) iff \(\sum_{i}2^{h-r_{i}}c_{i}(x)\geq 2^{h-1}\).
The RHS may be written as \(\sum_{i}P_{i}(n)c_{i}(x)\geq Q(n)\), where the coefficients \(P_{i}\)'s and \(Q\) are polynomial in \(n\), since \(h=\mathcal{O}(\log n)\).
We have \(f(x)=1\iff\sum_{i}P_{i}(n)c_{i}(x)\geq Q(n)\). As polynomial weighted threshold can be done in \(\mathsf{TC}^{0}\) (and hence has a circuit in \(\mathcal{C}\)) and so can be the \(c_{i}\)'s, we can construct a circuit in class \(\mathcal{C}\) for the task on the RHS, which clearly is the function \(f\) on the LHS.
We therefore have \(\mathsf{RDT}(\textit{mon-}\mathcal{C})\subseteq\mathcal{C}\). Since \(\mathsf{DT}(\textit{mon-}\mathcal{C})\subseteq\mathsf{RDT}(\textit{mon-} \mathcal{C})\), and we already proved that \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathcal{C}\) (Theorem 1.2), we get \(\mathsf{RDT}(\textit{mon-}\mathcal{C})=\mathsf{DT}(\textit{mon-}\mathcal{C})= \mathcal{C}\).
We have a partial result for query restricted randomized MDTs in case of \(\mathsf{AC}^{0}\):
**Theorem 8.5**.: \(\mathsf{RDT}(\textit{mon-}\mathsf{AC}^{0})\subseteq\mathsf{AC}^{0}\)_._
Before describing the proof, we will first obtain normal forms for the Randomized Monotone Decision Tree model. Let any Boolean function \(f\) on \(n\) variables be computed by an RMDT called \(T\). We will modify \(T\) into an RMDT \(T^{\prime}\) so that \(T^{\prime}\) also computes \(f\), and additionally the following properties hold for \(T^{\prime}\).
* **The number of random nodes is the same in all the paths from root of \(T^{\prime}\) to the leaves.** To achieve this, let \(r\) be the maximum number of random nodes along any root to leaf path in \(T\), and let \(\ell\) be a leaf whose path to the root of \(T\) contains \(d(<r)\) many random
nodes. Then we replace the leaf \(\ell\) with a random node and make both its children leaves with the same label as \(\ell\). Now, both the newly added leaves have \(d+1\) random nodes on their root-to-leaf paths. It is important to note that this modification does not change the probability of reaching a correct leaf: if the original leaf \(\ell\) was reached in a computation with some probability, it is with the same probability that some child of \(\ell\) will be reached (it does not matter which, as both carry the same label as \(\ell\)). We perform this operation for all the leaves of \(T\) whose root-to-leaf paths contain fewer than \(r\) random nodes, until no such leaves exist. Thus, in the final tree \(T^{\prime}\), all the paths contain the same number of random nodes (namely \(r\)).
* \(T^{\prime}\) **is a complete binary tree**. If \(T\) already satisfies this property, we are done. Otherwise, suppose \(\ell\) is some leaf of \(T\) whose distance (i.e., number of queries) from the root is \(d<h\), where \(h\) is the height of \(T\). Now consider the following transformation of \(T\). We replace \(\ell\) with the monotone query **1**, and label its left leaf arbitrarily and the right leaf with the label of \(\ell\). The two newly introduced leaves can be seen to be at a distance of \(d+1\) from the root of \(T\). Also, the tree still computes the same function: in any event that the leaf \(\ell\) is reached upon computation on an input in the original \(T\), the computation in the new tree ends at the newly introduced right leaf, whose label is the same as that of \(\ell\). We keep performing this transformation for all the leaves that are at distance less than \(h\) from the root, until finally all leaves are at a distance of \(h\) from the root. Note that both of the above properties can be satisfied simultaneously by applying the two transformations in order: first make the number of random nodes the same in all paths, and only then make all paths the same length.
* **All the random nodes are at the top levels of \(T^{\prime}\).** By this, we mean that in any path from root to a leaf, the first few nodes are random, and the remaining are monotone nodes. For this, we first perform the above two transformations on \(T\), after which say there are \(r\) random nodes in every path and each path length is same as the height \(h\) of \(T\). The idea to achieve this is: instead of making random queries in the process of computation by the monotone queries, we first fix a "randomness" and then proceed with the computation by the monotone queries. We define \(2^{r}\) many deterministic versions of \(T\) as the set \(\{T_{s}\mid s\) is a binary sequence of length \(r\}\). A tree \(T_{s}\) has the same structure as \(T\) in terms of monotone nodes, leaves and their labels. A random node \(v\) is replaced with the monotone function **0** if the \(i^{th}\) bit of \(s\) is 0, and with **1** otherwise, where \(i\) is the number of random nodes in the path from \(v\) to the root of \(T\). The new tree \(T^{\prime}\), whose height will be \(r+h\) is constructed as follows: The top of \(T^{\prime}\) shall be a complete binary MRDT of height \(r\) with all the nodes making random queries. This results in \(2^{r}\) "dangling ends". For all binary sequences \(s\), the dangling end _addressed_ by the binary
sequence \(s\) shall lead to the root of \(T_{s}\). This completes the construction of \(T^{\prime}\). Immediately observe that \(T^{\prime}\) satisfies the desired property of all the random nodes being on the top. Now, to show that \(T^{\prime}\) is equivalent to \(T\), we will argue that for any input \(x\), the probability of reaching \(1\) in \(T\) is equal to that of reaching \(1\) in \(T^{\prime}\). The former is equal to \(\sum_{i}(1/2)^{r}.c_{i}(x)\), where \(i\) runs over all the \(1\)-labeled leaves \(\ell_{1},\ell_{2},\dots\) of \(T\) and the \(c_{i}\)'s are their respective characteristic functions defined in the proof of Theorem 7.1. As there are \(2^{r}\) leaves in \(T^{\prime}\) corresponding to each leaf \(\ell_{i}\) in \(T\), the latter probability is equal to \(\sum_{i}\sum_{s}(1/2)^{r}.c_{i}^{s}(x)\). Here \(c_{i}^{s}\) denotes the characteristic function of the leaf corresponding to \(\ell_{i}\) in the tree \(T_{s}\). To show the equality of these two probabilities, we will show that for any \(i\), \(c_{i}(x)=\sum_{s}c_{i}^{s}(x)\). As the trees \(T_{s}\) are the same as \(T\) with random nodes changed to \(\mathbf{0}\) or \(\mathbf{1}\), observe that for any \(s\), \(c_{i}^{s}(x)\Rightarrow c_{i}(x)\). Now, suppose \(c_{i}^{s}(x)=1\) for some \(s\). In addition to \(c_{i}(x)=1\), this also means that the characteristic product corresponding only to the originally random nodes in the root to \(\ell_{i}\) path of \(T_{s}\) is \(1\). This means that each of the literals in the product is \(1\), which translates to the fact that we can determine whether each of the originally random nodes of \(T_{s}\) were \(\mathbf{0}\) or \(\mathbf{1}\) (since we know the root to \(\ell_{i}\) path). But this uniquely determines an \(s\) due to the definition of \(T_{s}\). Thus at most one of the \(c_{i}^{s}(x)\) would be \(1\). Using this along with \(\forall s,c_{i}^{s}(x)\Rightarrow c_{i}(x)\), we get the desired equation \(c_{i}(x)=\sum_{s}c_{i}^{s}(x)\). Hence \(T^{\prime}\) computes \(f\) too.
Proof of Theorem 8.5.: Let a Boolean function \(f\) on \(n\) variables have an RMDT called \(T\) of height \(h=\mathcal{O}(\log n)\) with all its monotone queries having monotone \(\mathsf{AC}^{0}\) circuits. From the three normal forms proved above, we may assume that the first \(r\) levels of \(T\) are random nodes, and that the queries below all have monotone \(\mathsf{AC}^{0}\) circuits. This is justified, since those normal forms do not increase the tree height beyond \(\mathcal{O}(\log n)\), and the query functions still have monotone \(\mathsf{AC}^{0}\) circuits.
As there are no random queries beyond height \(r\), each sub-tree rooted at a node just below a random node in \(T\) resembles a (deterministic) MDT. Since we have established \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\subseteq\mathsf{AC}^{0}\) (Lemma 8.1), each of these sub-trees computes an \(\mathsf{AC}^{0}\) function, say \(f_{1},f_{2},\dots f_{2^{r}}\). For an input \(x\), recall that the probability of reaching a leaf with label \(f(x)\) in \(T\) is at least \(2/3\); since each sub-tree is deterministic (so a unique leaf is reached in a sub-tree for a given \(x\)) and each sub-tree is selected with probability \(2^{-r}\), at least a \(2/3\) fraction of the sub-trees reach an \(f(x)\)-labeled leaf (correspondingly, \(f_{i}(x)=f(x)\)) on input \(x\). Therefore, \(f\) is essentially a majority function over the \(f_{i}\)'s, with the added advantage that the majority bits are at least a \(2/3\) fraction of the total. Ajtai and Ben-Or [1] proved the existence of an \(\mathsf{AC}^{0}\) circuit that computes majority in such cases. As all the \(2^{r}=\mathsf{poly}(n)\) many functions \(f_{i}\) are also in \(\mathsf{AC}^{0}\), we can obtain an \(\mathsf{AC}^{0}\) circuit for \(f\). Thus, \(f\in\mathsf{AC}^{0}\) and \(\mathsf{RDT}(\textit{mon-}\mathsf{AC}^{0})\subseteq\mathsf{AC}^{0}\).
## 9 Discussion and Open Problems
We explore the power of deterministic MDTs (adaptive and non-adaptive) and MDLs with the most general monotone queries, and establish the relations between the corresponding complexity measures and alternation (Sections 4 and 5). We also introduce NMDTs and RMDTs and study their power (Sections 6 and 7). Exploring the case of restricted queries, we show containments between various circuit complexity classes and the corresponding deterministic and randomized MDT classes (Section 8). By using a construction due to [18], we prove an upper bound on the number of negations needed in \(\mathsf{AC}^{0}\) circuits computing functions in \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\), and justify its role in potentially solving \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})=_{?}\mathsf{AC}^{0}\) (Sub-section 8.3).
The question of \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})=_{?}\mathsf{AC}^{0}\) is one of the main problems left unanswered by us. It is not even known whether a simple function like \(f=\overline{x_{1}}x_{2}\vee\overline{x_{3}}x_{4}\vee\cdots\vee\overline{x_{n-1}}x_{n}\) is in \(\mathsf{DT}(\textit{mon-}\mathsf{AC}^{0})\), or even in \(\mathsf{RDT}(\textit{mon-}\mathsf{AC}^{0})\). In this direction, we first note that the number of negations used in Theorem 1.3 cannot be improved asymptotically, for if it could, then we could show that any function in \(\mathsf{TC}^{0}\) can be computed by a \(\mathsf{TC}^{0}\) circuit with \(o(n^{\epsilon})\) negations (for any \(0<\epsilon<1\)), which contradicts Corollary 3.3(2) of [18]. We note that if the negations bound in Theorem 8.2 can be improved for some function, that shows the separation. If it cannot be improved, that means every function in \(\mathsf{AC}^{0}\) can be computed using an \(\mathsf{AC}^{0}\) circuit with \(\mathcal{O}(n^{\epsilon})\) negation gates for arbitrarily small constant \(\epsilon>0\), which would result in an improvement of Theorem 8.1.
We also comment on an alternative way to define the classes \(\mathsf{DT}(\textit{mon-}\mathcal{C}),\mathsf{RDT}(\textit{mon-}\mathcal{C})\) and \(\mathsf{DL}(\textit{mon-}\mathcal{C})\), where we restrict the queries by imposing that they be monotone but have possibly non-monotone circuits in \(\mathcal{C}\), rather than monotone circuits. This version results in potentially more powerful classes. With very similar proofs, we would still get \(\mathsf{DT}(\textit{mon-}\mathcal{C})=\mathsf{RDT}(\textit{mon-}\mathcal{C})=\mathcal{C}\) when \(\mathcal{C}\supseteq\mathsf{TC}^{0}\). Further, Proposition 3.3 implies that for any function \(f\in\mathsf{TC}^{0}\) with uniform alternation, we get \(f\in\mathsf{DT}^{\lceil\log(\mathsf{alt}(f)+1)\rceil}(\textit{mon-}\mathsf{TC}^{0})\). The effect on our other results is unclear.
2309.14761 | Optimization Techniques for a Physical Model of Human Vocalisation | Mateo Cámara, Zhiyuan Xu, Yisu Zong, José Luis Blanco, Joshua D. Reiss | 2023-09-26T08:45:41Z | http://arxiv.org/abs/2309.14761v1

# Optimization Techniques for a Physical Model of Human Vocalisation
###### Abstract
We present a non-supervised approach to optimize and evaluate the synthesis of non-speech audio effects from a speech production model. We use the Pink Trombone synthesizer as a case study of a simplified production model of the vocal tract to target non-speech human audio signals --yawnings. We selected and optimized the control parameters of the synthesizer to minimize the difference between real and generated audio. We validated the most common optimization techniques reported in the literature and a specifically designed neural network. We evaluated several popular quality metrics as error functions. These include both objective quality metrics and subjective-equivalent metrics. We compared the results in terms of total error and computational demand. Results show that genetic and swarm optimizers outperform least squares algorithms at the cost of executing slower and that specific combinations of optimizers and audio representations offer significantly different results. The proposed methodology could be used in benchmarking other physical models and audio types.
Mateo Cámara, Information Processing & Telecomm. Center, Universidad Politecnica de Madrid, Madrid, Spain ([email protected])

Zhiyuan Xu, Centre for Digital Music, Queen Mary University of London, London, UK ([email protected])

Yisu Zong, Centre for Digital Music, Queen Mary University of London, London, UK ([email protected])

José Luis Blanco, Information Processing & Telecomm. Center, Universidad Politecnica de Madrid, Madrid, Spain ([email protected])

Joshua D. Reiss, Centre for Digital Music, Queen Mary University of London, London, UK ([email protected])
## 1 Introduction
Articulatory synthesis provides a unique opportunity to delve into the mechanics of speech production [1, 2]. Unlike black box models, physical models achieve an _interpretable representation_ of the inner characteristics of the vocal tract. This allows for a deeper understanding of the processes involved in speech production. They also provide precise control of the speech's articulatory, resonance, and phonatory characteristics, such as the position of the tongue, lips, existing constrictions, or nose size; as well as informed control of model parameters. This makes natural-sounding synthetic speech samples less prone to artifacts than other synthesis models. Furthermore, they may produce any type of human sound coming out of the mouth and the nose. Those include sounds that are not words, such as sighs, laughs, yawns, and so on.
These _non-speech sounds_ are becoming increasingly important in today's audiovisual productions and digital interactions. From the sound effects in movies and videogames to the soundscapes in podcasts and audiobooks [3, 4, 5], the ability to generate these sounds has become a critical aspect of producing realistic performances. Analyzing the ability of models to construct these types of sounds is crucial to understanding the limits of such models [6], as well as the underlying complexities of producing naturally sounding audio samples. Answering those questions opens up new possibilities for sound and user-experience designers, video-game developers, and audio production professionals looking for new and innovative ways to create high-quality, realistic, and engaging sound experiences.
Physical models for speech synthesis pose challenges that are extensively reported in the literature. They often include many parameters that are difficult to configure simultaneously to achieve high-quality sounds. Their combined optimization can be demanding, computationally expensive, time-consuming, and challenging to implement in real time. These complications explain the need to improve and optimize the synthesizer.
Our research approaches the articulatory parameters from a black-box point of view. We optimize synthesizers without paying specific attention to what each parameter represents, maximizing objective similarity by minimizing the difference between a target signal and the synthesized signal. This ensures superior generalization capabilities for the proposed method and makes its results valuable in other contexts.
In this contribution, we look at the physical model known as the _Pink Trombone (PT)1_. This is a simplified version of the vocal tract that uses a small set of fundamental parameters to control the shape and movements of the articulators during speech production [7]. We fixed its articulatory bounds to focus on sounds that a human could physically produce, and used these to optimize the PT and compare its result with human audio samples.
Footnote 1: [https://dood.al/pinktrombone/](https://dood.al/pinktrombone/)
We conduct a case study using synthetic, sustained, and yawning sounds to understand its capabilities and limitations. We test different black-box strategies to predict the synthesizer control parameters, including well-known _optimization techniques_ and _Deep Neural Networks_, trained on a set of PT synthetic samples. For experimentation, we use sound files generated by the PT, as well as audio clips downloaded from the Freesound platform [8].
These experiments lay the foundations for studies on articulatory and production models with multiple parameters and different
audio types. The dataset, test sounds, and algorithms are available online1. Our goals for this contribution are the following:
Footnote 1: [https://www.cs.org/](https://www.cs.org/)
* _Determine if PT parameters can be accurately predicted exclusively from acoustic features_. We optimize the synthesizer control variables from audio samples as a black box.
* _Determine the best optimization technique for articulatory variables_. We evaluate how different optimizers perform on increasingly challenging sounds.
* _Determine the error metric that yields the most satisfactory outcome_. We benchmark different techniques against standard error metrics and acoustic parameterizations of audio files.
The remainder of this paper is organized as follows. Sec. 2 expands on the optimization techniques and the parameterizations reported in the literature. Sec. 3 describes the experiments covered to meet our objectives. Sec. 4 analyzes and discusses the results obtained, and Sec. 5 concludes the paper.
## 2 Background
For sound-matching optimization, one may focus on the control parameters of the synthesizer, the input acoustic features extracted from the audio, and the process that leads to optimization. All these aspects provide insights into the methodologies and objectives of various optimization studies in synthesizers. Fig. 1 depicts the overall schematic of the optimization process.
### Optimization Methods
Considering the complexity of sound synthesizers, there is a need for reliable optimization techniques. Numerous optimization methods have been investigated in terms of optimizing parameters for physical models or traditional synthesizers. The related work can be organized into two main categories:
Search-based Methods: these have been widely applied to physical models in audio synthesis due to their ability to handle non-differentiable, non-linear, and non-convex optimization problems. They are universal and regard the synthesizer as a black-box model, focusing solely on parameter-space optimization. Standard approaches include Evolutionary Algorithms (EA), such as Evolution Strategies [9], the Genetic Algorithm (GA) [10], or Particle Swarm Optimization (PSO) [11]. Other methods, including the Hill Climber [12], the Levenberg-Marquardt Algorithm [13], and the Nelder-Mead Method [14], are also considered.
Model-based Methods: machine learning (ML) models have become mainstream for synthesizer parameter estimation in recent years. They learn the mapping between the parameters and audio features directly from data. In [15], the authors used a strided Convolutional Neural Network (CNN) to predict the parameters of a subtractive synthesizer. Recent work proposed differentiable digital signal processing (DDSP) [16] and integrated an additive synthesizer with a filtered noise synthesizer into an end-to-end deep learning framework. These allow direct gradient-descent optimization. DDSP is now widely utilized for parameter estimation [17], despite its need for a precise, differentiable reproduction of the target synthesizer, which poses difficulties.
Each approach has its own benefits and limitations, leading to ongoing discussions. In [12], authors compared sound-matching performance on a VST synthesizer using two search strategies and three neural network methods. Results indicated that search methods are limited by their computational cost, and modeling methods are restricted to the inductive bias of model structure and data availability. We tested these limitations for the PT, including simple speech and non-speech vocalisations to evaluate the performance of the optimized model parameters to reproduce sounds.
### Parameter Selection
The control parameters of the synthesizer and the input parameters for the optimizer largely depend on the synthesis technique and the desired outcomes, in accordance with Fig. 1. We focus on the PT control parameters -see Table 1. Physical models may alternatively use local constrictions to describe the configuration required for the vocal tract to produce a certain sound. The PT can actually operate on those as well. Nonetheless, we are interested in the primary ones.
Furthermore, inputs to the optimizer must represent the acoustic content of the audio samples so that the model may produce accurate outputs. For this task, we shall look at acoustic features.
### Acoustic Features Extraction
Various acoustic features have been used in synthesizer optimization studies to evaluate and quantify the quality of the synthesized sounds and to guide the optimization process. Spectral features are the most common, but finding the best metric with good perceptual consistency is still an open question [18]. In [19], the authors focus on the spectral norm error; [10] used the spectral norm plus a spectral centroid error extracted from the short-time Fourier transform (STFT) of each frame; [9] used a relative spectral error, computed by summing normalized differences between the frequency components of two spectra; and [20] combined the least-squares error of the STFTs of two sounds with a perceptual error applying a narrow-band masking curve. On error computation, [12] used the Euclidean distance between Mel-Frequency Cepstral Coefficients (MFCCs), while [16] used a deep representation extracted from MFCCs, plus a multiscale spectral loss and a perceptual loss. In [15], the following features were compared as Deep Neural Network (DNN) inputs: a set of spectral features [21], the STFT, and a deep representation extracted by a CNN from the raw signal. Results showed that the STFT and deep representations are more representative than handcrafted features. Our aim now is to identify suitable ones for the PT parameter optimization.
Figure 1: Schematic on the optimization process.
## 3 Experimentation
We designed our experiments to focus on three specific characteristics that are relevant to the acoustic-to-articulatory inversion: the _optimizer_ used to predict control variables, the _audio representation_ to compute the difference between original and synthesized audio, and the _signal complexity_.
### Pink Trombone Fundamentals
The PT is a simple vocal tract model that can be interacted with through a web interface. It is a Kelly-Lochbaum (KL) type model whose technical details can be found in [22]. In our black-box case study, we focused on the number of parameters to be used and their bounds. Table 1 summarizes these, which correspond to physical attributes of the vocal tract. We deliberately decoupled our method from the human meaning of these parameters so that it remains useful for any synthesizer.
### Signal complexity
Signal complexity refers to the challenges we pose to the optimizers to predict the exact parameters. In that sense, we consider three independent characteristics of the signal. First, the _origin of the audio file_: audio generated by a speech synthesizer or a person. Second, _variations over time_: sustained notes or dynamic audio (such as a yawn). Third, _number of variables to optimize_: related to the characteristics of the synthesizer. Hereafter we enumerate all experiments conducted from the least to the most complex.
* PT generated sounds for which:
* One of the control parameters is unknown.
* All of the control parameters are unknown.
* Gaussian white noise is added. This evaluates the robustness of optimizers dealing with non-ideal signals.
* Control parameters vary over time.
* Audio clips containing:
* Sustained vowel sounds.
* Yawns.
### Audio Representation and Quality Assessment
#### 3.3.1 Representations focusing on spectral difference
To minimize the difference between the target and reconstructed sound, we focused on the spectral features of the audio signals. The following list includes all the transformations evaluated:
* _STFT_. We used a window size of 1024 samples with 2x overlap.
* _Multiscale spectrogram_. The window sizes were {64, 128, 256, 512, 1024}, with a 75% overlap.
* _MEL-spectrogram_. We used 128 filters in the MEL bank up to a maximum frequency of 8 kHz.
* _MFCCs_. We took 20 cepstral coefficients from the MEL-spectrograms.
Computations were performed in Python 3.9, using the AuraLoss library [23]. The Mean Absolute Error (MAE) between the representations of the input and reconstructed audio was used as the error function.
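As a concrete illustration, the multiscale MAE can be assembled with plain PyTorch. The sketch below is a minimal stand-in rather than the released code: `stft_mag` and `multiscale_mae` are hypothetical names, and the AuraLoss library provides equivalent, more complete loss implementations.

```python
import torch

def stft_mag(x: torch.Tensor, win: int) -> torch.Tensor:
    """Magnitude STFT with 75% overlap (hop = win // 4)."""
    return torch.stft(x, n_fft=win, hop_length=win // 4,
                      window=torch.hann_window(win),
                      return_complex=True).abs()

def multiscale_mae(target: torch.Tensor, synth: torch.Tensor) -> torch.Tensor:
    """MAE between the multiscale spectrograms of two equal-length signals."""
    scales = [64, 128, 256, 512, 1024]
    losses = [torch.mean(torch.abs(stft_mag(target, w) - stft_mag(synth, w)))
              for w in scales]
    return torch.stack(losses).mean()

target, synth = torch.randn(48_000), torch.randn(48_000)  # 1 s at 48 kHz
print(multiscale_mae(target, synth))
```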
#### 3.3.2 Perceptual metrics
In addition to MAE, we also computed a set of perceptual quality and intelligibility metrics. These metrics were not used as error functions in the optimization process. The findings may be representative of the perceptual similarity between sounds. However, we encourage readers to listen to the results we posted online. The following full-reference metrics were analyzed (a small scoring sketch follows the list):
* PESQ: Perceptual Evaluation of Speech Quality [24].
* PEAQ: Perceptual Evaluation of Audio Quality [25].
* ViSQOL: Virtual Speech Quality Objective Listener [26].
* STOI: Short-Time Objective Intelligibility [27].
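For reference, PESQ and STOI have widely used Python wrappers (`pesq` and `pystoi`); the sketch below shows how such scores might be collected for a reconstruction. The function name `perceptual_scores` is illustrative, PEAQ and ViSQOL require external tools, and PESQ only accepts 8 or 16 kHz input, so the 48 kHz material must be resampled first.

```python
import numpy as np
from pesq import pesq      # pip package: pesq
from pystoi import stoi    # pip package: pystoi

def perceptual_scores(ref: np.ndarray, deg: np.ndarray, fs: int) -> dict:
    """Full-reference scores of a degraded signal against its reference.
    The 'wb' (wideband) PESQ mode expects fs = 16000."""
    return {
        "pesq": pesq(fs, ref, deg, "wb"),            # MOS-like scale, 1-5
        "stoi": stoi(ref, deg, fs, extended=False),  # intelligibility, 0-1
    }
```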
### Selected Optimizers
We used _optimization algorithms_ and a _CNN_ to predict the control parameters of the synthesizer. We fed the algorithms with the MAE between the original and the synthesized signal. Hereafter we briefly introduce the selected optimization algorithms, which we have evaluated in terms of computational cost and reconstruction error.
Genetic Algorithm (GA): is an optimization technique inspired by natural selection and genetics [28]. The candidate solutions are defined by a set of genes. In every generation (a loop over all candidates), the genes can randomly change (mutation), combine with those of other candidates (crossover), and be selected (selection) to search for optimal solutions in the solution space. The fitness function seeks to minimize the difference between the input and output signals.
We used 32 bits to define the genes, a crossover rate of 0.9, a mutation rate of 0.03, and a population of 10 individuals.
Particle Swarm Optimization (PSO): is a nature-inspired metaheuristic optimization technique that simulates the social behavior of swarms [29]. PSO operates by iteratively adjusting the position of particles within the search space based on their individual and global best experiences, converging towards the optimal solution. In our case, we set the acceleration parameters \(c_{1}=0.5\), \(c_{2}=0.3\) (trust in itself, trust in its neighbors), and inertia weight \(w=0.9\), with a population of 10 particles.
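A minimal, self-contained sketch of a global-best swarm with the settings quoted above is shown below. Here `error_fn`, which maps a candidate parameter vector to the audio-representation MAE, is a hypothetical callable, and the loop is illustrative rather than the exact implementation used in the experiments.

```python
import numpy as np

def pso(error_fn, lb, ub, n_particles=10, iters=200,
        c1=0.5, c2=0.3, w=0.9, seed=0):
    """Minimal global-best PSO over the PT parameter box of Table 1."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pos = rng.uniform(lb, ub, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([error_fn(p) for p in pos])
    gbest = pbest[pbest_err.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Clipping keeps every candidate inside the physically producible box.
        pos = np.clip(pos + vel, lb, ub)
        err = np.array([error_fn(p) for p in pos])
        better = err < pbest_err
        pbest[better], pbest_err[better] = pos[better], err[better]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest
```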
Trust Region reFlective (TRF): is a computational technique for solving least-squares optimization problems [30]. It employs a model-based method, seeking to minimize a function by iteratively creating simplified models of
\begin{table}
\begin{tabular}{l c c} Pink Trombone Parameters & Lower bound & Upper bound \\ \hline Pitch (Hz) & 75 & 330 \\ Voiceness & 0 & 1 \\ Tongue Index & 14 & 27 \\ Tongue Diameter (cm) & 1.55 & 3 \\ Lips Diameter (cm) & 0.6 & 1.2 \\ Constriction index & 12 & 42 \\ Constriction Diameter (cm) & 0.6 & 1.2 \\ Throat Constriction (cm) & 0.5 & 1.0 \\ \end{tabular}
\end{table}
Table 1: Pink Trombone parameters and their bounds.
the objective function within certain trusted regions. The term "reflective" refers to the method's way of handling boundaries and constraints: if a proposed step hits a boundary, it is reflected back into the feasible region.
Nelder-Mead Method (NM): also known as the downhill simplex method [31], is a multidimensional optimization technique well suited for non-linear problems. The algorithm starts with an initial simplex, a set of \(n+1\) points in an \(n\)-dimensional space, and iteratively updates its position by reflecting, expanding, contracting, or shrinking it, based on the values of the function being optimized at the vertices of the simplex.
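Both the least-squares and simplex searches are available off the shelf; one plausible wiring through SciPy is sketched below, where `residual_fn` and `error_fn` are hypothetical stand-ins for, respectively, a flattened spectrogram difference and a scalar MAE objective.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

lb = np.array([75, 0, 14, 1.55, 0.6, 12, 0.6, 0.5])  # Table 1 lower bounds
ub = np.array([330, 1, 27, 3.0, 1.2, 42, 1.2, 1.0])  # Table 1 upper bounds
x0 = (lb + ub) / 2.0                                 # start at the box center

def residual_fn(params):
    return np.zeros(128)  # placeholder: target minus synthesized spectrogram

def error_fn(params):
    return float(np.abs(residual_fn(params)).mean())  # placeholder scalar MAE

trf_fit = least_squares(residual_fn, x0, bounds=(lb, ub), method="trf")
nm_fit = minimize(error_fn, x0, method="Nelder-Mead")
```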
Covariance Matrix Adaptation Evolution Strategy (CMA-ES):is a stochastic optimization algorithm that uses information about the distribution of the samples generated by the algorithm to guide the search for the optimal solution [32]. The algorithm starts with an initial guess for the solution and then generates a set of samples around this point. The distribution of these samples is then updated based on the fitness of the samples, with higher-fitness samples being more likely to be selected. As the algorithm progresses, outcomes increasingly concentrate around the optimal solution.
Neural network prediction (NN): in addition to the optimization techniques, we tested the capability of neural networks to predict the control variables from the acoustic features. We designed a CNN that takes spectrograms as input and outputs the control variables. To train it, we collected a database of 400,000 different PT clips and trained four networks, one for each audio representation mentioned in subsection 3.3.1. The network was coded with PyTorch 1.7.1, with 2 convolutional layers, ReLU as the activation function, a learning rate of 0.0001, the ADAM optimizer, and a 60/20/20 data split between training, validation, and test.
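This description translates into only a few lines of PyTorch. The sketch below is one plausible reading of that architecture; the layer widths, kernel sizes, pooling stage, and sigmoid output scaling are assumptions, since only the number of convolutional layers, the activation, the optimizer, and the learning rate are specified.

```python
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    """Spectrogram in, 8 normalized PT control parameters out."""
    def __init__(self, n_params: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, n_params)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, freq_bins, time_frames); the sigmoid keeps
        # outputs in [0, 1], to be rescaled to the Table 1 bounds.
        return torch.sigmoid(self.head(self.features(spec).flatten(1)))

model = ParamNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
out = model(torch.randn(4, 1, 128, 64))  # e.g. a batch of MEL-spectrograms
```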
### Materials
Attending to the scope of this contribution, the assessment of the performance achieved by the different techniques, audio representations, and optimizers in predicting the parameters of the physical model for sound-matching required two sets of audio files:
* Synthetic audio samples, generated at a 48 kHz sampling rate and 1 s long. To generate these, we used the programmable version of the PT3 modified to run as a Node.js server. We generated 80 audio clips with random control parameters. Footnote 3: [https://github.com/zakaton/Pink-Trombone](https://github.com/zakaton/Pink-Trombone)
* Audio samples downloaded from Freesound containing utterances from different speakers. We focused on sustained vowels (5 clips) and yawns (8 clips). The vowels are one second long and the yawns are three seconds long on average. All files were recorded at a 48 kHz sampling rate to match the conditions of the synthetic audio.
## 4 Results
In this section, we present the results of the experiments aimed at predicting control parameters for PT. We sought to fix the same conditions for all optimizers to ensure a fair comparison. Some considerations apply to all experiments:
* _Error values are normalized_ with respect to the maximum and minimum values that each parameter can take.
* The _random seed was fixed_ to randomize the PT control parameters in each experiment, such that the optimizers face the same initial conditions in all cases.
* Each experiment was _repeated 20 times_. Initial conditions and target values were randomized.
* All optimizers had the same _stop criterion_: reach an error below 0.0001 in the metric, or fail to improve the relative error with respect to the previous 20 iterations (both conditions are sketched below).
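These conventions are straightforward to encode; a sketch with the Table 1 bounds is given below, where `normalize` and `should_stop` are illustrative helper names rather than the actual experiment code.

```python
import numpy as np

lb = np.array([75, 0, 14, 1.55, 0.6, 12, 0.6, 0.5])  # Table 1 lower bounds
ub = np.array([330, 1, 27, 3.0, 1.2, 42, 1.2, 1.0])  # Table 1 upper bounds

def normalize(params: np.ndarray) -> np.ndarray:
    """Map raw PT parameters to [0, 1] so that errors on parameters
    with very different physical ranges are directly comparable."""
    return (np.asarray(params) - lb) / (ub - lb)

def should_stop(history: list, tol: float = 1e-4, patience: int = 20) -> bool:
    """True once the error drops below tol, or once it has failed to
    improve on the best of the previous `patience` iterations."""
    if history[-1] < tol:
        return True
    return len(history) > patience and history[-1] >= min(history[-patience - 1:-1])
```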
### Optimization of PT-generated sounds
Hereafter we present the results of the different tests that were conducted using PT synthetic audio clips as inputs.
#### 4.1.1 Optimization of one control parameter
In this experiment, we fixed all control parameters except for one. We predicted the unknown value. This set of experiments does not include the CMA-ES algorithm because its particular design does not support single-parameter prediction. The results are shown in Figure 2, for the different optimizers and PT control parameters.
Results demonstrate the effectiveness of GA and PSO in accurately predicting the control parameters. No outliers were observed in the genetic algorithm's performance. The NM algorithm successfully reached the absolute minimum for most parameters, but it struggled to do the same for the pitch and one tongue-related parameter. A closer examination of these parameters revealed that their error functions contain multiple local minima; since the performance of NM is heavily influenced by its initial conditions, it is prone to getting stuck in them.
Despite not always arriving at the optimal values, TRF and NN converge rapidly to a minimum. Once it is trained, the NN takes less than a second to reach the minima. The TRF algorithm takes 5 seconds
Figure 2: Optimizer performance over one control parameter. X-axis includes the optimizers. Y-axis represents the normalized error. Each bar is a control parameter.
on average, which is four times faster than PSO and NM, and 100 times faster than GA.
Along the same lines, Figure 3 illustrates the error associated with each audio representation. All audio representations are suitable for optimizing individual control parameters; notably, the multiresolution representation shows essentially no error. This makes sense, since it is an extension of the STFT that better represents the spectral characteristics of the signal.
#### 4.1.2 All control parameters
The single-parameter experiments validate that articulatory parameters can be predicted from sound representations alone. However, this scenario is too simplified to clarify which optimizer is more accurate. This can be done by increasing the complexity of the experiment, seeking to predict all control parameters at once. The prediction results for each parameter are shown in Figure 4.
Experiments focusing on predicting all parameters demonstrate the superior performance of GA, CMA-ES, and PSO compared to the other methods. In this experiment set, the eight-dimensional search space makes the optimization more challenging. The TRF and NM algorithms yielded unsatisfactory results, rendering them unsuitable for tackling the problem. As the number of potential solutions grows exponentially with increasing dimensions, only the most robust methods can find an optimal solution. Genetic algorithms and PSO outperform least-squares minimization and the downhill simplex method because they are more robust in handling complex search spaces, non-convex functions, and intricate relationships between variables.
On the other hand, it is interesting to observe how the different audio representations behave in this scenario (Figure 5). No representation performs significantly better than the rest, not even the multiresolution one. However, a higher error is not necessarily a serious problem when reconstructing the signal: most control parameters have local minima very close to the global minimum, which means that different PT configurations can produce almost the same reconstruction. This does not apply to the pitch parameter, which is one of the critical parameters for quality and, as can be seen, the MFCCs do not make it easy to reach its minimum.
In fact, Figure 6 illustrates precisely this phenomenon. It shows the MAE between the original and reconstructed signals. Regardless of the optimizer used, when the search space is large, the MFCCs do not achieve satisfactory results. Thus, this experiment indicates that the best prediction of the control parameters is obtained with GA or PSO using the MEL-scale or multiresolution spectrograms.
In addition, Figure 7 shows the computational cost of each optimizer: the NN is the fastest once trained, while PSO is the fastest of the suitable optimization techniques.
#### 4.1.3 All control parameters which vary over time
The next level of complexity we tested was optimizing parameters that vary over time. To achieve this, we created an interpolator that generated intermediate values between two instants in time, each defined by a set of articulatory parameters in the PT. As shown in Figure 8, none of the optimizers achieved a satisfactory result when optimizing a time-variant set of parameters. The only optimizer that came close to zero error was the GA using the STFT. The tendency is that, as more parameters are added, the search space becomes more complex and the absolute minimum becomes very difficult to reach. Note that this setting is not suitable for a neural network, as new networks would have to be trained for each number of parameters to be predicted.
Figure 4: _Optimizer performance when all parameters are predicted at the same time. X-axis includes all optimizers. Y-axis represents the normalized error. Each bar corresponds to an audio representation._
Figure 5: _Audio representation performance while predicting all parameters at a time. X-axis covers the representations. Y-axis represents the normalized error. Each bar corresponds to a control parameter._
Figure 3: _Audio representation performance over one control parameter. X-axis represents each audio representation. Y-axis represents the normalized error. Each bar is a control parameter._
To achieve a more satisfactory, general solution, the best strategy for optimizing signals that vary over time is to segment the signal into small windows and optimize each as if it were a static signal. We tested different window sizes and found that a length of 100 milliseconds performed best. The windows are then connected using a Savitzky-Golay filter [33], which smooths out the result (see the sketch below). The optimization results of these tests are equivalent to those for predicting a non-variant set of parameters, which is why this technique is the preferred choice for optimizing sounds created by humans.
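A compact sketch of this window-and-smooth strategy follows, assuming a hypothetical `optimize_window` callable that returns the eight optimized PT parameters for one audio slice.

```python
import numpy as np
from scipy.signal import savgol_filter

def optimize_time_varying(audio: np.ndarray, fs: int, optimize_window,
                          win_ms: int = 100) -> np.ndarray:
    """Optimize each ~100 ms slice as a static signal, then smooth the
    per-window parameter tracks with a Savitzky-Golay filter."""
    win = int(fs * win_ms / 1000)
    frames = [audio[i:i + win] for i in range(0, len(audio) - win + 1, win)]
    tracks = np.stack([optimize_window(f) for f in frames])  # (n_frames, 8)
    if len(tracks) >= 5:  # filter window must not exceed the track length
        tracks = savgol_filter(tracks, window_length=5, polyorder=2, axis=0)
    return tracks
```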
#### 4.1.4 All control parameters in noise
In these experiments, different amounts of Gaussian white noise were added to the original signal, and we sought to predict the articulatory parameters that defined it. As shown in Figure 9, the optimizers' performance deteriorates as more noise is added. Additionally, we observed that the errors do not start to grow exponentially until the signal-to-noise ratio drops to 20 decibels. All optimizers behave similarly, with the exception of the NN, which does not tolerate noise at its input.
From these experiments, we can conclude that it is possible to optimize signals that are not perfectly generated by a synthesizer but may come from any source, such as a recording from a public database. This finding is significant because it suggests that our approach can be applied in real-world scenarios where the input signals will likely not be perfectly recorded.
### Optimization of real audio files
Results of the tests with real sounds can be seen in Table 2. The results for each perceptual metric are shown for the best-performing optimizer-representation pair. The columns detail the optimizers and the color shows the best audio representation for each case. We used different perceptual metrics to measure how similar the sounds generated by the synthesizer were to human-generated ones. We also include how the perceptual metrics behaved when predicting PT samples, which sets a benchmark for the upper limit that could be reached. Still, we encourage readers to visit our website, where we have published these audio files, and evaluate the quality themselves.
The PT vocal tract is, however, not the one that generated the original sound. Therefore, we do not claim that we can produce an exactly identical sound, but an equivalent one.
For all types of signal, we used the strategy of dividing the signal into small windows and smoothing the results into full-length signals. Systematically, PT-generated sounds are predicted with better scores than human-generated sounds. Furthermore, in many cases we found that yawns are perceptually recognized as more similar than sustained vowels. This is because timbre has a much greater influence on the sustained vowel than on the yawn. The PT has vocal characteristics that do not match those of the person who recorded the sounds; for this reason, it is more difficult to recreate a perfectly harmonic voice, like the vowel, than a noisy sound, like the yawn. This does not imply that our optimizers are malfunctioning, as the goal is to create comparable sounds, not exactly the same ones. The STOI metric is very representative of this situation, giving the yawn a score almost equal to the PT-generated values, while the vowel is perceived as different.
These experiments also yield two interesting insights. First, one may identify optimizer-representation combinations that perform better than others. In particular, the multiscale representation works well for yawns, while for sustained vowels the STFT representation can do the job; none performed well using MFCCs. Thus, one may need to take the type of signal into account to get good results from the optimizer. Second, there is consistency among the perceptual metrics: the more challenging experiments consistently score worse than the simpler ones.
## 5 Conclusion
Optimization techniques effectively predict the parameters of the Pink Trombone to produce human-like vocalisations. The selected algorithms delivered tuned control parameters while operating on different acoustic features and metrics. The resulting audio samples match the selected input sounds regarding the absolute error and the perceptual-equivalent metrics. A similar trend was observed for sustained vowels and yawns, as well as under additive noise conditions. Nonetheless, the lower performance levels for the collected audio samples compared to the synthetic inputs, in absolute error and according to the perceptual-equivalent metrics, may be influenced by the limited ability of the Pink Trombone to match sounds outside its standard tract setting.
We comprehensively evaluated some of the most commonly used optimization algorithms in a black-box approach, predicting the synthesizer's control parameters to synthesize non-speech sounds. We tested different audio representations and conducted experiments in different scenarios, ranging from simple single-parameter predictions to complex, time-varying parameters and non-speech human-made sounds. Our results show that the evolutionary methods (GA and CMA-ES) and Particle Swarm Optimization with the multiresolution representation are the most effective for predicting control parameters with minimum error (MAE \(<1\%\)) and high quality (ViSQOL 4.3 for PT, PEAQ 3.0 for yawns). Also, PSO achieved the best performance vs. computational cost ratio.
According to our results, the GA and PSO algorithms were superior to the other optimization methods in most cases. The NM algorithm struggled with local minima, and the TRF algorithm, although fast, could not optimize the parameters satisfactorily. The NN could predict the control parameters when the input was a sound generated by the PT, but it failed when confronted with real sounds or noisy inputs. The NN strategy can be useful in certain scenarios, since it is also very fast once trained, but it can hardly reach the generalization of the GA.
Regarding audio representations, our experiments demonstrate that all those that we tested, including MFCC, STFT, MEL, and multiresolution decomposition, are suitable for optimizing individual control parameters. However, the MFCC representation showed poorer pitch prediction capability than other representations. This is consistent with what is expected from a cepstral representation according to the literature.
Perceptual metrics validate that the optimizers are able to faithfully predict audio samples generated by the synthesizer itself. Taking this benchmark, we can observe that real sounds do not reach such a high performance. The conclusion is that our synthesizer cannot achieve certain characteristics of real voices in the given conditions (e.g. vocal tract size). Nevertheless, comparable sounds have been achieved, which is the goal of our research.
Future research should explore new techniques that enhance the prediction of time-varying signals. Further analysis is needed to investigate the flow of the parameters, and whether it aligns with the typical configurations of a human vocal tract or is an artifact of the chosen optimization strategy. Additionally, hierarchical optimization could improve the neural network's performance: predictions would be conducted in two stages, initially narrowing the bounds and then fine-tuning.
Finally, future work should use this framework to benchmark other solutions to the problem, including alternative optimization methods, acoustic features, metrics, and subjective tests.
\begin{table}
\begin{tabular}{c c|c c c c c c|c c|c} & & GA & PSO & TRF & NM & CMA-ES & NN & Best & Result & Legend \\ \cline{2-11} PESQ & PT & \(2.2\pm 0.9\) & \(2.1\pm 1.2\) & \(1.8\pm 1.0\) & \(2.2\pm 1.0\) & \(2.6\pm 0.9\) & \(1.8\pm 0.9\) & CMA-ES & \(2.6\pm 0.9\) & mel \\ & VW & \(1.8\pm 0.8\) & \(1.5\pm 0.4\) & \(1.6\pm 0.6\) & \(1.8\pm 0.7\) & \(1.5\pm 0.5\) & \(1.8\pm 1.2\) & GA & \(1.8\pm 0.8\) & mfcc \\ & Y & \(1.5\pm 0.4\) & \(1.3\pm 0.3\) & \(1.4\pm 0.2\) & \(1.3\pm 0.1\) & \(1.3\pm 0.2\) & \(1.5\pm 0.7\) & GA & \(1.5\pm 0.4\) & multiscale \\ \cline{2-11} & PT & \(3.0\pm 0.6\) & \(3.2\pm 0.8\) & \(2.6\pm 0.6\) & \(3.0\pm 0.9\) & \(3.5\pm 0.7\) & \(3.0\pm 0.8\) & CMA-ES & \(3.5\pm 0.7\) & stft \\ PEAQ & VW & \(2.8\pm 0.7\) & \(2.2\pm 0.1\) & \(2.6\pm 0.7\) & \(2.5\pm 0.7\) & \(2.5\pm 0.6\) & \(2.6\pm 0.7\) & GA & \(2.8\pm 0.7\) & \\ & Y & \(2.5\pm 0.8\) & \(3.0\pm 1.1\) & \(2.7\pm 0.7\) & \(2.6\pm 0.7\) & \(2.7\pm 1.0\) & \(2.8\pm 1.0\) & PSO & \(3.0\pm 1.1\) & \\ \cline{2-11} & PT & \(3.1\pm 0.9\) & \(3.4\pm 0.8\) & \(1.7\pm 0.5\) & \(3.3\pm 1.2\) & \(4.3\pm 0.7\) & \(3.0\pm 0.6\) & CMA-ES & \(4.3\pm 0.7\) & \\ ViSQOL & VW & \(1.9\pm 0.5\) & \(2.0\pm 0.2\) & \(2.2\pm 0.3\) & \(1.9\pm 0.3\) & \(2.1\pm 0.4\) & \(1.8\pm 0.4\) & CMA-ES & \(2.1\pm 0.4\) & \\ & Y & \(2.1\pm 0.1\) & \(2.1\pm 0.1\) & \(2.3\pm 0.3\) & \(2.3\pm 0.4\) & \(2.1\pm 0.1\) & \(1.9\pm 0.2\) & TRF & \(2.3\pm 0.3\) & \\ \cline{2-11} & PT & \(0.3\pm 0.2\) & \(0.4\pm 0.3\) & \(0.1\pm 0.1\) & \(0.5\pm 0.4\) & \(0.5\pm 0.3\) & \(0.2\pm 0.1\) & CMA-ES & \(0.5\pm 0.3\) & \\ STOI & VW & \(0.1\pm 0.1\) & \(0.1\pm 0.0\) & \(0.1\pm 0.0\) & \(0.1\pm 0.1\) & \(0.1\pm 0.0\) & CMA-ES & \(0.1\pm 0.0\) & \\ & Y & \(0.3\pm 0.1\) & \(0.3\pm 0.1\) & \(0.3\pm 0.1\) & \(0.3\pm 0.1\) & \(0.3\pm 0.1\) & \(0.1\pm 0.1\) & - & \(0.3\pm 0.1\) \\ \cline{2-11} & & & & & & & & & & \\ \end{tabular}
\end{table}
Table 2: Perceptual-equivalent metrics of the real sounds. Type “PT” stands for “Pink Trombone” generated, “VW” for sustained “Vowel”, and “Y” for “Yawn”. All metrics are on a MOS scale (from 1 to 5) except STOI (from 0 to 1). We used a color code to indicate the best-performing set of acoustic parameters per audio type and optimizer.
## 6 Acknowledgments
The authors would like to thank David Sudholt for the valuable discussions, support and help. Activities described in this contribution were partially funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 101003750, the Ministry of Economy and Competitiveness of Spain under grant PID2021-128469OB-I00, and the UPM Research Programme, _Programa Propio de I+D+I 2022_.
|
2309.12310 | Model non-Hermitian topological operators without skin effect | We propose a general principle of constructing non-Hermitian (NH) operators
for insulating and gapless topological phases in any dimension ($d$) that over
an extended NH parameter regime feature real eigenvalues and zero-energy
topological boundary modes, when in particular their Hermitian cousins are also
topological. However, the topological zero modes disappear when the NH
operators accommodate complex eigenvalues. These systems are always devoid of
NH skin effects, thereby extending the realm of the bulk-boundary
correspondence to NH systems in terms of solely the left or right zero-energy
boundary localized eigenmodes. We showcase these general and robust outcomes
for NH topological insulators in $d=1,2$ and $3$, encompassing their
higher-order incarnations, as well as for NH topological Dirac, Weyl and
nodal-loop semimetals. Possible realizations of proposed NH topological phases
in designer materials, optical lattices and classical metamaterials are
highlighted. | Daniel J. Salib, Sanjib Kumar Das, Bitan Roy | 2023-09-21T17:59:29Z | http://arxiv.org/abs/2309.12310v1 | # Model non-Hermitian topological operators without skin effect
###### Abstract
We propose a general principle of constructing non-Hermitian (NH) operators for insulating and gapless topological phases in any dimension (\(d\)) that over an extended NH parameter regime feature real eigenvalues and zero-energy topological boundary modes, when in particular their Hermitian cousins are also topological. However, the topological zero modes disappear when the NH operators accommodate complex eigenvalues. These systems are always devoid of NH skin effects, thereby extending the realm of the bulk-boundary correspondence to NH systems in terms of solely the left or right zero-energy boundary localized eigenmodes. We showcase these general and robust outcomes for NH topological insulators in \(d=1,2\) and \(3\), encompassing their higher-order incarnations, as well as for NH topological Dirac, Weyl and nodal-loop semimetals. Possible realizations of proposed NH topological phases in designer materials, optical lattices and classical metamaterials are highlighted.
_Introduction._ Nontrivial topology and geometry of electronic wavefunctions in the bulk of quantum crystals leave signatures at the boundaries (edges, surfaces, hinges and corners) in terms of robust gapless modes therein: a phenomenon known as the bulk-boundary correspondence (BBC). It plays a prominent role in the identification of topological crystals in nature and is germane for topological insulators (TIs) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12], topological semimetals (TSMs) [13; 14; 15; 16; 17; 18], and topological superconductors [19; 20; 21; 22; 23]. Broadly topological phases can be classified according to the co-dimension (\(d_{c}\)) of the associated boundary modes, where \(d_{c}=d-d_{B}\) and \(d\) (\(d_{B}\)) is the dimensionality of the system (boundary modes). Thus an \(n\)th order topological phase hosts boundary modes of \(d_{c}=n\). For example, three-dimensional topological crystals supporting surface (\(d_{B}=2\)), hinge (\(d_{B}=1\)) and corner (\(d_{B}=0\)) modes are tagged as first-, second- and third-order topological phases, respectively [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
An attempt to extend the realm of these topological phases to open quantum materials leads to non-Hermitian (NH) operators, although their exact connection with the system-to-environment interactions thus far remains elusive. Nevertheless, desired NH operators, if simple, can in principle be engineered on optical lattices [37] and in classical metamaterials [38; 39; 40]. Typically, the NH operators display the NH skin effect: an accumulation of all the left and right eigenvectors at the opposite ends of a system with open boundary conditions [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76]. Naturally, it masks the BBC in terms of left or right eigenmodes, which nonetheless is captured by their bi-orthogonal product [61], even though its direct experimental measurement remains challenging. Therefore, the construction of NH topological operators, featuring the BBC in terms of their left or right eigenvectors and thus devoid of the NH skin effect, is of pressing theoretical and, more crucially, experimental importance.
Here, we outline a general principle of constructing such NH operators for TIs and TSMs in any dimension as an extension of their Hermitian counterparts, which we explicitly exemplify for systems of dimensionality \(d\leq 3\). We show that the NH operators display the BBC in terms of robust zero-energy boundary modes, when all its eigenvalues are purely real. But, the system becomes trivial when the eigenvalues are complex. See Figs. 1-5.
_Topological models._ Our construction of NH topological operators is greatly facilitated by reviewing the universal model Bloch Hamiltonian for \(d\)-dimensional Hermitian topological phases, which can be decomposed as
\[H_{\text{Her}}(\mathbf{k})=H_{\text{Dir}}(\mathbf{k})+H_{\text{Wil}}(\mathbf{k})+H_{\text{ HOT}}(\mathbf{k}). \tag{1}\]
The lattice regularized Dirac kinetic energy stems from the nearest-neighbor (NN) hopping of amplitude \(t\) between orbitals of opposite parities, and it is given by
\[H_{\text{Dir}}(\mathbf{k})=t\sum_{j=1}^{d}\sin(k_{j}a)\Gamma_{j}, \tag{2}\]
where \(a\) is the lattice constant in a \(d\)-dimensional hypercubic lattice, momentum \(\mathbf{k}=(k_{1},\cdots,k_{d})\), and \(k_{1}\), \(k_{2}\) and \(k_{3}\) should be identified as \(k_{x}\), \(k_{y}\) and \(k_{z}\), respectively, for example. All the Hermitian \(\Gamma\) matrices appearing in this work satisfy the anticommuting Clifford algebra \(\{\Gamma_{j},\Gamma_{l}\}=2\delta_{jl}\) for any \(j\) and \(l\). Their dimensionality, explicit representations and the internal structure of the associated Dirac spinor (\(\Psi\)) depend on the microscopic details, which we reveal while discussing specific models.
The (first-order) Wilson mass, preserving all the non-spatial and crystal symmetries, and thus transforming under the trivial singlet \(A_{1g}\) representation of any crystallographic point group, is \(H_{\text{Wil}}(\mathbf{k})=\Gamma_{d+1}m(\mathbf{k})\), where
\[m(\mathbf{k})=\Delta_{1}-2B\bigg{[}d-\sum_{j=1}^{d}\cos(k_{j}a)\bigg{]}+\sum_{s=1 }^{p}t_{s}\cos(k_{d+s}a). \tag{3}\]
For now, we switch off the symmetry-preserving hopping processes out of the \(d\)-dimensional hyperplane by setting \(t_{s}=0\) for all \(s\). Then the first-order Wilson mass features a band inversion within the parameter regime \(0<\Delta_{1}/B<4d\), where \(H_{\text{Her}}^{\text{Ins}}(\mathbf{k})=H_{\text{Dir}}(\mathbf{k})+H_{\text{Wil}}(\mathbf{k})\) describes a \(d\)-dimensional first-order TI, hosting zero-energy gapless boundary modes of \(d_{c}=1\). Prominent examples are end
modes of the Su-Schrieffer-Heeger insulator [77; 78; 79], edge modes of quantum anomalous [80] and spin Hall [5; 6] insulators, and surfaces states of three-dimensional strong \(Z_{2}\) TIs [7; 8; 9; 10]. Notice that \(H_{\rm Dir}(\mathbf{k})\) and thus \(H_{\rm Her}^{\rm Ins}(\mathbf{k})\) also transform under the \(A_{1g}\) representation.
A hierarchy of higher-order TIs is generated by the discrete symmetry breaking Wilson masses [34; 36]
\[H_{\rm HOT}(\mathbf{k})=\Delta_{2}\Gamma_{d+2}\;d_{x^{2}-y^{2}}(\mathbf{k})+\Delta_{3} \Gamma_{d+3}\;d_{3z^{2}-r^{2}}(\mathbf{k}), \tag{4}\]
where \(d_{x^{2}-y^{2}}(\mathbf{k})=\cos(k_{1}a)-\cos(k_{2}a)\) and \(d_{3z^{2}-r^{2}}(\mathbf{k})=2\cos(k_{3}a)-\cos(k_{1}a)-\cos(k_{2}a)\). The term proportional to \(\Delta_{2}\) (\(\Delta_{3}\)) is pertinent only for \(d\geq 2\) (\(d\geq 3\)). While \(d_{x^{2}-y^{2}}(\mathbf{k})\) transforms under the singlet \(B_{1g}\) representation of the tetragonal point group (\(D_{4h}\)) in \(d=2\), \(d_{x^{2}-y^{2}}(\mathbf{k})\) and \(d_{3z^{2}-r^{2}}(\mathbf{k})\) transform under the doublet \(E_{g}\) representation of the cubic point group (\(O_{h}\)) in \(d=3\). By virtue of the anticommutation relation among all the \(\Gamma\) matrices appearing in \(H_{\rm Her}(\mathbf{k})\), \(H_{\rm HOT}(\mathbf{k})\) acts as a mass for the gapless boundary modes of the first-order TIs in \(d>1\), and partially gaps them out, thereby yielding boundary modes with \(d_{c}>1\) and higher-order TIs.
As such, a finite \(\Delta_{2}\) converts a parent first-order TI into a second-order TI. Specifically, in \(d=2\) it hosts four zero energy modes localized at the corners in the body diagonal directions (\(k_{1}=\pm k_{2}\)) along which \(d_{x^{2}-y^{2}}(\mathbf{k})\) vanishes [34]. But, in \(d=3\) it features four \(z\)-directional hinge modes and gapless surface states on the top and bottom \(xy\) planes of a cubic crystal, exactly where \(d_{x^{2}-y^{2}}(\mathbf{k})\) vanishes [28]. Subsequently, a finite \(\Delta_{2}\) and \(\Delta_{3}\) produce a third-order TI in \(d=3\), supporting zero modes at eight corners of a cubic crystal, placed on its body diagonals (\(k_{1}=\pm k_{2}=\pm k_{3}\)), only along which both \(d_{x^{2}-y^{2}}(\mathbf{k})\) and \(d_{3z^{2}-r^{2}}(\mathbf{k})\) vanish simultaneously [36]. One can continue this construction in \(d\geq 3\) to realize the hierarchy higher-order TIs therein. However, we restrict ourselves to \(d\leq 3\).
Finally, we consider the terms proportional to \(t_{s}\) [Eq. (3)], yielding \((d+p)\)-dimensional weak topological phases by stacking \(d\)-dimensional \(n\)th order TIs. Depending on the parameter values (\(\Delta_{1}/B\) and \(t_{s}/B\)), the weak topological phase can be either gapless (known as TSMs) or insulating (trivial or weak TI). Their gapless boundary modes appear only along the stacking direction, obtained by placing the zero-energy modes of the parent \(n\)th order TI in that direction. Some well known examples are the Fermi arcs of Dirac and Weyl
Figure 2: Non-Hermitian Qi-Wu-Zhang model. (a) Eigenvalues for \(\alpha=0.5\) with PBCs and OBCs, confirming their reality condition and the existence of topological edge modes (inset) for \(|\alpha|<1\). (b) Amplitude square of the right (\(\beta=R\)) or left (\(\beta=L\)) eigenvectors of two closest to zero energy modes, showing their sharp edge localization. (c) The same as (b), but for all the right or left eigenvectors, showing no left-right or top-bottom asymmetry about the center of the system, thus no NH skin effect. (d) Complex eigenvalues for \(\alpha=10\), showing absence of any zero energy topological mode and NH skin effect for \(|\alpha|>1\). Here, we set \(t=B=1\) and \(\Delta_{1}=6\).
Figure 1: Non-Hermitian Su-Schrieffer-Heeger model. (a) Eigenvalue spectrum for \(\alpha=0.5\) with a periodic boundary condition (PBC) and an open boundary condition (OBC), showing their reality condition and the existence of two near zero-energy topological modes (inset) for \(|\alpha|<1\). (b) Amplitude square of the right (\(\beta=R\)) or left (\(\beta=L\)) eigenvectors of two zero-energy modes, showing their sharp localization near the ends of the chain. (c) The same as (b), but for all the right or left eigenvectors, showing no left-right asymmetry and confirming the absence of any NH skin effect (inset). (d) Eigenvalues for \(\alpha=10\), showing its generic complex nature, the absence of any zero-energy topological modes and skin effect for \(|\alpha|>1\). Here, we set \(t=B=1\) and \(\Delta_{1}=1\).
semimetals [13; 14; 15; 81; 82], drumhead surface states of nodal-loop semimetals [83; 84; 85] and hinge modes of higher-order Dirac semimetals [86; 87; 85; 84], which we will discuss in the context of NH TSMs. In this work, we focus on TSMs, although our results apply equally well for weak TIs.
_NH operators._ The stage is now set to construct the desired NH topological operators. The key observation is that the first-order Wilson mass matrix \(\Gamma_{d+1}\) anticommutes with \(H_{\rm Dir}(\mathbf{k})\) and \(H_{\rm HOT}(\mathbf{k})\). So, the products \(\Gamma_{d+1}H_{\rm Dir}(\mathbf{k})\) and \(\Gamma_{d+1}H_{\rm HOT}(\mathbf{k})\) are _anti-Hermitian_, as \((\Gamma_{d+1}\Gamma_{j})^{\dagger}=-\Gamma_{d+1}\Gamma_{j}\) for \(j=1,\cdots,d,d+2,d+3\). We therefore define a NH generalization of all the topological phases in terms of the NH operator
\[H_{\rm NH}(\mathbf{k})=H_{\rm Her}(\mathbf{k})+\alpha\ \Gamma_{d+1}\left[H_{\rm Dir}(\mathbf{k })+H_{\rm HOT}(\mathbf{k})\right]. \tag{5}\]
The parameter \(\alpha\) quantifies the strength of the non-Hermiticity. Since all the matrices in \(H_{\rm NH}(\mathbf{k})\) are mutually anticommuting, its eigenvalues are \(\pm E_{\rm NH}(\mathbf{k})\), where
\[E_{\rm NH}(\mathbf{k})=\bigg{[}(1-\alpha^{2})\bigg{\{}t^{2}\sum_{j=1}^{d}\sin^{2}(k_{j}a)+\Delta_{2}^{2}d_{x^{2}-y^{2}}^{2}(\mathbf{k})+\Delta_{3}^{2}d_{3z^{2}-r^{2}}^{2}(\mathbf{k})\bigg{\}}+m^{2}(\mathbf{k})\bigg{]}^{1/2}. \tag{6}\]
For \(\alpha=0\), we recover the energy spectra of the Hermitian systems. For \(|\alpha|<1\) all the eigenvalues are purely real, showing a line gap. For \(|\alpha|>1\) they are in general complex, with a point gap. These outcomes are insensitive to the real-space boundary condition. The NH operator \(H_{\rm NH}(\mathbf{k})\) also obeys some non-spatial symmetries [63; 65; 73]. If \(H_{\rm Her}(\mathbf{k})\) preserves the time-reversal (\(\mathcal{T}\)) and particle-hole (\(\mathcal{C}\)) symmetries [11], then \(\mathcal{T}H_{\rm NH}^{*}(\mathbf{k})\mathcal{T}^{-1}=H_{\rm NH}(-\mathbf{k})\) and \(\mathcal{C}H_{\rm NH}^{T}(\mathbf{k})\mathcal{C}^{-1}=-H_{\rm NH}(-\mathbf{k})\). But, \(H_{\rm NH}(\mathbf{k})\) lacks the sublattice and pseudo-Hermiticity symmetries, by construction.
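The reality condition encoded in Eq. (6) is easy to verify numerically. The following sketch, which is a numerical check rather than part of the original calculations, builds the \(d=1\) (NH Su-Schrieffer-Heeger) version of Eq. (5) with sample parameters \(t=B=\Delta_{1}=1\) and \(a=1\), and confirms a purely real spectrum for \(|\alpha|<1\) but complex eigenvalues for \(|\alpha|>1\).

```python
import numpy as np

t1 = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli tau_1 (Gamma_1)
t2 = np.array([[0, -1j], [1j, 0]])              # Pauli tau_2 (Gamma_2)

def h_nh(k, alpha, t=1.0, B=1.0, delta1=1.0):
    """d = 1 version of Eq. (5): H_Dir + m(k) Gamma_2 + alpha Gamma_2 H_Dir."""
    m = delta1 - 2.0 * B * (1.0 - np.cos(k))    # Eq. (3) with t_s = 0, a = 1
    h_dir = t * np.sin(k) * t1
    return h_dir + m * t2 + alpha * (t2 @ h_dir)

for alpha in (0.5, 2.0):
    ks = np.linspace(-np.pi, np.pi, 201)
    evs = np.concatenate([np.linalg.eigvals(h_nh(k, alpha)) for k in ks])
    print(f"alpha = {alpha}: max |Im E| = {np.abs(evs.imag).max():.3f}")
    # ~0 for alpha = 0.5 (real spectrum); finite for alpha = 2.0
```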
Most importantly, as \(\Gamma_{d+1}\) transforms under the \(A_{1g}\) representation, \(\Gamma_{d+1}H_{\rm Dir}(\mathbf{k})\) and \(\Gamma_{d+1}H_{\rm HOT}(\mathbf{k})\) preserve all the spatial symmetries of the Hermitian system and do not break any new crystal symmetry that has not been already broken in the Hermitian limit. Therefore, the eigenmodes of \(H_{\rm NH}(\mathbf{k})\) do not show any NH skin effect, by construction. Furthermore, as the anti-Hermitian component of \(H_{\rm NH}(\mathbf{k})\) and the Hermitian operator \(H_{\rm Dir}(\mathbf{k})+H_{\rm HOT}(\mathbf{k})\) vanish exactly at the same high symmetry time-reversal invariant momentum (TRIM) points in the Brillouin zone, the topological bound states (when present) are always pinned at zero energy, as in the Hermitian systems. Finally, we show that such topological zero-energy bound states exist only for \(0<\Delta_{1}/B<4d\)
Figure 3: Non-Hermitian second-order topological insulator in \(d=2\). (a) Real eigenvalue spectrum with PBCs and OBCs, accommodating four near zero-energy topological modes (inset) for \(\alpha=0.5\). (b) Amplitude square of the right (\(\beta=R\)) or left (\(\beta=L\)) eigenvectors of four closest to zero-energy modes, showing their sharp corner localization. (c) Same as (b), but for all the right or left eigenvectors, showing no left-right or top-bottom asymmetry about the center of the system (no NH skin effect). (d) Generic complex eigenvalues, and the absence of zero-energy topological modes and NH skin effect for \(\alpha=10\). Here, we set \(t=B=1\), \(\Delta_{1}=6\) and \(\Delta_{2}=1\).
Figure 4: Hierarchy of non-Hermitian topological insulators in \(d=3\). Real eigenvalue spectrum for \(\alpha=0.5\) with PBCs and OBCs, showing the existence of near zero-energy topological modes (insets) for (a) first-order, (b) second-order and (c) third-order TIs. Amplitude square of the right or left eigenvectors of near zero-energy modes, showing sharp localization (d) on six surfaces, (e) on four \(z\)-directional hinges and two \(xy\) surfaces, and (f) at eight corners. Panels (g)-(i) are same as (a)-(c), respectively, but for \(\alpha=10\), showing complex eigenvalue spectrum and the absence of near zero-energy modes. Here, we set \(t=B=1\), \(\Delta_{1}=10\), \(\Delta_{2}=1\) and \(\Delta_{3}=1\).
and \(|\alpha|<1\), when the eigenvalues of \(H_{\rm NH}(\mathbf{k})\) are purely real. Next, we anchor these general outcomes for various paradigmatic models for topological phases of matter in one, two, and three dimensions.
_One dimension_. A NH Su-Schrieffer-Heeger model [77; 78; 79] in one dimension can be defined by taking \(\Gamma_{1}=\tau_{1}\) and \(\Gamma_{2}=\tau_{2}\). The Pauli matrices \(\mathbf{\tau}\) operate on the orbital degrees of freedom. The results are shown in Fig. 1. Analytical solutions of the topological modes can be obtained by considering a hard-wall boundary at \(x=0\) such that \(\Psi_{0}^{R}(x=0)=0\) in a semi-infinite system occupying the region \(x\geq 0\), together with \(\Psi_{0}^{R}(x\to\infty)=0\). Here, the superscript '\(R\)' denotes a right eigenvector. Such a mode can only be found at zero energy, explicitly given by
\[\Psi_{0}^{R}(x)=A\left(\begin{array}{c}1\\ \frac{t\alpha\lambda_{+}}{t\lambda_{+}+\Delta_{1}+B\lambda_{+}^{2}}\end{array} \right)\sum_{\delta=\pm}\left[\delta\exp(-\lambda_{\delta}x)\right], \tag{7}\]
where \(A\) is the overall normalization constant, and
\[\lambda_{\delta}=\frac{t}{2B}\sqrt{1-\alpha^{2}}+\delta\sqrt{\frac{t^{2}}{4B^ {2}}(1-\alpha^{2})-\frac{\Delta_{1}}{B}}. \tag{8}\]
Hence, zero-energy topological bound state can only be found if \(|\alpha|<1\), for which \(\Re(\lambda_{\delta})>0\). As \(\alpha\to 1\), such a mode becomes more delocalized. At \(\alpha=1\), the modes living on two opposite ends of the one-dimensional chain hybridize, and they disappear for \(|\alpha|>1\).
The topological modes for the first-order TIs in \(d>1\) are obtained as the zero-energy bound states with a hard-wall boundary condition in a direction along which the translational symmetry is broken, following the steps outlined above. Subsequently, their dispersive nature is revealed by computing the matrix elements of the remaining part of the Hamiltonian, with conserved momentum in the orthogonal direction(s), within the subspace of the zero modes [12]. In higher-order TIs, topological modes of reduced dimensionality are realized by partially gapping the ones for the first-order TIs by the discrete symmetry breaking Wilson mass(es). This procedure applies to all the NH TIs within our construction. Hence, the above exercise proves that topological modes in NH TIs of any order in any dimension can only be found at zero energy and when \(|\alpha|<1\). So, we only provide its numerical evidence for all the remaining cases.
_Two dimensions_. A NH generalization of the Qi-Wu-Zhang model [80], describing NH Chern insulators, is realized for \(\Gamma_{j}=\tau_{j}\) for \(j=1,2,3\). The results are shown in Fig. 2. Within the topological regime, this model hosts chiral edge modes. A NH version of the Bernevig-Hughes-Zhang model [6] for a NH quantum spin Hall insulator is realized for \(\Gamma_{j}=\sigma_{3}\tau_{j}\) for \(j=1,2,3\). The Pauli matrices \(\mathbf{\sigma}\) act on the spin indices. All the results are identical to those for the NH Chern insulator, except each mode now enjoys two-fold Kramers degeneracy. So, we do not show them here. Within the topological regime, this model sustains counter-propagating helical edge modes for opposite spin projections. A NH second-order TI, featuring four zero-energy corner modes [24; 34], can be realized with the addition of the \(\Delta_{2}\) term, accompanied by the matrix \(\Gamma_{4}=\sigma_{1}\tau_{0}\), to the NH quantum spin Hall insulator model. The results are shown in Fig. 3.
_Three dimensions_. A three-dimensional NH first-order TI, supporting surface states on all six surfaces of a cubic crystal, is obtained with \(\Gamma_{j}=\sigma_{3}\tau_{j}\) for \(j=1,2,3\) and \(\Gamma_{4}=\sigma_{1}\tau_{0}\). A NH second-order TI, supporting four \(z\)-directions hinge and \(xy\) surface modes, is realized when \(\Delta_{2}\), accompanied by \(\Gamma_{5}=\sigma_{2}\tau_{0}\), is finite. A three-dimensional NH third-order TI is realized when both the terms proportional to \(\Delta_{2}\) and \(\Delta_{3}\) are finite. As now \(H_{\rm Her}(\mathbf{k})\) involves six mutually anticommuting Hermitian \(\Gamma\) matrices, their minimal dimensionality is eight [36; 88]. We choose \(\Gamma_{j}=\eta_{3}\sigma_{3}\tau_{j}\) for \(j=1,2,3\), \(\Gamma_{4}=\eta_{3}\sigma_{1}\tau_{0}\), \(\Gamma_{5}=\eta_{3}\sigma_{2}\tau_{0}\) and \(\Gamma_{6}=\eta_{1}\sigma_{0}\tau_{0}\). The set of Pauli matrices \(\mathbf{\eta}\) operate on the sublattice degrees of freedom. Then, its NH version \(H_{\rm NH}(\mathbf{k})\) [Eq. (5)] supports eight zero-energy corner modes. All the results are shown in Fig. 4.
_NH Topological semimetals_. By stacking NH TIs, one can realize NH TSMs depending on \(\Delta_{1}/B\) and \(t_{s}/B\). They are also devoid of any NH skin effect and support gapless boundary modes when \(|\alpha|<1\). Here, we discuss some key examples. NH Su-Schrieffer-Heeger insulators, stacked in the \(y\) direction, can produce a NH Dirac semimetal in \(d=2\) (like graphene) that supports
Fermi arcs between two Dirac points, located along the \(k_{y}\) axis. By continuing such stacking in the \(z\) direction, we can find a NH nodal-loop semimetal, supporting drum-head surface states on the \((k_{y},k_{z})\) planes. By the same token, stacked (in the \(z\) direction) NH Chern insulators yield a three-dimensional NH Weyl semimetal with surface Fermi arcs occupying the \((k_{z},x)\) and \((k_{z},y)\) planes in between two Weyl nodes along \(k_{z}\). And stacking of NH second-order TIs produces NH higher-order Dirac semimetal in \(d=3\), featuring only \(z\) directional hinge modes localized within the Dirac nodes on the \(k_{z}\) axis. These outcomes for specific choices of \(\Delta_{1}/B\) and \(t_{s}/B\) are shown in Fig. 5. Notice that topological boundary modes from opposite ends of the system get connected via bulk nodal points or loops in all the NH TSMs, where the localization length of the zero-modes of the underlying NH TIs diverges. Weak TIs obtained in the same way are also devoid of any NH skin effect and accommodate topological boundary modes, which, however, occupy the entire Brillouin zone along the stacking direction(s).
_Discussion & outlook._ Here we unfold a general principle of constructing NH insulating and nodal topological phases in any dimension that are always devoid of NH skin effects. In the topological regime, they showcase the BBC in terms of either the right or left zero-energy eigenmodes, when all the eigenvalues of the NH operators are purely real. The systems become trivial when these eigenvalues are complex. See Figs. 1-5. In order to numerically ensure the bi-orthonormality condition \(\langle\Psi_{i}^{L}|\Psi_{j}^{R}\rangle=\delta_{ij}\) between the real space left (\(L\)) and right (\(R\)) eigenmodes of \(H_{\text{NH}}(\mathbf{k})\) with eigenvalues \(E_{i}\) and \(E_{j}\), respectively, we sometime have to add an extremely small amount of random charge disorder (\(\sim 10^{-4}-10^{-6}\)). Our construction can be immediately generalized for NH crystalline topological phases, as in the Hermitian limit their universal Bloch Hamiltonian takes the form of \(H_{\text{Her}}(\mathbf{k})\), however, involving longer-range hopping processes (beyond NN), allowed crystal symmetries [89; 90; 91]. In the future, it will be worthwhile extending this construction for NH topological superconductors. The quantum critical points separating NH topological and trivial insulators are described by NH Dirac or Weyl fermions. Stability of such NH critical points against electronic interactions [92; 93] and disorder is still in its infancy.
Simplicity of our construction for the NH topological operators should make them realizable on multiple platforms, as \(H_{\text{NH}}(\mathbf{k})\) involves only NN hopping amplitudes and on-site staggered potential [Eq. (5)]. For example, electronic designer materials [94; 95; 96; 97; 98] and optical lattices constitute a promising quantum platform where these operators and the resulting NH topological phases can be realized. In the former system, a hopping imbalance (yielding non-Hermiticity) can be engineered by placing electronic valves along the NN bonds, permitting unidirectional electronic hopping, however, set at different operational voltages in the opposite directions. On optical lattices, it can be achieved with different laser intensities in the opposite directions along the NN bonds, as recently has been demonstrated on one-dimensional chains [37]. Topological modes in these setups can be detected by the standard scanning tunneling spectroscopy since the proposed NH topological phases are always devoid of the NH skin effect.
Classical metamaterials, such as photonic and mechanical lattices, as well as topolectric circuits, constitute yet another set of viable avenues along which the predicted NH topology can be experimentally displayed. On all these platforms, tunable NN hopping can be implemented, and a plethora of NH topological phases with NH skin effects has already been realized [38; 39; 50]. The absence of the NH skin effect for all our NH operators should allow the detection of classical topological modes in these systems using well-developed tools (already applied to Hermitian topological systems), such as two-point pump-probe spectroscopy (on photonic lattices) and mechanical (on mechanical lattices) or electrical (on topolectric circuits) impedance. The current discussion should therefore stimulate a new surge of experimental works exploring the BBC in skin effect-free NH systems in terms of solely the left and right topological eigenmodes.
_Acknowledgments._ D.J.S. was supported by NSF CAREER Grant No. DMR-2238679 of B.R. and S.K.D. was supported by the Startup Grant of B.R. from Lehigh University.
|
2309.09042 | Medium modification of pion production in low energy Au+Au collisions | There is a major mismatch between the charged pion yields in Au+Au collisions at low energies calculated by various transport models and the experimentally measured values from the Hades collaboration. In this work, reasonable improvements to the equation of state, in-medium modification of cross sections, and the influence of the nuclear potential for Delta resonances will be investigated in the framework of the GiBUU transport model. As a result, we demonstrate that theoretical calculations can indeed describe the charged pion yields measured by Hades for Au+Au collisions rather well, but that a mismatch then remains between calculations and data for the yields of neutral pions extracted from dileptons within the same experiment. | Christian Kummer, Kai Gallmeister, Lorenz von Smekal | 2023-09-16T16:45:29Z | http://arxiv.org/abs/2309.09042v2 |
# Medium modification of pion production in low energy Au+Au collisions
###### Abstract
There is a major mismatch between the charged pion yields in Au+Au collisions at low energies calculated by various transport models and the experimentally measured values from the HADES collaboration. In this work, reasonable improvements to the equation of state, in-medium modification of cross sections, and the influence of the nuclear potential for \(\Delta\) resonances will be investigated in the framework of the GiBUU transport model. As a result, we demonstrate that theoretical calculations can indeed describe the charged pion yields measured by HADES for Au+Au collisions rather well, but that a mismatch then remains between calculations and data for the yields of neutral pions extracted from dileptons within the same experiment.
## I Introduction
In the paper by Adamczewski-Musch _et al._[1] the HADES collaboration showed that transport models systematically overpredict measured pion yields. Every transport code overshot the rapidity and \(p_{t}\) spectra by nearly a factor of 2. Such a failure has been known for almost 30 years [2; 3; 4; 5; 6; 7]. This is disturbing for a few reasons. Due to their low mass, pions play an important role in many theoretical models [8]. Assuming two flavors of massless quarks, for example, QCD exhibits a chiral symmetry which, when spontaneously broken, gives rise to three massless Nambu-Goldstone bosons. These are typically identified with the pions to explain why they are the lightest hadrons, and hence are commonly produced more abundantly than others in heavy-ion collisions. Thus pion production is important from an experimental as well as a theoretical point of view. For the system considered by HADES in Ref. [1], Au+Au at an incident energy of \(E_{\mathrm{kin}}=1.23\,A\,\)GeV, a mismatch between measured and calculated pion multiplicities of not quite a factor of two, but at the \(50\,\%\) level, was reported in Ref. [9]. Nevertheless, one has to require that transport models reproduce the most abundant particles better than that. It is therefore most important to develop a framework with some theoretical footing that can describe the HADES findings.
As the conventional assumptions and methods typically employed in transport codes failed to reproduce the data, new and creative approaches have to be adopted. Indeed, Godbey _et al._[10] found a prescription to reproduce the correct pion numbers, whereby a density-dependent suppression was applied to the cross section for each charged pion. While this method is without a doubt successful, some questions remain open. For one, it is unclear why an isospin-dependent factor needs to be introduced in an otherwise isosymmetric theory. Furthermore, the exponential suppression itself is an instrument to artificially lower hadron numbers. Its theoretical foundations need to be studied, but for now its phenomenological effectiveness will be appreciated by us as well.
Another approach lies in the modification of potentials and cross sections. The argument for these implementations comes from the assumption that the emergence of scalar and vector fields in bulk matter changes the behaviour of hadrons compared to the vacuum. Admittedly, some of the in-medium modifications performed are controversial throughout the community, but they have some backing in the literature.
Furthermore, we argue in favor of using the Liu equation of state instead of the commonly used NL2 by Lang. This claim is supported not just by our results, but also by the astrophysical constraints obtained from neutron star mergers.
The structure of this letter is as follows. In section II the relativistic mean-field method of the Giessen Boltzmann-Uehling-Uhlenbeck (GiBUU) model is explained. Most information about it can be found in the review paper Ref. [11]; however, it seems appropriate to elaborate on the assumptions and implementations made in this project. A focus is then given to the equation of state by Liu _et al._[12] and to the in-medium modifications. How the modifications affect particle production and baryon densities is explored in section III. Then in section IV a summary of the results shows how the different modifications compare to experiments. These are the FOPI experiment for proton and pion data, and HADES, where the pions and dielectrons are examined. A summary of our findings is given in section V.
## II Model
### Rmf
In the present work, the propagation of particles is carried out using a relativistic mean field (RMF) for the
equations of motion. Its workings are elaborated here, as it is less common than non-relativistic Skyrme-like potentials typically used in heavy-ion collisions. More details can be found in the GiBUU review paper [11]. The Lagrangian is given by
\[{\cal L}=\overline{\psi}[\gamma_{\mu}(i\partial^{\mu}-g_{\omega}\omega^{\mu}-g_{ \rho}\mathbf{\tau}\mathbf{\rho}^{\mu}-\frac{e}{2}(1+\tau^{3} )A^{\mu})-m_{N}-g_{\sigma}\sigma]\psi+\frac{1}{2}\partial_{\mu}\sigma\partial^ {\mu}\sigma-U(\sigma)+\frac{1}{2}m_{\omega}^{2}\omega^{2}+\frac{1}{2}m_{\rho}^{ 2}\mathbf{\rho}^{2}-\frac{1}{16\pi}F_{\mu\nu}F^{\mu\nu}\;, \tag{1}\]
with the Dirac-spinor \(\psi\), and the fields \(\sigma\), \(\omega\), and \(\rho\). The meson-nucleon couplings \(g_{\omega},g_{\rho},g_{\sigma}\) and the meson mass \(m_{\sigma}\) in the Lagrangian eq. (1) are determined by the equation of state (EOS) at hand, while the masses \(m_{\rho},m_{\omega},m_{N}\) of the rho meson, omega meson and nucleon are fixed at their conventional values. In the usual notation, \(\tau\) denotes the Pauli matrices, \(A^{\mu}\) is the electromagnetic field and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) the corresponding field tensor. The scalar field self-interaction is given by
\[U(\sigma)=\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{1}{3}g_{2}\sigma^{3}+ \frac{1}{4}g_{3}\sigma^{4}\;, \tag{2}\]
with additional coefficients \(g_{2}\) and \(g_{3}\). The Lagrangian eq. (1) leads to the equations of motion for the Dirac-spinor \(\psi\),
\[\Big{[}\gamma_{\mu}\Big{(}i\partial^{\mu}-g_{\omega}\omega^{\mu} -g_{\rho}\mathbf{\tau}\mathbf{\rho}^{\mu}-\frac{e}{2}(1+\tau ^{3})A^{\mu}\Big{)}\] \[-m_{N}-g_{\sigma}\sigma\Big{]}\psi=0\;, \tag{3}\]
for the isoscalar-scalar field \(\sigma\),
\[\partial_{\mu}\partial^{\mu}\sigma+\frac{\partial U(\sigma)}{\partial\sigma}= -g_{\sigma}\rho_{S}\;, \tag{4}\]
for the isoscalar-vector field \(\omega\),
\[m_{\omega}^{2}\omega^{\nu}=\ g_{\omega}j_{b}^{\nu}\;, \tag{5}\]
for the isovector-vector field \(\rho\),
\[m_{\rho}^{2}\mathbf{\rho}^{\nu}=\ g_{\rho}j_{I}^{\nu}\;, \tag{6}\]
and for the electromagnetic potential \(A\),
\[\partial_{\mu}\partial^{\mu}A^{\nu}=\ 4\pi ej_{c}^{\nu}\;. \tag{7}\]
To be clear, there is no spinor implemented in GiBUU as in eq. (3). Instead, GiBUU uses the particle distribution functions \(f_{i}(x,\mathbf{p})\) for this evaluation. These \(f\) further define the right-hand sides of the equations of motion, which are the scalar density (for \(p^{*}\) and \(m^{*}\) see below),
\[\rho_{S}=\frac{g}{(2\pi)^{3}}\sum_{i=p,n,\bar{p},\bar{n}}\int\frac{\mathrm{d}^{3}p}{p_{i}^{*0}}\,m_{N}^{*}\,f_{i}(x,\mathbf{p})\,, \tag{8}\]

the baryon density,

\[j_{b}^{\mu}=\frac{g}{(2\pi)^{3}}\left(\sum_{i=p,n}-\sum_{i=\bar{p},\bar{n}}\right)\int\frac{\mathrm{d}^{3}p}{p_{i}^{*0}}\,p_{i}^{*\mu}\,f_{i}(x,\mathbf{p})\,, \tag{9}\]

the isospin current,

\[j_{I}^{\mu}=\frac{g}{(2\pi)^{3}}\sum_{i=p,n,\bar{p},\bar{n}}\int\frac{\mathrm{d}^{3}p}{p_{i}^{*0}}\,p_{i}^{*\mu}\,\boldsymbol{\tau}\,f_{i}(x,\mathbf{p})\,, \tag{10}\]
and the charge current,
\[j_{c}^{\mu}=\frac{1}{2}(j_{b}^{\mu}+j_{I}^{3,\mu}). \tag{11}\]
Here \(g\) accounts for the spin degeneracy and has the numerical value 2. In principle there should also be partial derivatives of \(\omega\) in eq. (5), but in practice they are neglected, as it is a short-ranged field. Thus the \(\omega\)-field is proportional to the baryon current, just as stated in eq. (5). Derivatives in eq. (6) are neglected for the same reason, as are the isospin-mixed nucleon states. Hence the first two isospin components are set to zero, i.e. \(j_{I}^{1,\nu}=j_{I}^{2,\nu}=\rho^{1,\nu}=\rho^{2,\nu}=0\), and only \(\rho^{3,\nu}\) and \(j_{I}^{3,\nu}\) need to be calculated. Like \(\omega\), the \(\rho\)-field is proportional to a current. Finally, in the last equation the electromagnetic current \(j_{c}^{\nu}\) is related to the electromagnetic field. Useful kinematic quantities are the kinetic four-momentum
\[p^{*}=p-V \tag{12}\]
and the effective nucleon mass
\[m_{N}^{*}=m_{N}+S. \tag{13}\]
The scalar and vector fields are given by
\[V^{\nu}= g_{\omega}\omega^{\nu}+g_{\rho}\tau^{3}\rho^{3,\nu}+\frac{e}{2}(1+ \tau^{3})A^{\nu}\,, \tag{14}\] \[S= g_{\sigma}\sigma\,. \tag{15}\]
By assuming a plane wave solution to the Dirac equation, eq. (3), one obtains the dispersion relation for the nucleons in the form
\[(p^{*})^{2}\,-\,(m_{N}^{*})^{2}\,=\,0\,. \tag{16}\]
Although the above discussion was only for nucleons, this RMF prescription will be applied to all baryons. The Coulomb potential is effective for all particle species.
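As a small numerical illustration of eqs. (12), (13) and (16), the sketch below builds the kinetic four-momentum and the effective mass and solves the dispersion relation for the single-particle energy. The scalar and vector field values are illustrative placeholders, not output of any GiBUU run.

```python
import numpy as np

m_N = 0.938      # bare nucleon mass [GeV]
S = -0.350       # scalar field S = g_sigma*sigma [GeV] (assumed value)
V = np.array([0.300, 0.0, 0.0, 0.0])  # vector field V^mu [GeV] (assumed)

p_vec = np.array([0.2, 0.1, 0.0])     # canonical three-momentum [GeV]

m_star = m_N + S                      # effective (Dirac) mass, eq. (13)
p_star_vec = p_vec - V[1:]            # kinetic three-momentum, eq. (12)

# Dispersion relation (p*)^2 - (m*)^2 = 0, eq. (16), fixes p*^0:
p_star_0 = np.sqrt(m_star**2 + p_star_vec @ p_star_vec)
p_0 = p_star_0 + V[0]                 # canonical single-particle energy

print(f"m* = {m_star:.3f} GeV, p*0 = {p_star_0:.3f} GeV, p0 = {p_0:.3f} GeV")
```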
### Liu EOS
An EOS relates observables to the nuclear density. All EOSs are fixed at a binding energy per nucleon of \(-16\,\)MeV at the saturation density \(\rho_{0}=0.16\,\)fm\({}^{-3}\). In other regards they can and do differ significantly. This can be explained by the theoretical backgrounds of the researchers responsible for them. For example, to study heavy-ion collisions A. Lang _et al._[13] devised parameter sets suitable for this task, while G.A. Lalazissis _et al._[14] needed a different set to work on nuclear structure models, and B. Liu _et al._[12] investigated the influence of isovector scalar fields with yet another EOS. Hence the EOSs by Lang are typically more applicable for a dynamic description of heavy-ion collisions, while those by Lalazissis deal better with nuclear ground states. The EOS of Liu _et al._ is also used by Godbey _et al._[10].
Astronomical observations of neutron star mergers place constraints on the EOS; an extensive review summarizing them was given by Li _et al._[15]. For our purposes the most relevant work is a study by Xie _et al._[16], where bounds on possible potentials, derived from those astronomical constraints, are provided. This is shown in fig. 1, where the binding energy per nucleon and the effective mass of the nucleons are given as functions of the normalized density for various EOSs. The constraints apply to the binding energy, where a band shows the behaviour of the EOS allowed by astronomical observations. Right in the middle lies the EOS by Liu, which we will consider from now on. NL2 by Lang, commonly used in transport codes, has a similar binding energy and lies at the very edge of the band, but its ratio \(m^{*}/m\) drops off much more weakly than for the Liu EOS. In cases where there is a considerable disagreement between results obtained with Liu and NL2 by Lang, we will show it.
The coefficients from eq. (1) required to reproduce the EOSs by Liu and Lang are given in table 1.
### Cross Section Modification
As mentioned above, transport-theoretical calculations overestimate the experimentally measured pion yields in heavy-ion collisions. There are multiple handles to change this.
#### ii.3.1 Exponential Suppression
One way to reduce pion yields is the exponential suppression of cross sections in the spirit of Refs [10; 17]. Here the \(NN\to N\Delta\) and \(NN\to NN\pi\) cross sections are multiplied by a factor
\[f=\exp\bigg{(}-\alpha\left(\frac{\rho}{\rho_{0}}\right)^{\beta}\bigg{)}\, \tag{17}\]
with constants \(\alpha\) and \(\beta\), where the factor \(\beta\) was introduced here to obtain a more general expression. The underlying concept is that the pion distributions can be described with this density-dependent suppression. In our investigation we found that leaving \(\beta\) at the default value 1 is sufficient; the data can be described well with the factor \(\alpha\) alone. To maintain detailed balance, the factor \(f\) is also included in the cross section of the back reaction \(N\Delta\to NN\) and in the rate of pion absorption by two nucleons, \(\pi NN\to NN\). Hereby two dominant channels of pion production are affected. This method of medium modification is not perfect, as pions then get produced at increased rates by other processes, mainly \(NN\to NR\), where \(R\) is a resonance, but it is an effective method to reduce the overall pion numbers.
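As an illustration (this is not GiBUU code), the factor of eq. (17) can be evaluated in a few lines of Python; \(\alpha=2.0\) and \(\beta=1\) are the values used later in this work, while the sample densities are arbitrary.

```python
import numpy as np

RHO_0 = 0.16  # nuclear saturation density [fm^-3]

def suppression_factor(rho, alpha=2.0, beta=1.0):
    """Multiplicative in-medium factor f = exp(-alpha*(rho/rho_0)^beta),
    eq. (17), applied to the NN <-> N Delta and NN <-> NN pi channels."""
    return np.exp(-alpha * (rho / RHO_0) ** beta)

for x in [0.0, 1.0, 2.0, 2.8]:  # rho/rho_0
    print(f"rho/rho0 = {x:.1f} -> f = {suppression_factor(x * RHO_0):.4f}")
```

At the central densities reached in the collision, \(\rho\approx 2.5\)-\(2.8\,\rho_{0}\) (see section III), this factor drops below one percent, consistent with the near-complete shutdown of the modified channels observed there.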
#### ii.3.2 Effective Mass
A second possibility for medium modifications is given by the effective mass.
To calculate heavy-ion collisions effectively, one needs to incorporate cross sections for the myriad of different events. One can derive from QFT that the differential cross sections in vacuum are given by
Figure 1: The binding energy per nucleon and the effective mass as a function of nuclear density for various EoSs [12; 13; 14]. The red band in the plot of the binding energies shows the estimates of the study by Xie _et al._[16].
\[\mathrm{d}\sigma_{12\to 1^{\prime}2^{\prime}\ldots N^{\prime}}=(2\pi)^{4}\delta^{(4)} \left(p_{1}+p_{2}-\sum_{i=1^{\prime}}^{N^{\prime}}p_{i}\right)\frac{n_{1}n_{2} \prod_{i=1^{\prime}}^{N^{\prime}}n_{i}}{4I_{12}}\overline{|\mathfrak{M}_{12\to 1 ^{\prime}2^{\prime}\ldots N^{\prime}}|^{2}}\ \mathcal{S}_{1^{\prime}2^{\prime}\ldots N^{ \prime}}\prod_{i=1^{\prime}}^{N^{\prime}}A_{i}(p_{i})\frac{\mathrm{d}^{4}p_{i }}{(2\pi)^{3}2p_{i}^{0}}\, \tag{18}\]
where the symmetry factor
\[\mathcal{S}_{ab}=\begin{cases}1&\text{if $a$ and $b$ are not identical},\\ \frac{1}{2}&\text{if $a$ and $b$ are identical}\,,\end{cases} \tag{19}\]
takes into account that one cannot differentiate between two elementary particles of the same kind and
\[I_{12}\,=\,\sqrt{(p_{1}p_{2})^{2}-(m_{1}m_{2})^{2}} \tag{20}\]
is the flux factor. The spin-averaged spectral function is defined as
\[A(x,p)=-\frac{1}{g\pi}\mathrm{tr}\Big{[}\mathrm{Im}(\tilde{S}^{\mathrm{ret}} (x,p))\gamma^{0}\Big{]} \tag{21}\]
with Dirac matrix \(\gamma^{0}\) and the retarded Green's function \(\tilde{S}^{\mathrm{ret}}(x,p)\). \(\mathfrak{M}\) is the matrix element used in the Bjorken-Drell convention, which is related to the PDG convention by
\[\mathcal{M}_{if}=\mathfrak{M}_{if}\prod_{j}\sqrt{n_{j}}\,\quad n_{j}=\begin{cases}1& \text{$j$ is boson},\\ 2m_{j}&\text{$j$ is fermion}\.\end{cases} \tag{22}\]
Having said that, it is important to keep in mind that not every cross section can be calculated from first principles in this way. Many have to be taken from experiment, and many arise from educated guesses, as there is not enough data or underlying theory to proceed otherwise.
For the processes under investigation, particle number and density are no longer close to their vacuum values. Hence it is plausible to allow some in-medium modifications. There are many options for applying these modifications; the prescription described here is the cross-section modification via the effective mass. The in-medium cross section can be given by [11]
\[\mathrm{d}\sigma_{12\to 1^{\prime}2^{\prime}\ldots N^{\prime}}^{*}=(2\pi)^{4} \delta^{(4)}\left(p_{1}+p_{2}-\sum_{i=1^{\prime}}^{N^{\prime}}p_{i}\right) \frac{n_{1}^{*}n_{2}^{*}\prod_{i=1^{\prime}}^{N^{\prime}}n_{i}^{*}}{4I_{12}^{* *}}\overline{|\mathfrak{M}_{12\to 1^{\prime}2^{\prime}\ldots N^{\prime}}|^{2}}\ \mathcal{S}_{1^{\prime}2^{\prime} \ldots N^{\prime}}\prod_{i=1^{\prime}}^{N^{\prime}}A_{i}(p_{i})\frac{ \mathrm{d}^{4}p_{i}}{(2\pi)^{3}2p_{i}^{*0}}\, \tag{23}\]
with the in-medium flux factor
\[I_{12}^{*}\,=\,\sqrt{(p_{1}^{*}p_{2}^{*})^{2}-(m_{1}^{*}m_{2}^{*})^{2}}\,. \tag{24}\]
As one can see, eq. (23) is almost identical to eq. (18), except that effective masses and momenta appear. (\(n_{j}^{*}\) is defined like \(n_{j}\), but with the effective mass.) In fact, this is why the Bjorken-Drell convention is used: the matrix element does not need to be modified in medium; only the other factors change. However, the delta function must contain the actual four-momenta of the particles, as required by energy and momentum conservation.
The Bjorken-Drell convention of the matrix element leads to another problem, as it depends on the free c.m. energy of the system and thus on the free four-momenta of the colliding particles, while only the kinetic four-momenta are available in the medium. In other words, the existing potentials have to be taken into account when calculating the available energy of the system. Thus the c.m. energy has to be modified as well. One option, which many transport codes (including GiBUU in Skyrme-like mode) use for hadron-hadron reactions, is the so-called free c.m. energy
\[s_{\mathrm{free}}\,=\,(p_{1,\mathrm{free}}+p_{2,\mathrm{free}})^{2} \tag{25}\]
with free momenta
\[p_{\mathrm{free}}=(\sqrt{m^{2}+\mathbf{p}^{2}},\mathbf{p})\,. \tag{26}\]
This prescription assumes that the potential acts as a background field, which does not affect reaction rates.
| EOS | K (MeV) | \(m_{N}^{*}/m_{N}\) | \(g_{\sigma}\) | \(g_{\omega}\) | \(g_{\rho}\) | \(g_{2}\) (GeV) | \(g_{3}\) | \(m_{\sigma}\) (MeV) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Liu | 240 | 0.75 | 8.958 | 9.238 | 3.769 | -4.681 | -30.909 | 550 |
| Lang, NL2 | 210 | 0.83 | 8.5 | 7.54 | 0.0 | -9.939 | -6.26 | 550.5 |

Table 1: Parameter sets for the Liu and NL2 (Lang) EOSs and their saturation properties in terms of the compression modulus K and the effective mass \(m_{N}^{*}\) in units of the bare nucleon mass \(m_{N}\).
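For convenience, the parameter sets of table 1 can be collected in a small Python structure together with the scalar self-interaction \(U(\sigma)\) of eq. (2). This is only a bookkeeping sketch; the sample \(\sigma\)-field amplitude is an assumed number for illustration.

```python
# Parameter sets of table 1 (g2 and m_sigma expressed in GeV).
EOS_PARAMS = {
    "Liu":      dict(g_sigma=8.958, g_omega=9.238, g_rho=3.769,
                     g2=-4.681, g3=-30.909, m_sigma=0.550),
    "Lang NL2": dict(g_sigma=8.5,   g_omega=7.54,  g_rho=0.0,
                     g2=-9.939, g3=-6.26,   m_sigma=0.5505),
}

def U(sigma, p):
    """Scalar self-interaction of eq. (2) in natural units:
    U = m_sigma^2 sigma^2 / 2 + g2 sigma^3 / 3 + g3 sigma^4 / 4."""
    return (0.5 * p["m_sigma"]**2 * sigma**2
            + p["g2"] * sigma**3 / 3.0
            + p["g3"] * sigma**4 / 4.0)

sigma = 0.03  # illustrative sigma-field amplitude [GeV] (assumed)
for name, p in EOS_PARAMS.items():
    print(f"{name}: U(sigma={sigma} GeV) = {U(sigma, p):.3e} GeV^4")
```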
A drawback of this method is the disregard of the in-medium threshold for particle production. Consequently a different prescription is used, namely
\[\sqrt{s_{\rm free}}\,=\,\sqrt{s^{*}}-(m_{1}^{*}-m_{1})-(m_{2}^{*}-m_{2}) \tag{27}\]
with \(s^{*}=(p_{1}^{*}+p_{2}^{*})^{2}\). To show that this prescription conserves the in-medium threshold, one starts from the assumption that the sum of the vector fields \(V\) stays constant before and after the collision, meaning \(V_{1}+V_{2}=\sum_{i=1^{\prime}}^{N^{\prime}}V_{i}\). Then one can replace the four-momenta in the delta function of eq. (23) by the kinetic momenta. This is done in the GiBUU code mainly for technical reasons. Accordingly, the cross section becomes proportional to the \(N\)-body phase-space volume element,
\[{\rm d}\sigma^{*}_{12\to 1^{\prime}\ldots N^{\prime}}\propto{\rm d}\Phi_{N}(p_{1} ^{*}+p_{2}^{*};p_{1^{\prime}}^{*},\ldots,p_{N^{\prime}}^{*})\,, \tag{28}\]
when we assume the outgoing particles are on their Dirac mass shell defined in eq. (16). The phase space volume element is defined by
\[{\rm d}\Phi_{N}(P;p_{1},\ldots,p_{N})=\frac{{\rm d}^{3}\mathbf{p}_{1}}{(2\pi)^{3}2p_{1}^{0}}\cdots\frac{{\rm d}^{3}\mathbf{p}_{N}}{(2\pi)^{3}2p_{N}^{0}}\,\delta^{(4)}(P-p_{1}-\cdots-p_{N})\,. \tag{29}\]
Now one can define the in-medium excess energy \(Q^{*}=\sqrt{s^{*}}-\sum_{i=1^{\prime}}^{N^{\prime}}m_{i}^{*}\) and the in-medium threshold condition
\[Q^{*}>0 \tag{30}\]
follows immediately from eq. (28). Hence it makes sense to define the in-medium invariant energy as
\[\sqrt{s_{\rm free}}=Q^{*}+\sum_{i=1^{\prime}}^{N^{\prime}}m_{i}=\sqrt{s^{*}}- \sum_{i=1^{\prime}}^{N^{\prime}}(m_{i}^{*}-m_{i})\,. \tag{31}\]
Nevertheless, this last equation is hard to compute, as the final particles are not known initially, but produced during the collision. To overcome this problem another assumption is made, namely that the sum of the initial scalar fields equals the sum of the final scalar fields, e.g. \(S_{1}+S_{2}=\sum_{i=1^{\prime}}^{N^{\prime}}S_{i}\). This allows one to rewrite eq. (31) into eq. (27). The assumptions made in this derivation, namely \(V_{1}+V_{2}=\sum_{i=1^{\prime}}^{N^{\prime}}V_{i}\) and \(S_{1}+S_{2}=\sum_{i=1^{\prime}}^{N^{\prime}}S_{i}\), are usually fulfilled in the present simulations, as all baryon-meson couplings are set equal to the nucleon-meson coupling and the action of the mean field on mesons is neglected. Cases in which the assumptions do not hold are always explicitly stated and explained below.
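A minimal numerical sketch of the threshold-conserving prescription, eq. (27), under the stated field assumptions; the kinetic momenta and the common effective mass below are placeholders chosen only to demonstrate the bookkeeping.

```python
import numpy as np

def minkowski_sq(p):
    """(p^0)^2 - |p_vec|^2 for a four-vector p = (p0, px, py, pz)."""
    return p[0]**2 - p[1:] @ p[1:]

def sqrt_s_free(p1_star, p2_star, m1_star, m1, m2_star, m2):
    """Free c.m. energy of eq. (27):
    sqrt(s_free) = sqrt(s*) - (m1* - m1) - (m2* - m2)."""
    s_star = minkowski_sq(p1_star + p2_star)
    return np.sqrt(s_star) - (m1_star - m1) - (m2_star - m2)

m_N, m_star = 0.938, 0.700   # bare and effective nucleon mass [GeV] (assumed)
# Kinetic four-momenta on the Dirac mass shell, eq. (16):
p1 = np.array([np.sqrt(m_star**2 + 0.5**2), 0.5, 0.0, 0.0])
p2 = np.array([m_star, 0.0, 0.0, 0.0])

print(f"sqrt(s_free) = {sqrt_s_free(p1, p2, m_star, m_N, m_star, m_N):.3f} GeV")
```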
All in all it is now possible to calculate the in-medium cross sections for different final states of the baryon-baryon collisions. If the final particles are assumed to be on their mass shell, the relation between in-medium and vacuum cross sections is given by
\[\sigma^{*}_{12\to 1^{\prime}2^{\prime}\ldots N^{\prime}}(\sqrt{s^{*}})={ \cal F}\,\sigma^{\rm vac}_{12\to 1^{\prime}2^{\prime}\ldots N^{\prime}}( \sqrt{s_{\rm free}}) \tag{32}\]
with the free center of mass (c.m.) energy given by eq. (27) and the modification factor
\[{\cal F}=\frac{n_{1}^{*}n_{2}^{*}n_{1^{\prime}}^{*}\ldots n_{N^{\prime}}^{*}} {n_{1}n_{2}n_{1^{\prime}}\ldots n_{N^{\prime}}}\frac{I_{12}}{I_{12}^{*}}\frac{ \Phi_{N^{\prime}}(\sqrt{s^{*}};m_{1^{\prime}}^{*},\ldots,m_{N^{\prime}}^{*})} {\Phi_{N^{\prime}}(\sqrt{s_{\rm free}};m_{1^{\prime}},\ldots,m_{N^{\prime}})}\,. \tag{33}\]
It contains all terms of eq. (23) which do depend on the kinetic momenta and effective mass. The \(N\)-body phase space volume is the integration of the infinitesimal element eq. (29),
\[\Phi_{N}(M;m_{1},...,m_{N})=\int{\rm d}\Phi_{N}(P;p_{1},...,p_{N})\, \tag{34}\]
with the mass shell condition \(p_{i}^{2}=m_{i}^{2}\) and \(P^{2}=M^{2}\). In the case of baryon-baryon scattering, eq. (33) takes the form
\[{\cal F}\propto\frac{m_{1}^{*}m_{2}^{*}}{m_{1}m_{2}}\ \prod_{i=1^{\prime}}^{N^{ \prime}}\frac{m_{i}^{*}}{m_{i}} \tag{35}\]
which shows that the ratio of the effective (Dirac) mass to the bare mass is of central importance for the cross-section modification.
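For a two-body final state such as \(NN\to N\Delta\), the leading mass-ratio scaling of eq. (35) can be sketched as below; the common ratio \(m^{*}/m=0.75\) (the Liu value at saturation density, table 1) is assumed here for the nucleons and the \(\Delta\) alike, and this is only the scaling, not the full factor \(\mathcal{F}\).

```python
def mass_ratio_scaling(ratios_in, ratios_out):
    """Leading scaling of the in-medium modification factor, eq. (35):
    the product of m*/m over all incoming and outgoing baryons."""
    prod = 1.0
    for r in list(ratios_in) + list(ratios_out):
        prod *= r
    return prod

r = 0.75  # m*/m at saturation density for the Liu EOS (table 1)
print(f"NN -> N Delta scaling ~ {mass_ratio_scaling([r, r], [r, r]):.3f}")
```

Already at saturation density this product is about 0.32, which illustrates why the Liu EOS, with its faster-dropping effective mass, suppresses these channels more strongly than NL2 by Lang.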
Not all cross sections need to be modified. There is some theoretical backing [18; 19; 20] for modifying only the inelastic cross sections and leaving the elastic channels unmodified. Assuming that the \(NN\leftrightarrow N\Delta\) and \(NN\leftrightarrow NN\pi\) reactions dominate the inelastic collisions, these cross sections were modified, as in section II.3.1. Cross sections involving higher resonances can be left unmodified, as it can be assumed that they are governed by a short-range interaction (heavy-meson exchange) and, thus, in-medium effects can be neglected.
#### ii.3.3 Delta Potential Modifications
Another aspect that possibly needs to be refined is the baryon potential. The free c.m. energy of eq. (27) can be rewritten into
\[\sqrt{s_{\rm free}}=\sqrt{s^{*}}-S_{\rm fin}\, \tag{36}\]
where \(S_{\rm fin}\) is the sum of scalar potentials acting on the final particles. \(\sqrt{s^{*}}\) is the invariant energy of the colliding particles including their vector potentials. According to phenomenological observation [21] the potential energy of the delta is
\[U_{\Delta}\simeq-30\,{\rm MeV}\simeq\frac{2}{3}\,U_{N}\,. \tag{37}\]
Thus by reducing the scalar and vector potentials of the \(\Delta(1232)\) to two thirds of the nucleon's potential, the production of deltas is penalized compared to the default of using the nucleon's potential. In the spirit of Kosov _et al._[22], these modifications are applied by changing the couplings to the scalar and vector fields. Subsequently the ratios
\[r_{S}=\frac{g_{S}^{\prime}}{g_{S}}\ \ \text{and}\ \ r_{V}=\frac{g_{V}^{\prime}}{g_{V}} \tag{38}\]
are defined, where \(g_{S}\) is the nucleon's coupling to the scalar field \(S\), \(g_{V}\) the nucleon's coupling to the vector field \(V\), and \(g_{S}^{\prime}\) and \(g_{V}^{\prime}\) are the corresponding couplings of the \(\Delta\).
Unfortunately, there is an issue with this prescription. As mentioned above, the cross sections are not really computed by eq. (23); instead, the Dirac delta function evaluates the kinetic 4-momentum \(p^{*}\) for technical reasons. This causes no problems when \(r_{V}=1\), but changing this coupling does violate momentum conservation for the \(\Delta\). The question of how to quantify this effect is left for future studies. Having said that, it is also possible to modify only the scalar potential, but then the condition eq. (37) requires \(r_{S}=0.878\) to keep \(U_{\Delta}\simeq-30\,\text{MeV}\). We will not discuss this possibility in this work.
## III Model predictions
In our simulations we consider the EOS by Liu with vacuum cross sections and with in-medium modifications. In the figures shown below, the label "\(\alpha=2.0\)" denotes the exponential suppression with the factor \(\alpha=2.0\) from eq. (17). Modifications via the Dirac mass with the prescription defined in section II.3.2 for the processes \(NN\leftrightarrow N\Delta\) and \(NN\leftrightarrow NN\pi\) are denoted by "\(\sigma^{*}_{\Delta,\pi}\)". The label "\(r_{S,V}\)" stands for the modified coupling to the scalar and vector fields as described in section II.3.3, with the default value \(r_{S,V}=\frac{2}{3}\). We are interested in how the medium modifications alter nuclear densities and collision rates. Thus a few simulations at the HADES incident energy of \(1.23\,A\)GeV with zero impact parameter were run. Here we expect the largest impact of the modifications, as less central events have fewer participants and more spectators and would thus water down the results.
The total number of nucleons, deltas and pions is shown in fig. 2. The number of nucleons decreases through collisions, as other hadrons are produced by inelastic processes, before back reactions and particle decays lead to an increase of the nucleon numbers again. Deltas are the resonances with the lowest mass and are produced more abundantly than others. Further, they are of great importance to pion production, so they deserve some investigation. As they are not present in the nuclei at the beginning, they have to be created during collisions. Thus their numbers increase starting from zero, until decays and back reactions remove almost all \(\Delta\)'s by the end of the simulation. The medium modifications are effective and lower the production, and consequently the yield, of deltas. The nucleon numbers also reflect this behaviour: if fewer \(\Delta\)'s are produced, which after all is the most dominant resonance, more nucleons remain and the dip in fig. 2 is less pronounced. However, the difference gets somewhat mitigated by the enhanced production of higher resonances. For the time frames considered in heavy-ion collisions, pions can be assumed to be stable particles, which can only disappear through inelastic collisions. Their production vastly outweighs such processes, so it is no surprise that the pion numbers only increase over time. Just as for the deltas, the modifications lower the yields. Furthermore, they change the timing of delta and pion
Figure 2: Particle numbers as a function of time. Note the different scales for different particles and how the \(y\)-axis for the nucleons does not start at zero.
formation. Compared to the modification via effective mass and potential the exponential suppression has its delta peak delayed by \(1.6\,\mathrm{fm}/c\). This is due to the pion capture \(\pi N\to\Delta\), so the pion numbers are decreased in favor of the deltas. The modification of the potential coupling makes it less likely for deltas to form, so the pion capture is slightly hindered. In the end deltas decay into pions, \(\Delta\to\pi N\), and the exponential suppression gives a \(2.6\,\mathrm{fm}/c\) delay for pion numbers compared to the other modification.
The central baryon density is calculated at the center of mass of the system. So at the start of the simulations the nuclei are separated and the density is far below the nuclear density of \(\rho_{0}=0.16\,\mathrm{fm}^{-3}\). As the nuclei collide the density increases rapidly up to a maximum at \(t=12\,\mathrm{fm}/c\), when the centers of mass from both nuclei overlap. Then the density decreases until it asymptotically reaches zero. This process is depicted in fig. 3.
Note that the density is not symmetric about the maximum, due to stopping, which slows particles down and causes the expansion to be slower than the compression. The medium modifications lower the central density significantly, from \(2.82\,\rho/\rho_{0}\) without modifications to \(2.54\,\rho/\rho_{0}\) with exponential suppression, a \(10\,\%\) decrease. An explanation lies in the reduced cross sections due to the modifications, which cause less stopping to occur. Less stopping means that the nuclei become more transparent for the nucleons in the reaction, so they move past each other instead of accumulating in the central region and increasing the density. The effects of the medium modifications on stopping are discussed in the section below as well.
At the beginning of the collision only nucleons are present; all other hadrons are produced in the reaction by collisions. So studying the collision rates is of utmost importance for understanding the results. The net collision rates, i.e. "reaction \(-\) back reaction", for the modified processes are shown in fig. 4.
Compared to vacuum cross sections, the medium modifications, both effective mass and exponential suppression, lower the production of \(\Delta\) and \(\pi\) significantly. The exponential suppression in fact almost completely shuts off these processes. Now the question arises how the total pion numbers can be similar between both modification scenarios, if the exponential suppression is so much stronger. The answer lies in the production of (heavier) resonances. As fig. 5 shows, both modifications increase the production of resonances higher than the \(\Delta\), which are labeled \(R\). This is a direct result of the modifications, as more nucleons are present to react, and the production of resonances becomes more likely when the previously dominant reaction channel diminishes. The double-delta production is more interesting. The effective mass and coupling modification hinders this reaction again, but this is entirely due to the altered coupling. The effective mass modification alone increases the production rate slightly, though not nearly as much as the exponential suppression. The reason for the increase is the same as for the resonance production: the lowered collision rate in the channel \(NN\to N\Delta\) leads to an increase in \(NN\to\Delta\Delta\). Of course this is unphysical,
Figure 3: Central baryon density in units of the nuclear density \(\rho_{0}=0.16\,\mathrm{fm}^{-3}\) throughout the collision. The vertical line at \(t=12\,\mathrm{fm}/c\) corresponds to the moment of overlap of both nuclei, thus the moment of maximum density.
as the double delta production must be lower than the single delta production, so this is a point where the code must be improved in the future. But for our purposes in this paper this is sufficient.
## IV Comparison with experimental data
We compare our calculations to two experiments, namely HADES [1] and FOPI [23; 24]. The pion yields from HADES started our investigation, and their dielectrons are also a good benchmark test for our model. The FOPI collaboration measured pions and protons; the latter provide an excellent means to calibrate mean-field potentials.
### Nucleons and Pions from FOPI
In the papers by Reisdorf _et al._[23; 24] data for nucleons and pions obtained by the FOPI collaboration are given. In the experiment, \({}^{197}\)Au projectiles were collided with stationary \({}^{197}\)Au targets with a kinetic energy of \(400\,\mathrm{MeV}\) or \(1.5\,\mathrm{GeV}\) per nucleon, respectively. The FOPI measurements are very valuable, because they include cumulated protons instead of clusters like deuterons or helium nuclei, which GiBUU cannot produce. Hence the predictions from our model are directly comparable to the data. The FOPI collaboration decided to present their data as functions of a longitudinal rapidity, \(y_{z}\), and a transversal rapidity, \(y_{x}\). The first one corresponds to the usual definition oriented along the beam axis, while the second one measures the rapidity along the transverse \(x\)-direction (in the laboratory) [24],
\[y\equiv y_{z}=\frac{1}{2}\log\left(\frac{E+p_{z}}{E-p_{z}}\right)\,,\quad y_{ x}=\frac{1}{2}\log\left(\frac{E+p_{x}}{E-p_{x}}\right)\,. \tag{39}\]
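Both variables of eq. (39) are straightforward to evaluate; a small sketch for a proton with an illustrative laboratory momentum:

```python
import numpy as np

def rapidity(E, p_axis):
    """Rapidity along a fixed axis, eq. (39): y = log((E+p)/(E-p))/2."""
    return 0.5 * np.log((E + p_axis) / (E - p_axis))

m_p = 0.938                    # proton mass [GeV]
p = np.array([0.2, 0.1, 0.6])  # illustrative lab momentum [GeV]
E = np.sqrt(m_p**2 + p @ p)

y_z = rapidity(E, p[2])        # longitudinal rapidity (beam axis)
y_x = rapidity(E, p[0])        # "transversal" rapidity (fixed lab x-axis)
print(f"y_z = {y_z:.3f}, y_x = {y_x:.3f}")
```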
The notion of the transversal rapidity might be misleading, as it refers neither to the transverse momentum \(p_{t}\) nor to the transverse reaction plane, but solely to a fixed laboratory \(x\)-direction which is perpendicular to the beam direction. In addition, the published data [23] are given in terms of the normalized rapidity \(y^{0}_{(z,x)}=y_{(z,x)}/y_{\mathrm{proj}}\), which is normalized by the projectile's rapidity in the
Figure 6: Proton rapidity spectra for the Liu EOS with in-medium modifications. Experimental data taken from [23]. The beam kinetic energy is \(0.4\,A\)GeV.
c.m. frame. Only central events with a scaled impact parameter up to \(b^{0}=b/b_{\rm max}=0.15\) were considered, equivalent to an impact parameter of up to \(b=2.0\,\)fm with \(b_{\rm max}=1.15(A_{P}+A_{T})^{\frac{1}{3}}\,\)fm being the conventional maximum impact parameter.
The rapidity spectra for protons are shown in fig. 6 and fig. 7. The description of the experimental data using the Liu EOS is fine, and the cross-section modifications according to the two different prescriptions both do not destroy the agreement. In fact, at midrapidity the agreement with the data along the longitudinal axis improves. Furthermore, the curves are less peaked at mid-rapidity. This implies that less stopping occurs in the collisions, which is to be expected as the cross sections are reduced by the medium modifications. The transversal spectra are (nearly) unaffected by the medium modifications used.
A more complex observable is the scaled directed flow, also called sideflow, as it was introduced by FOPI [23]. It is given by
\[p_{xdir}^{0}=\frac{p_{xdir}}{u_{1cm}}\quad,\qquad p_{xdir}=\frac{\sum{\rm sign }(y)Zu_{x}}{\sum Z}\, \tag{40}\]
where the sums run over all charged particles with \(Z<10\), excluding pions. Here \(u_{1cm}\) is the spatial part of the center of mass 4-velocity of the projectile, \(y\) the (longitudinal) rapidity and \(Z\) the charge of a fragment. \(u_{x}=\beta_{x}\gamma\) is the 4-velocity projected onto the reaction plane. In fig. 8 the sideflow is plotted against the scaled impact parameter and compared to the data.
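Per event, the observable of eq. (40) can be accumulated as in the following sketch; the fragment list (charge \(Z\), rapidity \(y\), transverse four-velocity \(u_{x}\) per particle) and the value of \(u_{1cm}\) are hypothetical, and pions as well as fragments with \(Z\geq 10\) are assumed to be excluded beforehand, as in the FOPI analysis.

```python
import numpy as np

def scaled_sideflow(fragments, u1cm):
    """Scaled directed flow p0_xdir of eq. (40).
    fragments: iterable of (Z, y, u_x) for charged fragments with Z < 10,
    pions excluded; u1cm: spatial part of the projectile c.m. 4-velocity."""
    num = sum(np.sign(y) * Z * u_x for Z, y, u_x in fragments)
    den = sum(Z for Z, _, _ in fragments)
    return (num / den) / u1cm

# Hypothetical event: (charge, rapidity, u_x) triplets.
event = [(1, +0.4, 0.12), (1, -0.3, -0.10), (2, +0.6, 0.20), (1, -0.5, -0.05)]
print(f"p0_xdir = {scaled_sideflow(event, u1cm=0.46):.3f}")
```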
For a kinetic energy of \(400\,\)MeV per nucleon the agreement is fine, and both methods of medium modification do not break it. At the higher energy of \(1.5\,A\)GeV, GiBUU slightly overpredicts the sideflow in vacuum; the modifications correct towards the data and are acceptable. The shift to more inelastic collisions explains why the modifications are more influential in altering the flow at higher energies. Consequently, the mean field can be studied more clearly at the lower energy, as inelastic collisions are less pronounced. Thus it is reassuring that Liu with and without modifications agrees well with the data at \(400\,A\)MeV. On the other hand, NL2 by Lang does not agree with the data at lower energies, and stays significantly underneath the Liu EOS, as can be seen in fig. 9. This is supported by earlier work [25],
Figure 8: Sideflow for Liu EOS with in-medium modifications. Experimental data taken from [23]. MUL and ERAT are the two prescriptions used by FOPI to determine the impact parameter.
Figure 7: Same as fig. 6, but at \(1.5\,A\)GeV.
where the influence of the incompressibility on transverse flow is disproved. Nara _et al._[26] also find that the effective mass has a stronger effect than the incompressibility, which is almost identical between Liu and NL2 by Lang. They argue this is because the flow is mostly governed by the stiffness at larger densities, which itself is closely related to the effective mass and barely dependent on the incompressibility [27]. Physically, the smaller effective mass requires a larger \(\omega\)-meson coupling and hence more repulsion is generated. This is not the case in our model, but we also include the \(\mathbf{\rho}\) coupling in our EOS, so it is reasonable that meson interactions lead to more repulsion.
The FOPI collaboration also published the transverse momentum spectra for pions [24] at \(1.5\,A\)GeV and \(b^{0}\leq 0.15\). Comparisons of calculations with these data are shown in fig. 10. Calculations using the Liu EOS agree reasonably well with the data at \(p_{t}=100\)-\(150\,\)MeV, which are the lowest measured data points, since for lower \(p_{t}\) the published data are extrapolated. The calculated spectra show a slope which is too hard compared to the experimental results. The medium modification methods even worsen the picture by reducing the yield of pions at low momenta. While the integrated pion yields (even with the uncertainty of the extrapolation to low transverse momenta) seem to be correctly described with the default setup, the medium modifications lead to some underprediction of the experimental yields.
### Pions from HADES
For the HADES experiment, Au+Au heavy-ion collisions were performed at an incident energy of \(1.23\,A\)GeV, and pion spectra were published by Adamczewski-Musch _et al._[1]. Right off the bat, the pion multiplicities are significantly lower than those measured by FOPI, which was also discussed by both collaborations [1; 28]. The following comparisons of calculations to experimental spectra are for the innermost \(10\,\%\) centrality class of events. As shown in fig. 11, one observes that both methods of in-medium modification reduce the pion yield, and the rapidity curves agree perfectly with the data.
In the case of the medium modification via the effective
Figure 10: Pion transverse momentum spectra compared to data from FOPI at \(1.5\,A\)GeV [24] for Liu EOS with in-medium modifications. Data points for momenta less than \(100\,\)MeV were extrapolated, not measured and are shown as hollow dots.
Figure 9: Same as fig. 8, but for EOS NL2 by Lang.
mass it is important to observe that only the interplay with the modification of the \(\Delta\) potential yields a similar reduction for both pion species. Theoretical calculations without any medium modifications yield a ratio \(\pi^{-}/\pi^{+}\) very similar to the experimental value, while the absolute numbers are off. Thankfully this ratio is robust, and the medium modifications do not break it. This is illustrated in fig. 12, where rapidity spectra with medium modifications, but with and without modification of the \(\Delta\) potential, are shown. It is noteworthy that the modification via the effective mass or the \(\Delta\) potential alone lowers the rapidity spectra only slightly, whereas their combined effect results in a significant reduction. Especially the \(m^{*}/m\) prescription has an almost negligible effect on its own. For comparison, the same curves with the NL2 by Lang EOS are shown in fig. 13. One can see that both the effective mass and the \(\Delta\) potential lower the pion yields more strongly with Liu than with NL2, even though the rapidity curves with vacuum cross sections are almost identical. This is to be expected, as Liu has a stronger drop of \(m^{*}/m\). Consequently Liu lies perfectly on the data with the modifications, while NL2 by Lang still overpredicts the yields significantly. Similarly, the prescription with the Liu EOS and exponential suppression with \(\alpha=2.0\) works for both species, and even slightly better with the NL2 by Lang EOS. The \(\pi^{-}/\pi^{+}\) ratio is within the data. (Please recall that the Coulomb potential is in effect for all particle species in our calculations.) This has to be considered an improvement over the results of Godbey _et al._[10], who needed the factor \(\alpha\) to be isospin-dependent in order to overcome the burden of starting with a wrong ratio already without modification.
Unfortunately, the good agreement of the Liu EOS with the rapidity spectra does not carry over to the \(p_{t}\) spectra, as shown in fig. 14: the in-medium modified spectra are too hard to reproduce the measured curves. This is in agreement with the above comparison to the pion spectra measured by FOPI. Another problem is that the \(\pi^{-}\) at low \(p_{t}\) are not well described. But overall this is a significant improvement over earlier transport calculations.
Figure 11: Rapidity spectra of pions for Liu EOS with in-medium modifications. Experimental data taken from [1].
Figure 12: Rapidity spectra of pions with medium modifications according effective mass for both pion species as in fig. 11. Calculations with and without modification of the \(\Delta\) potential are shown. Top lines are for \(\pi^{-}\), bottom lines for \(\pi^{+}\).
Figure 13: Same as fig. 12, but for NL2 by Lang EOS.
### Dileptons from HADES
Another observable measured by HADES are the dielectron spectra. For the situation at hand, gold on gold at \(1.23\,A\)GeV, the invariant mass spectra for the innermost \(40\,\%\) of events were published by Adamczewski-Musch _et al._[29]. The effect of the in-medium modifications on the full spectrum can be seen in fig. 15. Slight relative differences in the various yields may be visible, while the sum of all contributions is nearly unaffected. The behaviour of the \(\pi N\)-bremsstrahlung remains somewhat mysterious: the in-medium modifications increase the yield of this channel by more than a factor of 3. More research is needed to find the source of this enhancement. Details about the dilepton calculations may be found in Ref. [9].
The region of the Dalitz peak is emphasized in fig. 16. The \(\pi^{0}\) Dalitz decay is troublesome: if the total pion numbers agree well with experiment, too few dielectrons are computed. Bringing the \(\pi^{0}\) Dalitz decay closer to the experiment, on the other hand, gives too many pions in total, which is consistent with earlier works [9]. For now it seems impossible to get the correct number of dielectrons from the \(\pi^{0}\) Dalitz decay and the correct number of charged pions at the same time.
## V Conclusions
As shown in [1], transport models drastically overpredict experimentally measured pion yields. The goal of this letter was to address this pion puzzle by adjusting the underlying physics of the collisions to in-medium conditions.
There are three methods considered throughout this work that reduce pion production. One option is applying in-medium modifications to the cross section via the Dirac mass, or more precisely via the ratio of the Dirac mass to the bare mass \(m^{*}/m\). This prescription is not uncontroversial, but it has some backing in the literature. Another method of reducing pion numbers is the modification of the coupling of the \(\Delta\) to the scalar and vector fields and thus a change of the potential for these baryons. Whether these couplings are stronger or weaker than the nucleon's, and what their precise values are, is still an open question under active research. Further, it is unclear whether this prescription is implemented in a physical manner in GiBUU, but for now it is assumed that this is the case. Another way of lowering the abundance of pions is suppressing the \(NN\leftrightarrow N\Delta\) and \(NN\leftrightarrow NN\pi\) reactions with an ad-hoc density-dependent factor. Similar work was done by Godbey _et al._[10], but they required
Figure 14: Transverse momentum spectra of pions for Liu EOS with in-medium modifications. Experimental data taken from [1].
Figure 15: Invariant mass spectra of dielectrons for Liu EOS with and without in-medium modifications. Data taken from [29]. Bremsstrahlung-enhancement as in [9].
this factor to also be isospin dependent.
In order to give robust results, it is demanded that GiBUU can accurately describe proton spectra. Data on cumulated protons were published by the FOPI collaboration [23], which was needed, as GiBUU does not produce clusters, but only protons. From the proton rapidity spectra it follows that a soft EoS has to be considered the way to go. This becomes blatantly apparent at lower energies (\(400\,A\)MeV), where the mean field can be studied most clearly. Another interesting observable is the sideflow, which proves to be highly sensitive to the EoS. There, a stiff EoS immediately has to be ruled out. In agreement with astronomical constraints [15], the EoS by Liu seems to be the most appropriate to describe nuclear matter.
The pion rapidity spectra obtained by HADES are well described by the EoS by Liu with some in-medium modifications. Using the exponential suppression, good agreement is possible without introducing different factors for positively and negatively charged pions. In this way this study improves on earlier works on this topic. Similar agreement with the data can be achieved by modifying the cross sections for \(NN\leftrightarrow N\Delta\) and \(NN\leftrightarrow NN\pi\) and by simultaneously changing the coupling to the scalar and vector fields, \(r_{S,V}\). Especially the coupling \(r_{S,V}\) turns out to be important for the ratio \(\pi^{-}/\pi^{+}\). This is exciting, as this formalism would offer a better theoretical description of the underlying physical situation. However, both the exponential suppression and the effective mass mechanism yield \(p_{t}\) spectra which are too stiff. Too many high-momentum pions are produced, and for now it is unclear how to solve this problem. The comparison to the pion spectra from FOPI is ambivalent; nothing can be excluded. Furthermore, the FOPI yields are significantly higher than those of HADES, which is also discussed by the HADES collaboration [1].
One remaining problem lies in the \(\pi^{0}\) Dalitz decay. There is the paradoxical situation that the Dalitz decay is under-predicted if the multiplicities of the charged pions match the experiment. However, if the pion numbers are over-predicted, the \(\pi^{0}\) Dalitz decay lines up with the data. For now there is also no solution to this conundrum.
###### Acknowledgements.
We are grateful for close collaboration with Alexei Larionov in the early stages of this project. We also gratefully acknowledge helpful discussions with Ulrich Mosel and Jan-Hendrik Otto. This work was supported by BMBF grant no. 05P21RGFCA.
|
2309.07709 | Safe Aerial Manipulator Maneuvering and Force Exertion via Control Barrier Functions | This article introduces a safe control strategy for the application of forces to an external object using a dexterous robotic arm mounted on an Unmanned Aerial Vehicle (UAV). A hybrid force-motion controller has been developed for this purpose. This controller employs a Control Barrier Function (CBF) constraint within an optimization framework based on Quadratic Programming (QP). The objective is to enforce a predefined relationship between the end-effector's approach motion and its alignment with the surface, thereby ensuring safe operational dynamics. No compliance model for the environment is necessary to implement the controller, provided end-effector force feedback exists. Furthermore, the paper provides formal results, like guarantees of feasibility for the optimization problem, continuity of the controller input as a function of the configuration, and Lyapunov stability. In addition, it presents experimental results in various situations to demonstrate its practical applicability on an aerial manipulator platform. | Dimitris Chaikalis, Vinicius Goncalves, Nikolaos Evangeliou, Anthony Tzes, Farshad Khorrami | 2023-09-14T13:44:15Z | http://arxiv.org/abs/2309.07709v3 |
# Aerial Manipulator Force Control Using Control Barrier Functions
###### Abstract
This article studies the problem of applying normal forces on a surface, using an underactuated aerial vehicle equipped with a dexterous robotic arm. A force-motion high-level controller is designed based on a Lyapunov function encompassing alignment and exerted force errors. This controller is coupled with a Control Barrier Function constraint under an optimization scheme using Quadratic Programming. This aims to enforce a prescribed relationship between the approaching motion for the end-effector and its alignment with the surface, thus ensuring safe operation. An adaptive low-level controller is devised for the aerial vehicle, capable of tracking velocity commands generated by the high-level controller. Simulations are presented to demonstrate the force exertion stability and safety of the controller in cases of large disturbances.
## I Introduction
Aerial platforms capable of autonomous task execution, safe operation and interaction with their environment, occasionally referred to as aerial workers [1], promise to increase efficiency and safety in tasks such as inspection [2] and transportation [3]. Typical multirotor vehicles equipped with planar motors are extensively used due to their availability in various sizes and the development of high-fidelity sensors, enabling operation in diverse indoor or outdoor settings while being robust to the surrounding environment.
Underactuation due to the planar placement of rotors leads to a coupling of the vehicle's altitude to its attitude. A common approach to reinstating the degrees of freedom lost due to underactuation, while also providing the vehicle with a means of interacting with its environment, is the attachment of robotic arms to the aerial vehicle. Such systems are sometimes referred to as aerial manipulators [4].
Aerial manipulators are capable of interacting with other entities via exchanging forces. Force control techniques can be used to solve a variety of problems, such as multi-robot cooperation by regulating either physical [4] or virtual forces [5] being exchanged. Likewise, the force control problem of regulating force exchanged with the environment can be used to solve problems such as automated push-pull and robotic grasping.
The problem of force exertion using fixed-base robotic manipulators has been thoroughly investigated [6], and can be extended to aerial manipulators using floating-base platforms [7].
Impedance control techniques control the interaction of aerial manipulators with the environment [8] in standalone or coordinated cases [9, 10]. This area has been addressed in space robotics while interacting with floating objects [11].
Along these lines, this article studies the problem of exerting prescribed normal forces on planar surfaces, with a multirotor aerial vehicle coupled with a dexterous robotic arm. The problem of sustained force exertion has applicability in tasks such as contact inspection, autonomous pushing, or clearing an area of obstacles, and in space robotics [12, 13].
In [14], a system is presented for force exertion applicable only to horizontal forces. The system consists of an aerial vehicle and a single-DoF tool, with an additional slider at the end-effector, aimed at allowing frictionless sliding along one unconstrained axis of the surface.
An application to the peg-in-hole problem [15] is handled with a dual-arm manipulator for increased manipulability and simple controllers. In [16] the authors use a similar platform, with an eye-in-hand camera for achieving contact on a surface enriched with fixed visual features, enabling the control to take place in the image space. In [17] a fully-actuated vehicle is used, allowing for increased mobility during interaction, while the distance between the end-effector and the environment is used to aid the controller in achieving compliance. The model predictive control architecture is employed in [18, 19] for achieving force exertion on the environment, demanding accurate knowledge of the underlying dynamics.
In this work, it is desired for the force exertion to be performed at a selected location of a surface and exclusively along its normal axis. In the interest of ensuring safety during the interaction task, an optimization-based controller is developed, incorporating control barrier functions (CBF) in its design [20]. The driving controller utilises the kinematics of the vehicle and the robotic arm and outputs system-level velocity commands, allowing the developed scheme to be applicable to any type of multirotor platform and robot manipulator. The CBF is coupled with the driving controller via an optimization problem (using Quadratic Programming (QP)), aiming to guarantee that a desired relation between distance to the surface and end-effector alignment is maintained at all times. This ensures that contact with the environment will occur only under strict performance requirements, albeit in a continuous control manner.
This paper is structured as follows. Section II presents the model of the aerial manipulator and the environment, while also describing the task definition. Section III outlines the overall proposed optimization-based control design with the low-level controller for the aerial vehicle's stability in Section IV. Simulation studies are presented in Section V
validating the approach, while concluding remarks appear in Section VI.
## II Problem Statement & System Model
The studied Unmanned Aerial Manipulator (UAM) system consists of an underactuated multirotor aerial vehicle coupled with a robotic arm with revolute joints, shown in Figure 1.
There exists a stationary planar surface on which it is desired to exert normal forces. The relevant frames for the studied problem are: \(\mathcal{U}\), the frame fixed at the center of the aerial vehicle; \(\mathcal{E}\), the frame of the force sensor attached to the robotic arm end-effector; \(\mathcal{W}\), the world-fixed inertial frame; and \(\mathcal{P}\), the inertial frame attached to the wall. It is assumed that the \(\widetilde{z}\) vector of frame \(\mathcal{P}\) is normal to the wall and directed away from it, with the frame's origin on the surface of the wall. Furthermore, it is assumed that the end-effector tool that will perform the force exertion is aligned with the \(\widetilde{z}\) vector of the frame \(\mathcal{E}\) (see Figure 2).
The position \(p_{\mathcal{U}}^{\mathcal{W}}\in\mathbb{R}^{3}\) of the center of the aerial vehicle is measured in \(\mathcal{W}\), while the attitude of the vehicle frame \(\mathcal{U}\) in \(\mathcal{W}\) is dictated by the roll \(\phi\), pitch \(\theta\) and yaw \(\psi\) angles, or by the rotation matrix \(R_{\mathcal{U}}^{\mathcal{W}}\triangleq R_{z}(\psi)R_{y}(\theta)R_{x}(\phi)\). The configuration of the robotic arm is represented by the vector of joint angles \(q_{m}\in\mathbb{R}^{n}\), for a \(n\)-joint robot. All vectors in this paper are column vectors. Assuming known fixed transformation \(\left(p_{\mathcal{P}}^{\mathcal{W}},R_{\mathcal{P}}^{\mathcal{W}}\right)\) between \(\mathcal{W}\) and \(\mathcal{P}\), the complete pose of the end-effector can be computed relative to the wall frame \(\mathcal{P}\).
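Since all transforms involved are standard, this composition can be sketched in a few lines; the poses below are placeholders, and the helper name is ours rather than from any specific library.

```python
import numpy as np

def pose_in_wall_frame(p_E_W, R_E_W, p_P_W, R_P_W):
    """Express the end-effector pose (p_E_W, R_E_W), given in the world
    frame W, relative to the wall frame P: T_E^P = (T_P^W)^{-1} T_E^W."""
    R_E_P = R_P_W.T @ R_E_W
    p_E_P = R_P_W.T @ (p_E_W - p_P_W)
    return p_E_P, R_E_P

# Placeholder poses (assumed for illustration only); with R_P^W = I the
# wall normal coincides with the world z-axis.
p_P_W = np.array([2.0, 0.0, 1.0]); R_P_W = np.eye(3)
p_E_W = np.array([2.0, 0.0, 1.05]); R_E_W = np.eye(3)

p_E_P, R_E_P = pose_in_wall_frame(p_E_W, R_E_W, p_P_W, R_P_W)
Z = p_E_P[2]  # coordinate along the wall normal (z-axis of P)
print(f"p_E^P = {p_E_P}, Z = {Z:+.2f} m")
```

The third component of \(p_{\mathcal{E}}^{\mathcal{P}}\) is precisely the insertion coordinate \(Z\) entering the reaction force model below.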
The goal is for the end-effector position and orientation to appropriately align with \(\mathcal{P}\) and subsequently exert forces along the \(\widetilde{z}\) axis of \(\mathcal{P}\). This force should be exerted at the center of \(\mathcal{P}\). Let the position of the end-effector along the \(\widetilde{z}\) axis of the wall frame be \(Z\) and let \(F_{d}<0\) be the desired force. The following reaction force model, as a function of the insertion into the wall \(Z\), is assumed\({}^{1}\): a function \(F:\mathbb{R}\rightarrow\mathbb{R}\) that is continuous, non-decreasing and such that there exists a unique \(Z_{d}<0\) such that \(F(Z_{d})=F_{d}\). A typical example satisfying the above assumptions is the spring-action force model:
Footnote 1: \(Z<0\) implies an insertion of the tool into the wall, whereas \(Z>0\) means that no contact between the wall and the tool was established.
\[F(Z)=\left\{\begin{array}{ll}\kappa Z&\text{if }Z\leq 0\\ 0&\text{if }Z>0\end{array}\right.\quad,\]
in which \(Z_{d}=F_{d}/\kappa\). The joint motor controllers accept joint velocity references, while the aerial vehicle flight controller handles the low-level attitude control, allowing attitude and thrust setpoints to be transmitted to the autopilot. Autopilots typically accept as input the normalized thrust \(\frac{T_{f}}{T_{m}}\), where \(T_{m}\) is the maximum value of \(T_{f}\). Since the thrust relies on the employed brushless motor, propeller and battery characteristics, it can be introduced in the vehicle translational dynamics as an additional unknown constant.
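Returning to the reaction-force model, the following minimal sketch makes the assumed spring-action model concrete. The stiffness value \(\kappa=500\) N/m is an illustrative assumption (the paper does not report one); \(F_{d}=-3\) N matches the simulation study of Section V.

```python
import numpy as np

def reaction_force(Z, kappa=500.0):
    """Spring-action reaction force: pushes back (negative sign) once the
    tool is inserted into the wall (Z <= 0), and is zero when there is no
    contact (Z > 0)."""
    return kappa * Z if Z <= 0 else 0.0

F_d = -3.0           # desired normal force [N], as in Section V
Z_d = F_d / 500.0    # unique insertion depth with F(Z_d) = F_d
assert np.isclose(reaction_force(Z_d), F_d)
```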
The aerial manipulator's dynamic model is:
\[\frac{d}{dt}v_{\mathcal{U}}^{\mathcal{W}}=\frac{1}{c_{m}}\left(R_{\mathcal{U}} ^{\mathcal{W}}\widetilde{e}_{z}T_{f}+F_{e}\right)-g\,\ \frac{d}{dt}q_{m}=u_{m}, \tag{1}\]
where \(v_{\mathcal{U}}^{\mathcal{W}}\triangleq\frac{d}{dt}\left(p_{\mathcal{U}}^{ \mathcal{W}}\right)\), and \(F_{e}\) is the external force vector attributed to forces exchanged with the environment and disturbances from the robotic arm. It is assumed to be upper bounded in norm by a value \(\Delta\) and also to be slow-varying \(\dot{F}_{e}\simeq 0\) (as in [21]). Furthermore, \(g\triangleq 9.81\widetilde{e}_{z}\) is the gravity acceleration vector, \(q_{m}\) the manipulator joint position vector and \(u_{m}\) is the robotic arm joint velocity vector, while \(\widetilde{e}_{x},\widetilde{e}_{y},\widetilde{e}_{z}\) are the columns of the \(3\times 3\) identity matrix respectively.
In (1), the unknown constant \(c_{m}=\frac{m}{T_{m}}\) is related to the overall system mass \(m\), while the rotation matrix \(R_{\mathcal{U}}^{\mathcal{W}}\) can be expressed in its corresponding Euler angles. Let the control input vector be
\[\tau=[T_{f}\ \ \phi_{d}\ \ \theta_{d}\ \ \omega_{z}\ \ u_{m}^{\top}]^{\top}, \tag{2}\]
where \(\omega_{z}\) is the angular velocity about the vehicle's \(\widetilde{z}\)-axis. The controller should be designed so as to arrive at the final control values \(\tau\), while exerting a force at a selected wall-location. The controller framework is split into: a) the _high level controller_ that generates the (i) UAM's linear velocity, (ii) yaw rate and (iii) the manipulator joint velocity. The latter two are sent directly as low-level commands, but the linear velocity is sent to: b) a _low level controller_ that generates the appropriate remaining low level control signals.
## III High level controller
### _Basic definitions_
In order to ensure safety during task execution, it is desired for the aerial manipulator to achieve proper alignment prior to attempting force interaction, thus avoiding undesirable collisions of parts of the system.
Fig. 1: Example Unmanned Aerial Manipulator (UAM) system.
Fig. 2: Coordinate frames of a UAM system
For this reason, a controller is sought, capable of guaranteeing end-effector alignment and force convergence. This controller is then combined with an appropriate control barrier function (CBF) under an optimization scheme, in order to ensure that a desirable relation between end-effector distance from the wall and end-effector alignment is maintained at all times. The devised controller outputs linear velocity and yaw angular rate commands for the aerial vehicle and joint angle velocity commands for the robotic arm, which are then transmitted to the lower-level controllers.
For the sake of simplicity, the variables in this section are expressed in the wall frame. Thus, let \(x,y,z\) be the vehicle position coordinates in \(\mathcal{P}\) (center of \(\mathcal{U}\) measured in \(\mathcal{P}\)). As far as the high-level controller is concerned, the following _configuration vector_ is defined
\[q\triangleq[x\;\;y\;\;z\;\;\;\psi\;\;\;q_{m}^{\top}]^{\top}. \tag{3}\]
The output command of the controller will be \(\dot{q}\), and the high-level controller assumes the dynamics \(\dot{q}=u\). It is also assumed implicitly that the UAM's roll (\(\phi\)) and pitch (\(\theta\)) rates are zero, as far as the high-level controller is concerned.
Let \(X(q),Y(q),Z(q)\) be the coordinates of the position of the end-effector frame and \(\vec{x}(q),\vec{y}(q),\vec{z}(q)\) the orthonormal vectors of the orientation of the end-effector frame. These elements form the end-effector frame, measured in wall frame, and can be obtained using forward kinematics. Using forward differential kinematics, the linear and angular velocity geometric Jacobians \(J_{v}(q),J_{\omega}(q)\) are also defined for the end-effector frame measured in the wall frame.
According to Section II, the goal is to control the UAM to have \(X(q)=Y(q)=0\), \(\vec{z}(q)=-\vec{e}_{z}\) and to exert a force \(F(Z(q))=F_{d}\). The task can be separated into: a) alignment and b) force exertion, which are to be implemented using scalar potential functions.
### _The alignment potential function_
Let the positive-definite alignment function be
\[A(q)\triangleq\frac{k_{x}}{h}|X(q)|^{h}+\frac{k_{y}}{h}|Y(q)|^{h}+\frac{k_{o}}{ g}|1+\vec{e}_{z}^{\top}\vec{z}(q)|^{g},\]
for constants \(h>1,g>1,k_{x},k_{y},k_{o}>0\). Note that \(A(q)=0\) if and only if \(X(q)=Y(q)=0\) and \(\vec{z}(q)=-\vec{e}_{z}\). Define the symbol \([x]^{s}=\text{sign}(x)|x|^{s}\). The gradient of \(A\) with respect to \(q\) will be needed; it is given by
\[\nabla A(q) = k_{x}[X(q)]^{h-1}J_{v}(q)^{\top}\vec{e}_{x}+k_{y}[Y(q)]^{h-1}J_{v }(q)^{\top}\vec{e}_{y}+\] \[k_{o}[1+\vec{e}_{z}^{\top}\vec{z}(q)]^{g-1}J_{\omega}(q)^{\top} \left(\vec{e}_{z}\times\vec{z}(q)\right).\]
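As a numerical illustration, a minimal sketch of \(A(q)\) and \(\nabla A(q)\) follows. The end-effector quantities \(X\), \(Y\), \(\vec{z}(q)\) and the Jacobians \(J_{v}\), \(J_{\omega}\) are assumed to be supplied by a forward-kinematics routine (not shown), and the gain values mirror those reported in Section V.

```python
import numpy as np

def signed_pow(x, s):
    """The [x]^s = sign(x)|x|^s operator used in the gradient expression."""
    return np.sign(x) * np.abs(x) ** s

def alignment_and_gradient(X, Y, z_axis, Jv, Jw,
                           kx=2.0, ky=2.0, ko=1.0, h=1.5, g=1.5):
    """Evaluate A(q) and its gradient from end-effector quantities in the
    wall frame: positions X, Y, tool axis z_axis (shape (3,)), and the
    linear and angular geometric Jacobians Jv, Jw (shape (3, dof))."""
    ex, ey, ez = np.eye(3)
    A = (kx / h) * abs(X) ** h + (ky / h) * abs(Y) ** h \
        + (ko / g) * abs(1.0 + ez @ z_axis) ** g
    grad = (kx * signed_pow(X, h - 1) * (Jv.T @ ex)
            + ky * signed_pow(Y, h - 1) * (Jv.T @ ey)
            + ko * signed_pow(1.0 + ez @ z_axis, g - 1)
              * (Jw.T @ np.cross(ez, z_axis)))
    return A, grad
```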
### _Force exertion potential function_
Consider a continuous and non-decreasing function \(\gamma:\mathbb{R}\rightarrow\mathbb{R}\) with \(\gamma(s)=0\) if and only if \(s=0\). Let:
\[U(q)\triangleq\int_{Z_{d}}^{Z(q)}\gamma\Big{(}F(\xi)-F_{d}\Big{)}d\xi.\]
It can be shown that \(U(q)\geq 0\), with equality if and only if \(q\) is such that \(Z(q)=Z_{d}\). Furthermore, \(U(q)\) is a differentiable function of \(q\), with \(\nabla U(q)=\gamma\Big{(}F(Z(q))-F_{d}\Big{)}J_{v}(q)^{\top}\vec{e}_{z}\).
### _The task Lyapunov function_
Let the _high-level Lyapunov function_ be defined as
\[V_{H}(q)\triangleq A(q)+U(q).\]
Clearly, \(V_{H}(q)\) is differentiable and \(V_{H}(q)=0\) if and only if \(X(q)=Y(q)=0\), \(\vec{z}(q)=-\vec{e}_{z}\) and \(F=F_{d}\) (i.e., \(Z(q)=Z_{d}\)).
### _The distance-alignment error diagram_
Given the Lyapunov function, one could consider a controller \(u=-K_{q}\nabla V_{H}(q)\) for a positive scalar \(K_{q}\). However, this can be unsafe, because this controller does not consider that the UAM's manipulator can collide with the wall. To solve this issue, a _distance-alignment error_ relationship will be enforced. Essentially, the distance between the end-effector and the wall \(D(q)\) and the alignment error function \(A(q)\) should follow a relationship in which there is a minimal distance \(D\) for each alignment error \(A\), so that the approach to the wall occurs safely. Geometrically, this can be visualized by plotting \(A\) and \(D\) in a diagram, as shown in Fig. 3, and enforcing the points along the trajectory to be above a certain curve that contains the point \(A=0,D=0\).
For this paper, the end-effector wall-distance function is considered to be simply the distance from the end-effector point to the wall, \(D(q)\triangleq Z(q)\). More complex distance functions, taking into consideration the geometry of the robot, can be used if necessary. Regardless, consider a function \(\Omega\) that is differentiable, increasing and such that \(\Omega(0)=0\). This function will shape the distance-alignment relationship. To enforce this relationship, the _barrier distance-alignment relation function_\(B(q)\triangleq D(q)-\Omega(A(q))\) is defined, so the desired relationship can be written as \(B(q)\geq 0\). Since \(B\) is differentiable in \(q\), this inequality is suitable to be enforced through CBFs with a differential inequality of the form \(\nabla B(q)^{\top}\dot{q}\geq-K_{B}B(q)\) for a positive \(K_{B}\).
### _Joint and configuration velocities limits_
Limits for \(\dot{q}\), as well as joint limits for the manipulator, will also be enforced. Consider joint limits for the manipulator \(\underline{q}_{m}\leq q_{m}\leq\overline{q}_{m}\) and configuration velocity limits \(\underline{u}\leq\dot{q}\leq\overline{u}\). The joint limits can be enforced through CBFs, and the
Fig. 3: Alignment error (A) and distance (D)-diagram versus the shaping function \(\Omega(s)=\sqrt{s}\).
corresponding differential inequalities can be merged into the configuration velocity limits. Split \(\underline{u}\) and \(\overline{u}\) into their first four entries \(\underline{u}_{D}\), \(\overline{u}_{D}\) and remaining entries \(\underline{u}_{m}\), \(\overline{u}_{m}\). Define
\[\underline{b}(q)\triangleq[\underline{u}_{D}^{\top}\ \ \max(\underline{u}_{m},K_{L}(\underline{q}_{m}-q_{m}))^{\top}]^{\top}\] \[\overline{b}(q)\triangleq[\overline{u}_{D}^{\top}\ \ \min(\overline{u}_{m},K_{L}(\overline{q}_{m}-q_{m}))^{\top}]^{\top} \tag{4}\]
in which \(K_{L}\) is a positive scalar and the maximum and minimum are taken component-wise. Then \(\underline{b}(q)\leq\dot{q}\leq\overline{b}(q)\) implements the desired constraint.
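A minimal sketch of the bound construction in (4) follows; it assumes the joint-limit terms enter with the signs that tighten the velocity bounds only near the corresponding limit, and the limit vectors are placeholders to be filled with the values of Section V.

```python
import numpy as np

def velocity_bounds(q_m, u_lo, u_hi, qm_lo, qm_hi, K_L=0.35):
    """Bounds of (4): the base-vehicle entries (first four) keep their
    fixed limits, while the joint entries are tightened component-wise by
    the joint-limit CBF terms."""
    b_lo = np.concatenate([u_lo[:4],
                           np.maximum(u_lo[4:], K_L * (qm_lo - q_m))])
    b_hi = np.concatenate([u_hi[:4],
                           np.minimum(u_hi[4:], K_L * (qm_hi - q_m))])
    return b_lo, b_hi
```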
### _The optimization problem_
The QP controller for the high-level input \(u_{H}\) can now be stated. Define the steering controller \(u_{d}(q)\triangleq-K_{q}\nabla V_{H}(q)\). The control input \(u_{H}\) is:
\[u_{H}(q) \triangleq \arg\min_{\mu}\ \ ||\mu-u_{d}(q)||^{2}\mbox{ such that} \tag{5}\] \[\nabla B(q)^{\top}\mu\geq-K_{B}B(q)\] \[\underline{b}(q)\leq\mu\leq\overline{b}(q).\]
Note that \(u_{H}\) is indeed a function only of \(q\), since the reaction force is modelled to be configuration-dependent through the function \(F(Z(q))\). However, it should be noted that _it is not necessary_ to know the force-reaction model function \(F\) to compute this controller. Indeed, the force model only appears through the quantity \(F(Z(q))\), which is part of the steering controller. This quantity can be obtained without any information about the force model, provided that a force sensor measures it. The force model is mentioned only for the sake of the mathematical analysis.
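For illustration, the QP (5) can be assembled with an off-the-shelf convex solver. The sketch below uses CVXPY, an arbitrary choice on our part (the paper does not prescribe a solver); the barrier gain \(K_{B}=1\) is likewise an assumed value, and the inputs \(\nabla V_{H}\), \(B\), \(\nabla B\), \(\underline{b}\), \(\overline{b}\) are expected from the computations of the previous subsections.

```python
import numpy as np
import cvxpy as cp

def high_level_qp(grad_VH, B, grad_B, b_lo, b_hi, K_q=0.3, K_B=1.0):
    """Solve (5): track the steering controller u_d = -K_q * grad_VH as
    closely as possible, subject to the CBF constraint on B(q) and the
    configuration-velocity bounds."""
    u_d = -K_q * grad_VH
    mu = cp.Variable(grad_VH.shape[0])
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(mu - u_d)),
        [grad_B @ mu >= -K_B * B, mu >= b_lo, mu <= b_hi])
    problem.solve()
    return mu.value
```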
The following results can then be shown about this controller:
**Proposition 1**: _The set_
\[\mathbb{P}\triangleq\left\{q\ |\ B(q)>0\,\ \underline{q}_{m}<q_{m}<\overline{q}_{m}\right\} \tag{6}\]
_is positive invariant._
The velocity bounds in (4) guarantee the strict joint limits \(\underline{q}_{m},\overline{q}_{m}\) for \(q_{m}\), while the first constraint of (5) guarantees \(\dot{B}(q)\geq-K_{B}B(q)\), so that \(\dot{B}(q)\) becomes non-negative as \(B(q)\to 0\), thus ensuring positive invariance of \(\mathbb{P}\).
**Proposition 2**: _If the initial configuration \(q_{0}\in\mathbb{P}\), then the optimization problem is always feasible. In particular, \(\mu=0\) is always a strictly feasible solution._
Application of \(\mu=0\) in (5) results in
\[-K_{B}B(q)<0,\quad\underline{b}(q)<0<\overline{b}(q)\]
Both of the above inequalities are already guaranteed by the positive invariance of \(\mathbb{P}\); thus \(\mu=0\) is a strictly feasible solution, because the \(\geq\) constraints are satisfied strictly.
**Proposition 3**: _The solution to the optimization problem is unique._
From convex function theory, a strictly convex function minimized over a convex set has a unique optimal solution (if any). In (5), the constraint set consists of linear inequalities and is thus convex. The quadratic optimization problem objective function of (5) can be written in the form
\[\min_{\mu}\ \ \frac{1}{2}\mu^{T}H\mu+f^{T}\mu,\]
with \(H=2I\), where \(I\) is the identity matrix and \(f=-2u_{d}(q)\). Since \(H\) is positive definite, the objective function is strictly convex.
**Proposition 4**: _For \(q\in\mathbb{P}\), \(u_{H}(q)\) is a continuous function of \(q\)._
The constraint functions are continuous in \(q\). The objective function terms \(H\) and \(f\) are also continuous in \(q\). From [22], it can be shown that the above conditions, along with \(\mu=0\) being a strictly feasible solution (from Proposition 2), are sufficient to prove continuity of the optimal solution \(u_{H}(q)\) in \(q\).
**Proposition 5**: _If the initial configuration \(q_{0}\in\mathbb{P}\), then for all \(t\geq 0\) the closed loop system using (5), \(\dot{q}=u_{H}(q)\), is Lyapunov stable to the set \(\mathbb{V}_{H}\triangleq\{q\ |\ V_{H}(q)=0\}\), i.e., \(\dot{V}_{H}=\nabla V_{H}(q(t))^{\top}u_{H}(q(t))\leq 0\) for all \(t\geq 0\)._
Let \(G(\mu,q)\) be the objective function of (5); then, by definition, the solution \(\mu=u_{H}(q)\) is the unique minimizer of \(G\), i.e., \(G(u_{H}(q),q)\leq G(\mu,q)\) for any feasible solution \(\mu\).
Since \(\mu=0\) is a feasible solution (Proposition 2), \(G(u_{H}(q),q)\leq G(0,q)\), or
\[2K_{q}\nabla V_{H}(q)^{T}u_{H}(q)\leq-||u_{H}(q)||^{2}\leq 0,\]
which from the definition of \(\dot{V}_{H}(q)\) results in
\[\dot{V}_{H}(q)\leq 0.\]
## IV Low level controller
The high-level controller provides the low-level variables \(\omega_{z}\) and \(u_{m}\) directly from \(u_{H}(q)\). More precisely, \(\omega_{z}\) comes from the \(\psi\) component of \(u_{H}(q)\) (fourth element), and \(u_{m}\) from the \(q_{m}\) component of \(u_{H}(q)\) (last \(n\) elements). The linear velocity component of \(u_{H}(q)\) (the first three components), however, needs to be converted into low-level variables, which is the role of the _low level controller_. The aerial vehicle controller module needs to guarantee convergence to the desired linear velocities commanded by the CBF force controller, despite effects due to forces exchanged with the environment, as well as uncertainty in assigning vehicle thrust, as explained in Section II.
Clearly, the UAV velocity commands in \(u_{H}\) generated by the high-level controller in the wall frame can be directly transformed to desired velocities in the world frame, henceforth denoted by \(v_{U,d}^{\mathcal{W}}\). A control law to track this linear velocity will be shown, and its Lyapunov stability will be proved. For this purpose, some definitions are necessary. Define the velocity error, \(e_{v}\triangleq v_{U}^{\mathcal{W}}-v_{U,d}^{\mathcal{W}}\). Let the variables \(\widehat{F}_{e},\widehat{c}_{m}\) evolve according to the following dynamics
\[\dot{\widehat{F}}_{e}=\sigma_{p}e_{v}-\sigma_{r}\sigma_{p}\widehat{F}_{e}, \tag{7}\] \[\dot{\widehat{c}}_{m}=\sigma_{m}\ e_{v}^{\top}\left(K_{v}e_{v}-g\right) \tag{8}\]
with arbitrary initial conditions, in which \(\sigma_{m},\sigma_{p}>0\) and \(\sigma_{r}\geq 0\). Furthermore, define
\[F_{f}\triangleq\widehat{c}_{m}\left(g-K_{v}e_{v}\right)-\widehat{F}_{e}, \tag{9}\]
for a positive definite matrix \(K_{v}\in\mathbb{R}^{3\times 3}\). Finally, define \(\widetilde{F}_{R}\triangleq R_{z}(\psi)^{\top}\frac{F_{f}}{||F_{f}||}\).
**Proposition 6**: _If the control inputs \(T_{f}\), \(\phi_{d}\) and \(\theta_{d}\) are:_
\[T_{f}=||F_{f}||,\phi_{d}=\text{asin}\left(\bar{e}_{y}^{\top}\widetilde{F}_{R} \right),\theta_{d}=\text{atan2}\left(\bar{e}_{x}^{\top}\widetilde{F}_{R}, \bar{e}_{z}^{\top}\widetilde{F}_{R}\right) \tag{10}\]
_and assuming slow external disturbances \(\dot{F}_{e}\simeq 0\) (as in [21]), then \(e_{v}\) and \(\widetilde{F}_{e}\) remain uniformly ultimately bounded._
Let \(\widetilde{c}_{m}\triangleq c_{m}-\widehat{c}_{m},\ \widetilde{F}_{e}\triangleq F_{e}-\widehat{F}_{e}\), where \(c_{m}>0\) is constant by definition. Define the low-level Lyapunov function \(V_{L}\triangleq\frac{1}{2}e_{v}{}^{\top}e_{v}+\frac{1}{2\sigma_{m}c_{m}} \widetilde{c}_{m}^{2}+\frac{1}{2c_{m}\sigma_{p}}\widetilde{F}_{e}^{\top} \widetilde{F}_{e}\).
Note that with the choices in (10), the term \(R_{\mathcal{U}}^{\mathcal{W}}\bar{e}_{z}T_{f}\) in (1) becomes \(F_{f}\). Using (1) and (9) the Lyapunov function derivative becomes
\[\dot{V}_{L}= -e_{v}{}^{\top}K_{v}e_{v}+\frac{\widetilde{c}_{m}}{c_{m}}\left(e _{v}{}^{\top}K_{v}e_{v}-e_{v}{}^{\top}g-\frac{\dot{\bar{c}}_{m}}{\sigma_{m}} \right)+\] \[\frac{1}{c_{m}}\left(e_{v}-\frac{\dot{\bar{F}}_{e}}{\sigma_{p}} \right)^{\top}\widetilde{F}_{e}, \tag{11}\]
where \(\dot{c}_{m}=0\), and \(\dot{F}_{e}\simeq 0\) was used. The substitution of the adaptation evolution laws (7), (8) in (11) results in
\[\dot{V}_{L}=-e_{v}{}^{\top}K_{v}e_{v}-\frac{\sigma_{r}}{2c_{m}}\left(||\widehat{F}_{e}||^{2}+||\widetilde{F}_{e}||^{2}-||F_{e}||^{2}\right). \tag{12}\]
If the gain \(\sigma_{r}=0\), then \(\dot{V}_{L}\) is negative semi-definite in the state error variables, proving convergence \(e_{v}\to 0\). However, in the presence of unmodelled excitations [23], such adaptive controllers are prone to parameter drift, which can cause the estimates to grow unbounded despite guarantees for the tracking of error variables.
The \(\sigma\)-modification rule (i.e., \(\sigma_{r}>0\)) is introduced to alleviate this instability in adaptations [24] by essentially regulating \(\widehat{F}_{e}\) towards \(0\) as the error \(e_{v}\) converges to \(0\). Looking at (12), with \(\Delta\) the upper bound of the bounded force \(||F_{e}||\) as stated in Section II, then
\[\dot{V}_{L}\leq-\lambda_{\text{min}}(K_{v})||e_{v}||^{2}-\frac{\sigma_{r}}{2c_ {m}}||\widetilde{F}_{e}||^{2}+\frac{\sigma_{r}}{2c_{m}}\Delta^{2}, \tag{13}\]
in which \(\lambda_{\text{min}}(K_{v})\) is the smallest eigenvalue of \(K_{v}\). Thus, the error signal \(e_{v}\) and estimation error \(\widetilde{F}_{e}\) are uniformly ultimately bounded in the set
\[\mathbb{O}\triangleq\left\{e_{v},\ \widetilde{F}_{e}\ \Big{|}\ ||e_{v}||\leq \sqrt{\frac{\sigma_{r}}{2c_{m}\lambda_{\text{min}}(K_{v})}}\Delta,\ ||\widetilde{F}_{e}||\leq\Delta\right\}. \tag{14}\]
Regarding initial adaptation estimates, it is straightforward to set \(\widehat{F}_{e}(0)=0\), since no information about external disturbances exists at initialization. The selection of \(\widehat{c}_{m}(0)\) is more nuanced.
Since \(c_{m}\) is the inverse of the vehicle's thrust-to-weight ratio (expressed in kg/N), a large value of \(\widehat{c}_{m}\) signifies an initial assumption that the vehicle has a small thrust-to-weight capacity, and thus the controller will start with large thrust commands. In the case where the assumption is wrong, this behavior could result in dangerously large accelerations during take-off.
Conversely, a small value of \(\widehat{c}_{m}\) assumes a large thrust-to-weight ratio, meaning initial thrust commands will be small. If the assumption is wrong, in this case the vehicle will simply slowly increase the thrust level as \(\widehat{c}_{m}\) builds up and take off when the estimate becomes close enough to the real ratio. Thus, the authors suggest selecting a value of \(\widehat{c}_{m}(0)\approx 0.02\) for any size of UAV, as it corresponds to assumed hovering thrust at just \(20\%\) of capacity, which is a significantly better ratio than any commercial type of UAV.
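To summarize the low-level loop, one possible discrete-time realization of the adaptation laws (7)-(8) and the setpoints (9)-(10) is sketched below. Forward-Euler integration and the function names are our assumptions; the gain values are those later used in Section V.

```python
import numpy as np

def Rz(psi):
    """Rotation about the world z-axis by the yaw angle psi."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def low_level_step(v, v_des, psi, F_hat, c_hat, dt,
                   K_v=np.diag([0.4, 0.4, 0.6]),
                   sigma_m=0.005, sigma_p=0.01, sigma_r=1e-3):
    """One forward-Euler step of the adaptation laws (7)-(8), followed by
    the thrust and attitude setpoints of (9)-(10). All vectors are
    expressed in the world frame."""
    g = np.array([0.0, 0.0, 9.81])
    e_v = v - v_des
    # Adaptation dynamics (7)-(8), with sigma-modification on F_hat.
    F_hat = F_hat + dt * (sigma_p * e_v - sigma_r * sigma_p * F_hat)
    c_hat = c_hat + dt * sigma_m * (e_v @ (K_v @ e_v - g))
    # Desired force vector (9), rotated into the yaw-aligned frame.
    F_f = c_hat * (g - K_v @ e_v) - F_hat
    F_R = Rz(psi).T @ (F_f / np.linalg.norm(F_f))
    # Setpoints (10): thrust magnitude, roll and pitch references.
    T_f = np.linalg.norm(F_f)
    phi_d = np.arcsin(F_R[1])
    theta_d = np.arctan2(F_R[0], F_R[2])
    return T_f, phi_d, theta_d, F_hat, c_hat
```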
The overall control architecture can be found in Figure 4.
## V Simulation Studies
The Gazebo physics engine was employed for simulating the system along with the proposed controllers under ROS Noetic. The DJI Matrice 100 quadcopter was simulated, with the ArduCopter autopilot running in a Software-In-The-Loop configuration tasked with handling the low-level attitude control of the vehicle. A \(2\)-DoF robotic arm is attached underneath the aerial vehicle. A screenshot from the simulator can be seen in Figure 1.
The total mass of the system is \(m=2.5\) kg. The two links of the arm have lengths \(l_{1}=0.27\) m and \(l_{2}=0.87\) m respectively, while the first joint is located \(0.13\) m below the center of the vehicle. The CBF gains are selected as \(k_{x,y}=2,k_{o}=1,h=g=1.5,K_{q}=0.3\), and
robot joint limits are \(\pm 105^{\circ}\) for each joint, with \(K_{L}=0.35\). Configuration velocity limits: \(\overline{u}=-\underline{u}=[0.45,0.45,0.35,0.35,0.35,0.35]^{\top}\). The force control function used is \(\gamma(F)=0.5[F-F_{d}]^{0.5}\), where \(F\) is the force measured at the sensor, and the alignment-barrier function is \(\Omega(s)=\frac{4}{\pi}\text{atan}(1.5\sqrt{s})\). The UAV controller gains used were \(K_{v}=\text{diag}(0.4,0.4,0.6),\sigma_{m}=0.005,\sigma_{p}=0.01,\sigma_{r}=10^{-3}\), with initial adaptation estimates \(\widehat{F}_{e}(0)=0_{3\times 1},\ \widehat{c}_{m}(0)=0.02\). The employed units comply with SI (meters for lengths, Newtons for forces, seconds for time and rad for angles).
### _Force Exertion Scenario_
For the first simulation, the performance of the devised controller is validated while exerting force on a planar surface of arbitrary placement. Specifically, the orientation of axis \(\widetilde{z}\) of \(\mathcal{P}\) is \([-0.664,-0.664,-0.342]^{\top}\) when expressed in \(\mathcal{W}\). The goal is to exert \(F_{d}=-3\)N of normal force. As can be seen in Figure 5 (left), the controller manages to achieve the desired force exertion. Figure 5 (right) depicts the evolution of end-effector distance from the surface \((D(q))\)
Fig. 4: Overall UAM-control block diagram.
over the barrier-alignment function \((\Omega(A(q)))\). Clearly the CBF constraint manages to enforce the desired relationship between the two values (plotted in blue), ensuring safe force exertion.
In Figure 6 the velocity commands for the aerial vehicle generated by the optimization-based controller are plotted alongside their measured counterparts, to visualize the performance of the low-level controller of the UAV.
### _Evaluation of Performance under Disturbances_
In the next scenario, an external disturbance affects the yaw of the vehicle while exerting force, causing it to slip along the wall and lose contact and alignment with the environment. This is an important test in order to evaluate the capability of the designed CBF controller in overcoming such disturbances and maintaining safety during task execution.
#### V-B1 CBF Disabled
For the first simulation, the CBF constraint in the optimization function is disabled. Without this safety-oriented action, the force controller attempts to simultaneously exert forces and correct the end-effector alignment whilst in contact with the wall, resulting in undesirable interaction with the environment and eventual loss of stability.
The force measured at the end-effector is plotted in the top of Figure 7, clearly showing failure to maintain the desired force exertion, while the bottom shows the evolution of \(D(q),\Omega(A(q))\), with a rather large alignment error.
#### V-B2 CBF Enabled
In contrast, Figure 8 shows that the proposed CBF controller manages to maintain a distance to the wall while recovering the end-effector's alignment, allowing the force exertion to resume quickly and safely. Figure 9 shows the evolution of distance over alignment for this scenario.
## VI Conclusions
This paper presents a control architecture for force exertion on planar surfaces, with a platform consisting of an underactuated multirotor aerial vehicle and an attached robotic
Fig. 5: Force measurement and CBF barrier constraint.
Fig. 8: Forces, distance, and alignment with the CBF constraint.
Fig. 6: Commanded and measured vehicle velocities during force exertion.
Fig. 7: Force, distance and alignment without the CBF constraint.
Fig. 9: Evolution of distance over alignment barrier
arm. An optimization-based control scheme is designed, combining force measurements and a Lyapunov controller with control barrier functions, ensuring the system is capable of achieving force exertion in a prescribed, safety-oriented manner. An adaptive controller is devised for the velocity control of the aerial vehicle, guaranteeing bounded-error command tracking. The performance of the controller is validated via simulation studies.
|
2309.10275 | Optimizing Crowd-Aware Multi-Agent Path Finding through Local
Communication with Graph Neural Networks | Multi-Agent Path Finding (MAPF) in crowded environments presents a
challenging problem in motion planning, aiming to find collision-free paths for
all agents in the system. MAPF finds a wide range of applications in various
domains, including aerial swarms, autonomous warehouse robotics, and
self-driving vehicles. Current approaches to MAPF generally fall into two main
categories: centralized and decentralized planning. Centralized planning
suffers from the curse of dimensionality when the number of agents or states
increases and thus does not scale well in large and complex environments. On
the other hand, decentralized planning enables agents to engage in real-time
path planning within a partially observable environment, demonstrating implicit
coordination. However, they suffer from slow convergence and performance
degradation in dense environments. In this paper, we introduce CRAMP, a novel
crowd-aware decentralized reinforcement learning approach to address this
problem by enabling efficient local communication among agents via Graph Neural
Networks (GNNs), facilitating situational awareness and decision-making
capabilities in congested environments. We test CRAMP on simulated environments
and demonstrate that our method outperforms the state-of-the-art decentralized
methods for MAPF on various metrics. CRAMP improves the solution quality up to
59% measured in makespan and collision count, and up to 35% improvement in
success rate in comparison to previous methods. | Phu Pham, Aniket Bera | 2023-09-19T03:02:43Z | http://arxiv.org/abs/2309.10275v3 | # Crowd-Aware Multi-Agent Pathfinding With Boosted Curriculum Reinforcement Learning
###### Abstract
Multi-Agent Path Finding (MAPF) in crowded environments presents a challenging problem in motion planning, aiming to find collision-free paths for all agents in the system. MAPF finds a wide range of applications in various domains, including aerial swarms, autonomous warehouse robotics, and self-driving vehicles. The current approaches for MAPF can be broadly categorized into two main categories: centralized and decentralized planning. Centralized planning suffers from the curse of dimensionality and thus does not scale well in large and complex environments. On the other hand, decentralized planning enables agents to engage in real-time path planning within a partially observable environment, demonstrating implicit coordination. However, they suffer from slow convergence and performance degradation in dense environments. In this paper, we introduce CRAMP, a crowd-aware decentralized approach to address this problem by leveraging reinforcement learning guided by a boosted curriculum-based training strategy. We test CRAMP on simulated environments and demonstrate that our method outperforms the state-of-the-art decentralized methods for MAPF on various metrics. CRAMP improves the solution quality up to 58% measured in makespan and collision count, and up to 5% in success rate in comparison to previous methods.
## I Introduction
Multi-Agent Path Finding (MAPF) presents challenges with broad applications in autonomous warehouses, robotics, aerial swarms, and self-driving vehicles [1, 2, 3, 4, 5, 6]. The objective is to plan paths for multiple agents to navigate from start to goal positions in obstacle-laden environments. A critical constraint of such systems is to guarantee that agents can navigate concurrently without collisions. Two main categories of approaches are centralized and decentralized planning. Centralized planners [7, 8, 9, 10] aim to find optimal solutions. They are effective in small and sparse environments but face limitations in real-time performance and scalability in dense and complicated environments [11, 12]. These methods require complete knowledge of the environment and full replanning when a change occurs, leading to exponential computation times with increased agents, obstacles, and world size. Recent studies [13, 10] have sought to discover real-time solutions. However, these solutions remain sub-optimal and still necessitate access to global information about the world.
On the contrary, decentralized methods [14, 15, 16, 17, 18, 1] seek to tackle these challenges by allowing each agent to acquire local policies. In these approaches, agents have the capacity to reactively plan paths within partially observable environments. Such methods prove beneficial in situations where agents lack comprehensive knowledge of the world, as is often the case in the domain of autonomous vehicles. Rather than pursuing an optimal solution for all agents, decentralized planners train local policies that rapidly generate sub-optimal solutions as a tradeoff between speed and solution quality. Given that agents make their decisions based on local information, decentralized approaches often face challenges in achieving effective global coordination among agents. In cluttered, dynamic environments characterized by congestion or rapid changes, agents may tend to prioritize their individual goals, potentially resulting in conflicts and inefficiencies that affect the overall system's performance.
To tackle these challenges, we introduce a novel approach that extends the capabilities of PRIMAL [14] and incorporates the presence of dense crowds into the policy learning process, guided by a boosted curriculum training strategy. Our proposed methodology, referred to as CRAMP, focuses on training intelligent agents to navigate through dense and dynamic environments efficiently. We formulate the MAPF problem as a sequential decision-making task and employ deep reinforcement learning techniques to teach agents to make optimal decisions while considering the presence of other agents and the constantly changing environment.
The key contribution of CRAMP lies in its curriculum-driven training strategy, which progressively exposes agents to increasingly complex scenarios. By starting with simple environments and gradually increasing the difficulty, our approach facilitates smoother learning convergence and improved generalization to real-world MAPF scenarios. Furthermore, we introduce crowd-awareness mechanisms that enable agents to adapt their policies dynamically based on
Fig. 1: _Path comparison between PRIMAL and CRAMP, our novel crowd-aware approach for the multi-agent pathfinding problem in densely populated environments. The solution proposed by CRAMP is much shorter than that of PRIMAL._ |
2309.00066 | SoDaCam: Software-defined Cameras via Single-Photon Imaging | Reinterpretable cameras are defined by their post-processing capabilities
that exceed traditional imaging. We present "SoDaCam" that provides
reinterpretable cameras at the granularity of photons, from photon-cubes
acquired by single-photon devices. Photon-cubes represent the spatio-temporal
detections of photons as a sequence of binary frames, at frame-rates as high as
100 kHz. We show that simple transformations of the photon-cube, or photon-cube
projections, provide the functionality of numerous imaging systems including:
exposure bracketing, flutter shutter cameras, video compressive systems, event
cameras, and even cameras that move during exposure. Our photon-cube
projections offer the flexibility of being software-defined constructs that are
only limited by what is computable, and shot-noise. We exploit this flexibility
to provide new capabilities for the emulated cameras. As an added benefit, our
projections provide camera-dependent compression of photon-cubes, which we
demonstrate using an implementation of our projections on a novel compute
architecture that is designed for single-photon imaging. | Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Bruschini, Edoardo Charbon, Mohit Gupta | 2023-08-31T18:13:01Z | http://arxiv.org/abs/2309.00066v2 | # SoDaCam: Software-defined Cameras via Single-Photon Imaging
###### Abstract
Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging.
## 1 Introduction
Throughout the history of imaging, sensing technologies and the corresponding processing have developed hand-in-hand. In fact, sensing technologies have, to some extent, defined the scope of processing captured data. In the film era, instances of such processing included dodging and burning. The advent of digital cameras provided processing at the granularity of pixels and paved the way for modern computer vision. Light field cameras [34, 78], by sampling the plenoptic function [2], allowed post-capture processing at the granularity of light rays, enabling novel functionalities such as refocusing photos after-capture. The logical limit of post-capture processing, given the fundamental quantization of light, would be at the level of individual photons. What would imaging look like if we could perform computational processing on individual photons?
In this work, we show that photon data captured by a new class of single-photon detectors, called single-photon avalanche diodes (SPADs), makes it possible to emulate a wide range of imaging modalities such as exposure bracketing [12], video compressive systems [38, 55] and event cameras [52, 60]. A user then has the flexibility to choose one (or even multiple) of these functionalities _post-capture_ (Fig. 1 _(top)_). SPAD arrays can operate as extremely high frame-rate photon detectors (\(\sim\)\(100\) kHz), producing a temporal sequence of binary frames called a photon-cube [16]. We show that computing _photon-cube projections_, which are simple linear and shift operations, can reinterpret the photon-cube to achieve novel post-capture imaging functionalities in a software-defined manner (Fig. 1 _(middle)_).
As case studies, we emulate three distinct imagers: high-speed video compressive imaging; event cameras which respond to dynamic scene content; and motion projections which emulate sensor motion, without any real camera movement. Fig. 1 _(bottom)_ shows the outputs of these cameras that are derived from the same photon-cube.
**Computing photon-cube projections.** One way to obtain photon-cube projections is to read the entire photon-cube off the SPAD array and then perform relevant computations off-chip; we adopt this strategy for our experiments in Secs. 6.1 and 6.2. While reasonable for certain applications, reading out photon-cubes requires an exorbitant data bandwidth, which can be up to \(100\) Gbps for a \(1\) MPixel array--well beyond the capacity of existing data peripherals. Such readout considerations will become center stage as large-format SPAD arrays are fabricated [48, 49].
An alternative is to avoid transferring the entire photon-cube
by computing projections near sensor. As a proof-of-concept, we implement photon-cube projections on UltraPhase [1], a recently-developed programmable SPAD imager with independent processing cores that have dedicated RAM and instruction memory. We show, in Sec. 6.3, that computing projections on-chip greatly reduces sensor readout and, as a consequence, power consumption.
**Implications: Toward a photon-level software-defined camera.** The photon-cube projections introduced in this paper are computational constructs that provide a realization of _software-defined cameras_ or _SoDaCam_. Being software-defined, SoDaCam can emulate multiple cameras simultaneously without additional hardware complexity. SoDaCam, by going beyond baked-in hardware choices,
Figure 1: _(top)_**SoDaCam** can emulate a variety of cameras from the photon-cubes acquired by single-photon devices. _(middle)_ Photon-cubes represent the spatio-temporal detection of photons as a sequence of binary frames. Projections of the photon-cube, when computed either on or off-chip, result in reinterpretable and software-defined cameras. We demonstrate the versatility of photon-cube projections on a **real dynamic scene**: a die falls on a table, bounces, spins in the air, and later ricochets off a nearby toy top. _(bottom)_ The cameras emulated by our photon-cube projections can produce a \(12\times\) high-speed video from a single compressive snapshot, event-stream representations of two time intervals (blue and red depict positive and negative spikes respectively), an image where the die appears stationary, as well as a motion-deblurred image.
unlocks hitherto unseen capabilities--such as \(2000\) FPS video from \(25\) Hz readout (Fig. 7); event imaging in very low-light conditions (Fig. 9); and motion stacks, which are a stack of images wherein each image, objects only in certain velocity ranges appear sharp (Fig. 6).
**Limitations.** The SPAD array [19] used in this work has a relatively low spatial resolution (\(512\times 256\)), and a low fill-factor (\(\sim\)10%) owing to the lack of microlenses in the prototype used. Similarly, the near-sensor processor that we use has limited capabilities compared to off-chip processors. However, with rapid progress in the development of single-photon cameras [48, 49] and increasing interest in near-sensor processors, we anticipate that many of these shortcomings will be addressed in the upcoming years.
## 2 Related Work
**Reinterpretable imaging** has previously been explored at the granularity of light rays [3], by modulating the plenoptic function, and at the level of spatio-temporal voxels [20], by using fast per-pixel shutters. SoDaCam represents a logical culmination of reinterpretability at the level of photon detections, facilitating multiple post-capture imaging functionalities.
**Programmable imaging** using a digital micromirror device was first introduced in Nayar et al. [51] to perform _pre-capture_ radiometric manipulations. Modern programmable cameras are typically near-sensor processors [8, 69, 71] that can perform limited operations in analog [46, 55, 69], while more complex operations [9, 45] occur after analog-to-digital conversion (ADC). In contrast, by performing _post-capture_ computations directly on photon detections, we can perform complex operations without incurring the read-noise penalty that is associated with ADC.
**Passive single-photon imaging.** Only recently have SPADs been utilized as passive imaging devices, with applications in high-dynamic range imaging [29, 30, 37, 50], motion-compensation [31, 58], burst photography [11, 42] and object tracking [22]. Compared to compute-intensive burst-photography methods [11], our proposed techniques involve lightweight computations that can be performed near sensor. These computations can also be performed using other single-photon imagers such as Jots [15, 17], which feature higher sensor resolution and photon-efficiency [40], albeit at lower frame-rates and higher read noise.
**Reducing the readout of SPADs.** Several data reduction strategies have been proposed in the context of SPADs that are used to timestamp incident photons, including: coarse histograms [13, 23, 56], compressive histograms [21], and measuring differential time-of-arrivals [70, 77]. When SPADs are operated as photon-detectors, multi-bit counting [48], or summing binary frames, can reduce readout. While compression is not our main objective, we show that photon-cube projections act as camera-specific compression schemes that dramatically reduce sensor readout.
## 3 Background: Single-Photon Imaging Model
A SPAD array captures incident light as a _photon-cube_: a temporal sequence of binary frames that represents the pixel-wise detection of photons across their respective exposure windows. We can model the stochastic arrival of photons as a Poisson process [72], allowing us to treat spatio-temporal values of the photon-cube as independent Bernoulli random variables with
\[\text{Pr}\left\{B_{t}(\mathbf{x})=1\right\}=1-e^{-(\eta\Phi(\mathbf{x},t)+r_{q})w_{\text{exp}}}, \tag{1}\]
where \(B_{t}(\mathbf{x})\) represents the value of the photon-cube at pixel \(\mathbf{x}\) and exposure index \(1\leq t\leq T\), which receives a mean incident flux of intensity \(\Phi(\mathbf{x},t)\) across its exposure of duration \(w_{\text{exp}}\). Additionally, \(\eta\) is the photon detection efficiency of the SPAD, and \(r_{q}\) denotes the sensor's dark count rate--which is the rate of spurious counts unrelated to incident photons. While individual binary frames are extremely noisy, the temporal sum of the photon-cube
\[\mathcal{I}_{\text{sum}}(\mathbf{x}):=\sum_{t=1}^{T}B_{t}(\mathbf{x}), \tag{2}\]
can produce an 'image' of the scene that is sharp in static regions, but blurry in dynamic regions (Fig. 2 _(top)_). Indeed, in static regions, the sum-image can be used to derive a maximum likelihood estimator of the scene intensity [4], given by \(\hat{\Phi}(\mathbf{x})=-\ln(1-T^{-1}\mathcal{I}_{\text{sum}}(\mathbf{x}))/(\eta w_{\text{exp}})-r_{q}/\eta\).
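To make the imaging model concrete, the following minimal sketch simulates a photon-cube according to Eq. (1) and forms the temporal sum (Eq. 2) together with the flux MLE; the sensor crop and flux levels are illustrative stand-ins, while the exposure duration corresponds to the \(96.8\) kHz frame-rate used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 1000, 64, 64                      # bit-planes, sensor crop
w_exp, eta, r_q = 1 / 96800, 0.25, 100.0    # exposure [s], PDE, dark counts [cps]
flux = rng.uniform(1e4, 1e6, size=(H, W))   # per-pixel flux [photons/s]

# Bernoulli photon-cube per Eq. (1).
p = 1.0 - np.exp(-(eta * flux + r_q) * w_exp)
photon_cube = rng.random((T, H, W)) < p

# Temporal sum (Eq. 2) and per-pixel flux MLE for static scenes.
I_sum = photon_cube.sum(axis=0)
frac = np.clip(I_sum / T, 0.0, 1.0 - 1e-9)  # guard against log(0)
phi_hat = -np.log(1.0 - frac) / (eta * w_exp) - r_q / eta
```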
## 4 Projections of the Photon-Cube
The temporal sum described in Eq. (2) is a simple instance of projections of a photon-cube. Our key observation is that it is possible to compute a wide range of photon-cube projections, each of which emulates a unique sensing modality _post-capture_--including modalities that are difficult to achieve with conventional cameras. For example, varying the number of bit-planes that are summed over emulates exposure bracketing [12, 43], which is typically used for HDR imaging. Compared to conventional exposure bracketing, the emulated exposure stack, being software-defined, does not require spatial and temporal registration, which can often be error-prone. Fig. 2 _(top)_ shows an example of an exposure stack computed from a photon-cube.
Going further, we can gradually increase the complexity of the projections. For example, consider a _coded exposure
projection that multiplexes bit-planes with a temporal code
\[\mathcal{I}_{\text{flutter}}(\mathbf{x}):=\sum_{t=1}^{T}C_{t}B_{t}(\mathbf{x}), \tag{3}\]
where \(C_{t}\) is the temporal code. An example of globally-coded exposures is the flutter shutter camera [53], which uses pseudo-random binary codes for motion-deblurring.
More general coded exposures can be obtained via spatially-varying temporal coding patterns \(C_{t}(\mathbf{x})\):
\[\mathcal{I}_{\text{coded}}(\mathbf{x}):=\sum_{t=1}^{T}C_{t}(\mathbf{x})B_{t}( \mathbf{x}). \tag{4}\]
Fig. 2 (_bottom_) shows spatially-varying exposures that use a quad (Bayer-like) spatial pattern and random binary masks. With photon-cubes, we can perform spatially-varying coding without bulky spatial light modulators, similar to focal-plane sensor-processors [46, 69]. Moreover, we can capture multiple coded exposures simultaneously, which is challenging to realize in existing sensors. In Sec. 5.1, we describe coding patterns for video compressive sensing.
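A sketch of the coded-exposure projections of Eqs. (3) and (4) follows; it assumes the `photon_cube` array from the simulation sketch of Sec. 3, and the code sequences are drawn pseudo-randomly for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = photon_cube.shape  # photon_cube from the sketch in Sec. 3

# Flutter-shutter projection (Eq. 3): one pseudo-random global code.
C_global = rng.integers(0, 2, size=T)
I_flutter = np.einsum('t,thw->hw', C_global, photon_cube)

# Spatially-varying coded exposure (Eq. 4): per-pixel temporal codes.
C_pixel = rng.integers(0, 2, size=(T, H, W))
I_coded = np.einsum('thw,thw->hw', C_pixel, photon_cube)
```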
Spatial and temporal gradients form the building blocks of several computer vision algorithms [7, 11, 24, 26, 39]. Given this, another projection of interest is temporal contrast, i.e., a derivative filter preceded by a smoothing filter:
\[\mathcal{I}_{\text{contrast}}(\mathbf{x},t):=D_{t}\circ G*B_{t}(\mathbf{x}), \tag{5}\]
where \(D_{t}\) is the difference operator, \(G\) could be exponential or Gaussian smoothing, \(\circ\) denotes function composition, and \(*\) denotes convolution. Due to their sparse nature, gradients form the basis of bandwidth- and power-efficient event cameras [6, 14, 36, 60], which we emulate in Sec. 5.2.
So far, we have considered projections taken only along the time axis. Next, we consider a more general class of _spatio-temporal projections_ that lead to novel functionalities. For instance, computing a simple projection, such as the temporal sum, along arbitrary spatio-temporal directions emulates sensor motion during exposure time [33], but _without moving the sensor_. We achieve this by shifting bit-planes and computing their sum:
\[\mathcal{I}_{\text{shift}}(\mathbf{x}):=\sum_{t=1}^{T}B_{t}\left(\mathbf{x}+ \mathbf{r}(t)\right), \tag{6}\]
where \(\mathbf{r}\) is a discretized 2D trajectory that determines sensor motion. Outside a software-defined framework, such projections are hard to realize without physical actuators. We describe the capabilities of _motion projections_ in Sec. 5.3.
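The following sketch of Eq. (6) shifts each bit-plane by an integer-rounded trajectory sample and sums the result; `np.roll` stands in for the shift (border wrap-around is ignored), and the linear trajectory slope is an illustrative value.

```python
import numpy as np

def motion_projection(photon_cube, r):
    """Eq. (6): shift each bit-plane by the integer-rounded 2D trajectory
    sample r(t) and sum; np.roll stands in for the shift, so border
    handling is ignored in this sketch."""
    out = np.zeros(photon_cube.shape[1:], dtype=np.int32)
    for t in range(photon_cube.shape[0]):
        dy, dx = r(t)
        out += np.roll(photon_cube[t].astype(np.int32),
                       shift=(-dy, -dx), axis=(0, 1))
    return out

# Linear trajectory: emulates a sensor translating along x at b px/frame.
linear = lambda t, b=0.05: (0, int(round(b * t)))
I_shift = motion_projection(photon_cube, linear)
```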
In summary, the proposed photon-cube projections are simple linear and shift operators that lead to a diverse set of post-capture imaging functionalities. These projections pave the way for future 'swiss-army-knife' imaging systems that achieve _multiple functionalities (e.g., event cameras, high-speed cameras, conventional cameras, HDR cameras) simultaneously with a single sensor_. Finally, these projections can be computed efficiently in an online manner, which makes on-chip implementation viable (Sec. 6.3).
At this point, we note that a key enabling factor of photon-cube projections is the extremely high temporal-sampling rate of SPADs. Indeed, the temporal sampling rate determines key aspects of sensor emulation, such as the discretization of temporal derivatives and motion trajectories. This raises a natural question: can we use conventional high-speed cameras for computing projections?
**Trade-off between frame-rate and SNR.** In principle, photon-cube projections can be computed using regular (CMOS or CCD based) high-speed cameras. Unfortunately, each frame captured by a high-speed camera incurs a read-noise penalty, which increases with the camera's frame-rate [6]. In fact, the read noise levels of high-speed cameras [1] can be \(10\)-\(30\times\) higher than consumer cameras [28]. Coupled with the low per-frame incident flux at high frame-rates, high levels of read noise result in extremely low SNRs. In contrast, SPADs do not incur a per-frame read noise and are limited only by the fundamental photon noise. Hence, for the post-capture software-defined functionalities proposed here, it is imperative to use SPADs.
## 5 Emulating Cameras from Photon-Cubes
Sec. 4 presented the concept of photon-cube projections and its potential for achieving multiple post-capture imaging
Figure 2: **Coded exposures from photon-cubes.**_(top)_ An exposure stack with sum-images computed using \(250\), \(500\), and \(1000\) bit-planes. Short exposures are noisy while long exposures exhibit motion blur. _(bottom)_ Spatially-varying exposure that uses a quad pattern [32] (see inset), and a video compressive frame that uses \(16\) random binary masks to modulate the photon-cube. Zoom-in to see details.
functionalities. As case studies, we now demonstrate three imaging modalities: video compressive sensing, event cameras, and motion-projection cameras. These modalities have been well-studied over several years; in particular, there exist active research communities around video compressive sensing and event cameras today. We also show new variants of these imaging systems that arise from the software-defined nature of photon-cube projections.
### Video Compressive Sensing
Video compressive systems _optically_ multiplex light with random binary masks, such as the patterns in Fig. 3 _(left)_. As discussed in the previous section, such multiplexing can be achieved _computationally_ using photon-cubes.
**Two-bucket cameras.** One drawback of capturing coded measurements is the light loss due to blocking of incident light. To prevent loss of light, coded two-bucket cameras [69] capture an additional measurement that is modulated by the complementary mask sequence (Fig. 3 _(center)_). Such measurements recover higher quality frames, even after accounting for the extra readout [9, 14]. Two-bucket captures can be readily derived from photon-cubes, by implementing Eq. (4) with the additional mask sequence.
**Multi-bucket cameras.** We can extend the idea of two-bucket captures to multi-bucket captures by accumulating bit-planes in one of \(k\) buckets that is randomly chosen at each time instant and pixel location. Since multiplexing is performed computationally, we do not face any loss in photoreceptive area that hampers existing multi-bucket sensors [59, 68]. Multi-bucket captures can reconstruct a large number of frames by better conditioning video recovery and provide extreme high-speed video imaging. Fig. 3 _(right)_ shows the modulating masks for a four-bucket capture.
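A minimal sketch of the multi-bucket accumulation just described is given below (\(k=4\) buckets, matching Fig. 3 _(right)_); the per-pixel, per-time-step bucket choice is drawn uniformly at random, and no photon is discarded.

```python
import numpy as np

def multi_bucket_capture(photon_cube, k=4, seed=0):
    """Accumulate each bit-plane into one of k buckets, with the active
    bucket chosen uniformly at random per pixel and per time step; unlike
    a single blocking mask, no photon is discarded."""
    rng = np.random.default_rng(seed)
    T, H, W = photon_cube.shape
    buckets = np.zeros((k, H, W), dtype=np.int32)
    for t in range(T):
        choice = rng.integers(0, k, size=(H, W))
        for b in range(k):
            buckets[b] += (choice == b) & photon_cube[t]
    return buckets
```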
### Event Cameras
Next, we describe the emulation of event cameras, which capture changes in light intensity and are conceptually similar to the temporal contrast projection introduced in Eq. (5). Physical implementations of event sensors [6, 14, 36, 60] generate a photoreceptor voltage \(V(\mathbf{x},t)\) with a logarithmic response to incident flux \(\Phi(\mathbf{x},t)\), and output an event \((\mathbf{x},t,p)\) when this voltage deviates sufficiently from a reference voltage \(V_{\text{ref}}(\mathbf{x})\):
\[|V(\mathbf{x},t)-V_{\text{ref}}(\mathbf{x})|>\tau, \tag{7}\]
where \(\tau\) is called the contrast-threshold and \(p=\text{sign}(V(\mathbf{x},t)-V_{\text{ref}}(\mathbf{x}))\) encodes the polarity of the event. Once an event is generated, \(V_{\text{ref}}(\mathbf{x})\) is updated to \(V(\mathbf{x},t)\). Eq. (7), for a smoothly-varying flux intensity, thresholds a function of the temporal gradient, i.e., \(\partial_{t}\log(\Phi(\mathbf{x},t))\).
**From bit-planes to event streams.** To produce events from SPAD frames, we compute an exponential moving average (EMA) of the bit-planes, as \(\mu_{t}(\mathbf{x})=(1-\beta)B_{t}(\mathbf{x})+\beta\mu_{t-1}(\mathbf{x})\)--where \(\mu_{t}(\mathbf{x})\) is the EMA, \(\beta\) is the smoothing
Figure 4: **Event stream from photon-cubes.**_(top left)_ By exploiting the non-linear response curve of SPADs to encode brightness, we can avoid the underflow issues of a log-response. We visualize events generated from photon-cubes using a \(3\)D scatter plot of polarities (_top right_, \(14000\) bit-planes), and frame accumulation of events (_bottom_, \(1200\) bit-planes). Blue and red denote positive and negative spikes respectively. The event images also show the effect of varying the contrast threshold \(\tau\) and exponential decay \(\beta\)—larger values yield a less noisy but sparser event stream.
Figure 3: **Modulating masks for video compressive sensing.**_(left)_ A single VCS measurement temporally compresses a sequence of frames using binary random masks. _(center)_ Two-bucket cameras capture an additional measurement by using the complementary mask sequence. _(right)_ We propose using multi-bucket captures by randomly choosing an active bucket for each frame. Both two-bucket and multi-bucket captures have \(100\)% light efficiency. All masks are visualized here for \(16\times 16\) pixels.
factor, and \(B_{t}\) is a bit-plane. We generate an event when \(\mu_{t}(\mathbf{x})\) deviates from \(\mu_{\text{ref}}(\mathbf{x})\) by at least \(\tau\):
\[|h(\mu_{t}(\mathbf{x}))-h(\mu_{\text{ref}}(\mathbf{x}))|>\tau, \tag{8}\]
where \(h\) is a scalar function applied to the EMA. We can see that Eq. (8) thresholds temporal contrast, by observing the role played by the EMA and the difference operator.
Setting \(h\) to be the logarithm of the flux MLE mimics Eq. (7). However, since the log-scale is used to prevent sensor saturation, a simpler alternative is to use the non-saturating response curve of SPAD pixels (\(h(x)=x\)). The response curve takes the form of \(1-\exp\left(-\alpha\Phi(\mathbf{x},t)\right)\), where \(\alpha\) is a flux-independent constant. As a major advantage, this response curve avoids the underflow issues of the log function that can occur in low-light scenarios [62].
The SPAD's frame rate determines the time-stamp resolution of emulated events. In Fig. 4, we show the events generated from a photon-cube acquired at a frame-rate of \(96.8\) kHz--resulting in a time-stamp resolution of \(\sim\)\(10\)\(\mu\)s that is comparable to those of existing event cameras.
**How do SPAD-events differ from the output of a regular event camera?** The main difference is that the expression of temporal contrast, given by \(\partial_{t}h\), is now \(-\partial_{t}\exp(-\alpha\Phi(\mathbf{x},t))\), instead of \(\partial_{t}\log(\Phi(\mathbf{x},t))\). This difference poses no compatibility issues for a large class of event-vision algorithms that utilize a grid of events [13, 57, 66] or brightness changes [18]. We show examples of downstream applications using SPAD-events in Suppl. Sec. 2. Finally, SPAD-events can be easily augmented with spatially- and temporally-aligned intensity information--a synergistic combination that has been exploited by several recent event-vision works [18, 25, 74].
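As a sketch of the event-emulation pipeline above, the following code applies the per-pixel EMA and the threshold test of Eq. (8) with \(h(x)=x\) (i.e., the SPAD response curve encodes brightness directly); the values of \(\beta\) and \(\tau\) are illustrative, as Fig. 4 shows their effect on the event stream.

```python
import numpy as np

def bitplanes_to_events(photon_cube, beta=0.98, tau=0.1):
    """Emulated event stream: per-pixel EMA of the bit-planes, with events
    fired when Eq. (8) is met for h(x) = x; the reference is reset at
    pixels that fired."""
    T, H, W = photon_cube.shape
    mu = np.zeros((H, W))
    mu_ref = np.zeros((H, W))
    events = []  # tuples (t, y, x, polarity)
    for t in range(T):
        mu = (1.0 - beta) * photon_cube[t] + beta * mu
        diff = mu - mu_ref
        fired = np.abs(diff) > tau
        ys, xs = np.nonzero(fired)
        events.extend((t, y, x, int(np.sign(diff[y, x])))
                      for y, x in zip(ys, xs))
        mu_ref[fired] = mu[fired]
    return events
```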
### Motion Projections
Having described the emulation of cameras that capture coded exposures and temporal contrasts, we now shift our attention to cameras that emulate sensor motion during exposure, _viz._ motion cameras. We describe two useful trajectories when emulating motion cameras using Eq. (6).
**Linear trajectory.** The simplest sensor trajectory involves linear motion, where \(\mathbf{r}(t)=\left(bt+c\right)\mathbf{\hat{p}}\) for some constants \(b,c\in\mathbb{R}\) and unit vector \(\mathbf{\hat{p}}\). As Fig. 5 _(top row)_ shows, this can change the scene's frame of reference: making moving objects appear stationary and vice-versa.
**Motion-invariant parabolic projection.** If motion is along \(\mathbf{\hat{p}}\), parabolic integration produces a motion-invariant image [33]--all objects, irrespective of their velocity, are blurred by the same point spread function (PSF), up to a linear shift. Thus, a deblurred parabolic capture produces a sharp image of all velocity groups (Fig. 5 _(bottom row)_). The parabolic trajectory is given by \(\mathbf{r}(t)=(at^{2}+bt+c)\mathbf{\hat{p}}\). We choose \(a\) based on the maximum object velocity and \(b,c\)
Figure 5: **Motion projections.**_(top)_ Integrating along a linear trajectory in the photon-cube changes the apparent image-space velocity of scene objects. Details are seen for _(top left)_ the case when static, and _(top right)_ the metallic tape when the sensor translates along the \(x\)-axis. _(bottom)_ A parabolic integration trajectory results in a motion-invariant image, resulting in similar blur kernels for all objects. _(bottom right)_ Deblurring with the resultant shift-invariant point spread function (shown in _inset_) produces a sharp image.
Figure 6: **Motion stack.** Computing multiple linear projections with different trajectories can produce a stack of images where objects with matching velocities are sharp. Here, we show a traffic scene involving four cars that have four different velocities. By suitably altering the slope of the linear trajectory, we can produce images where only one of the cars appears sharp at a time. We indicate the slope of the trajectories chosen and the objects that are “in-focus”.
so the parabola's vertex lies at \(T/2\). We readily obtain the PSF by applying the parabolic integration to a delta input. Upon deconvolution using the PSF, a parabolic projection provides the optimal SNR for a blur-free image from a single capture when only the direction of velocity is known.
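A sketch of the parabolic trajectory and its shift-invariant PSF follows; the parameterization of \(a\) and \(b\) from the maximum image-space speed is our reading of the text, and `width` must simply be large enough to contain all shifts.

```python
import numpy as np

def parabolic_trajectory(T, v_max):
    """Pixel shifts r(t) = a t^2 + b t with the vertex at t = T/2; the
    slope dr/dt sweeps from -v_max to +v_max over the exposure."""
    a = v_max / T
    b = -a * T
    t = np.arange(T)
    return np.round(a * t ** 2 + b * t).astype(int)

def parabolic_psf(T, v_max, width):
    """PSF of the parabolic projection, obtained by applying the same
    integration to a delta input (a single bright pixel)."""
    r = parabolic_trajectory(T, v_max)
    shifts = r - r.min()
    assert shifts.max() < width, "width too small to contain the shifts"
    psf = np.zeros(width)
    for s in shifts:
        psf[s] += 1.0
    return psf / psf.sum()
```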
**Ensembling linear projections.** Finally, we leverage the flexibility of photon-cubes to compute multiple linear projections, as seen in Fig. 6. This produces a stack of images where one velocity group is motion-blur free at a time--or a 'motion stack', analogous to a focal stack. This novel construct can be used to compensate for motion by blending stack images using cues such as blur orientation or optical flow.
## 6 Hardware and Experimental Results
We design a range of experiments to demonstrate the versatility of photon-cube projections: both when computations occur after readout (Secs. 6.1 and 6.2), and when they are performed near-sensor on-chip (Sec. 6.3). All photon-cubes were acquired using the SwissSPAD2 array [19], operated using one of two sub-arrays, each having \(512\times 256\) pixels, and at a frame-rate of \(96.8\) kHz. For the on-chip experiments, we use the UltraPhase compute architecture to interface with photon-cubes acquired by the SwissSPAD2.
### SoDaCam Capabilities
**High-speed compressive imaging.** We reconstruct \(80\) frames from compressive snapshots that are emulated at \(25\) Hz, resulting in a \(2000\) FPS video. We decode compressive snapshots using a plug-and-play (PnP) approach, PnP-FastDVDNet [26]. As Fig. 7 shows, it is challenging to recover a large number of frames from a single compressive measurement. Using the proposed multi-bucket scheme significantly improves the quality of video reconstruction.
Figure 8: **Deblurring of traffic scenes using motion projections.** Linear projections can recover details of moving objects if their velocity is known. When only the motion direction is known (e.g., road’s orientation), a sharp image can be obtained by either deblurring a parabolic projection or by blending multiple randomly-sampled linear projections. We quantitatively compare against the compute- and bandwidth-expensive Quanta Burst Photography [11], based on PSNR and LPIPS [76].
Figure 7: **High-speed videography at \(2000\) FPS** of a tennis ball dropped into a bowl of water, from a \(25\) Hz readout. The conventional capture provides a visualization of the scene dynamics. It is challenging to reconstruct a large number of frames from a single compressive snapshot. Multi-bucket captures recover frames with significantly greater detail, such as the crown of water surrounding the ball. We include more sequences (e.g., a bursting balloon) in the supplementary material.
While multi-bucket captures require more bandwidth, this can be partially amortized by coding only dynamic regions, which we show in Suppl. Sec. 1.
**Motion projections on a traffic scene.** Fig. 8 shows two traffic scenes captured using a \(50\) mm focal length lens and at \(30\) Hz emulation. When object velocity is known, a linear projection can make moving objects appear stationary. If only the velocity direction is known (e.g., the road's orientation in Fig. 8), a parabolic projection provides a sharp reconstruction of all objects. We deblur parabolic captures using PnP-DnCNN [75]. We achieve a further improvement by randomly sampling \(8\) linear projections along the velocity direction and blending them using the optical flow predicted by RAFT [17] between two short exposures.
**Low-light event imaging.** Fig. 9 compares event-image visualizations from the SPAD with those of a state-of-the-art commercial event sensor (Prophesee EVK4) across various light levels, with an accumulation period of \(33\) ms. For a fair comparison, we bin the Prophesee's events in blocks of \(2\times 2\) pixels and use a smaller aperture to account for the lower fill factor of the SPAD. We tuned the event-generation parameters (contrast threshold, integrator decay rate) of both cameras at each light level. Low light induces blur and deteriorates the Prophesee's event stream. In contrast, SPAD-events continue to capture temporal gradients, due to the SPAD's low-light capabilities and its brightness-encoding response curve. We include an ablative study of brightness-encoding functions in Suppl. Sec. 2.
Our observations are in concurrence with recent works that examine the low-light performance of event cameras [19, 27], and show that SPAD-events can provide neuromorphic vision in these challenging-SNR scenarios.
### Comparison to High-Speed Cameras
Recall, as previously discussed in Sec. 4, that read-noise limits the per-frame SNR of high-speed cameras. To demonstrate this limitation, we compute projections using the \(4\) kHz acquisition of the Photron Infinicam, a conventional high-speed camera, at a resolution of \(1246\times 240\) pixels. We operate the SwissSPAD2 and the Infinicam under ambient light conditions using the same lens specifications. As
Figure 10: **Comparison against conventional high-speed acquisition at \(4000\) Hz.** _(top)_ SPAD projections recover a \(16\times\) compressive video and an event image of a spinning roulette wheel. _(middle)_ Read-noise corrupts the incident flux in the Infinicam high-speed camera, removing details in frames which are compressed on-the-fly. _(bottom)_ Although using a larger aperture to admit more light recovers some detail, noise and compression artifacts still persist.
Figure 9: **Comparison to a state-of-the-art event camera.** SPAD-events can capture temporal gradients even when the light-level is reduced by \(500\times\), by benefiting from their single-photon sensitivity and bounded brightness response curve. In contrast, low light induces blur and deteriorates the Prophesee's event stream. As a measure of the light-level, we report the PPP (photons per pixel) averaged across bit-planes and a light meter's reading at the sensor location.
Fig. 10 shows, read noise corrupts the incident signal in the Infinicam and makes it impossible to derive any useful projections. The read noise could be averaged out to some extent if the Infinicam did not perform compression on-the-fly, but compression is central to the camera's operation and enables readout over USB. Using a larger aperture to admit more light improves the quality of computed projections, but the video reconstruction and event image remain considerably worse than the corresponding outputs of the SPAD.
### Bandwidth and Power Implications
While Sec. 6.1 has demonstrated the capabilities of photon-cube projections, we now show that our projections can also be obtained in a bandwidth-efficient manner via near-sensor computations. We implement photon-cube projections on UltraPhase (Fig. 11 _(left)_), a novel compute architecture designed for single-photon imaging. UltraPhase consists of \(3\times 6\) processing cores, each of which interfaces with \(4\times 4\) pixels, and can be 3D stacked beneath a SPAD array. We include visualizations and programming details of a few example projections in Suppl. Sec. 5.
We measure the readout and power consumption of UltraPhase when computing projections on \(2500\) bit-planes of the falling die sequence (Fig. 1). The projections include: VCS with \(16\) random binary masks, an event camera, a linear projection, and a combination of the three. We output projections at \(12\)-bit depth and calculate metrics based on the clock cycles required for both compute and readout. As seen in Fig. 11 _(right)_, computing projections on-chip dramatically reduces sensor readout and power consumption as compared to reading out the photon-cube. Finally, similar to existing event cameras, SPAD-events have a resource footprint that reflects the underlying scene dynamics.
In summary, our on-chip experiments show that performing computations near-sensor can increase the viability of single-photon imaging in resource-constrained settings.
## 7 Discussion and Future Outlook
SoDaCam provides a realization of reinterpretable software-defined cameras [2, 3, 20, 34, 51] at the fine temporal resolution of SPAD-acquired photon-cubes. The proposed computations, or photon-cube projections, can match, and in some cases surpass, the capabilities of existing imaging systems. The software-defined nature of photon-cube projections provides functionalities that may be difficult to achieve in conventional sensors. These projections can reduce the readout and power consumption of SPAD arrays and potentially spur widespread adoption of single-photon imaging in the consumer domain. Finally, future chip-to-chip communication standards may also make it feasible to compute projections on a camera image signal processor.
**Adding color to SoDaCam.** One way to add color is by overlaying color filter arrays (CFAs) and performing demosaicing on the computed photon-cube projection: depending on the projection, demosaicing could be relatively simple or more complex. As a reference, Bayer CFAs have been considered in the context of both video compressive sensing [26] and event cameras [64]. Incorporating CFAs with motion projections requires careful consideration, e.g., avoiding integrating across pixel locations of differing color.
**Future outlook on SPAD characteristics.** A key SPAD characteristic that determines several properties of emulated cameras is the frame rate. While no fundamental limitations prevent SPADs from being operated at the frame rates utilized in this work (\(\sim\)\(100\) kHz), sensor readout and power constraints can preclude high speeds, especially in high-resolution SPAD arrays. Photon-cube projections can enable future large-format SPADs to preserve high-speed information with modest resource requirements.
**A platform for comparing cameras.** Comparing imaging modalities can be quite challenging, since hardware realizations of sensors can differ in numerous aspects, such as their quantum efficiency, fill factor, pixel pitch, and array resolution. By emulating their imaging models, SoDaCam can serve as a platform for hardware-agnostic comparisons; for instance, determining operating conditions where one imaging modality is advantageous over another.
**A Cambrian explosion of new cameras.** Besides comparing cameras, by virtue of being software-defined, SoDaCam can also make it significantly easier to prototype and deploy new unconventional imaging models, and even facilitate sensor-in-the-loop optimization [44, 47, 63] by tailoring photon-cube projections for downstream computer-vision tasks. This is an exciting future line of research.
Figure 11: **Power and bandwidth requirements** when computing photon-cube projections on UltraPhase [1] _(left)_, a recent compute architecture designed for single-photon imaging, at \(40\) Hz readout. _(right)_ Our projections act as a compression scheme for photon-cubes, resulting in dramatically reduced sensor readout and power consumption.
## References
* [1]P. A. Al-A.
High-speed 3D sensing via hybrid-mode imaging and guided upsampling. _Optica_, 7(10):1253-1260, 2020.
* Harris et al. [1988] C. Harris, M. Stephens, et al. A combined corner and edge detector. In _Alvey vision conference_, volume 15, pages 10-5244. Citeseer, 1988.
* Hidalgo-Carrio et al. [2022] J. Hidalgo-Carrio, G. Gallego, and D. Scaramuzza. Event-aided direct sparse odometry. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5781-5790, June 2022.
* Horn and Schunck [1981] B. K. Horn and B. G. Schunck. Determining optical flow. _Artificial intelligence_, 17(1-3):185-203, 1981.
* Hu et al. [2021] Y. Hu, S.-C. Liu, and T. Delbruck. v2e: From video frames to realistic DVS events. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_, pages 1312-1321, June 2021.
* Igual [2019] J. Igual. Photographic noise performance measures based on raw files analysis of consumer cameras. _Electronics_, 8(11):1284, 2019.
* Ingle et al. [2019] A. Ingle, A. Velten, and M. Gupta. High Flux Passive Imaging With Single-Photon Sensors. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* Ingle et al. [2021] A. Ingle, T. Seets, M. Buttafava, S. Gupta, A. Tosi, M. Gupta, and A. Velten. Passive inter-photon imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2021.
* Iwabuchi et al. [2021] K. Iwabuchi, Y. Kameda, and T. Hamamoto. Image quality improvements based on motion-based deblurring for single-photon imaging. _IEEE Access_, 9:30080-30094, 2021. doi: 10.1109/ACCESS.2021.3059293.
* Jiang et al. [2021] Y. Jiang, I. Choi, J. Jiang, and J. Gu. HDR video reconstruction with tri-exposure quad-layer sensors. _arXiv preprint arXiv:2103.10982_, 2021.
* Levin et al. [2008] A. Levin, P. Sand, T. S. Cho, F. Durand, and W. T. Freeman. Motion-invariant photography. _ACM Transactions on Graphics (TOG)_, 27(3):1-9, 2008.
* Levoy and Hanrahan [1996] M. Levoy and P. Hanrahan. Light field rendering. In _Proceedings of the 23rd annual conference on Computer graphics and interactive techniques_, pages 31-42, 1996.
* Li et al. [2020] Y. Li, M. Qi, R. Gulve, M. Wei, R. Genov, K. N. Kutulakos, and W. Heidrich. End-to-end video compressive sensing using anderson-accelerated unrolled networks. In _2020 IEEE International Conference on Computational Photography (ICCP)_, pages 1-12, 2020. doi: 10.1109/ICCP48838.2020.9105237.
* Lichtsteiner [2003] P. Lichtsteiner. 64x64 event-driven logarithmic temporal derivative silicon retina. In _Program 2003 IEEE Workshop on CCD and AIS_, 2003.
* Liu et al. [2022] Y. Liu, F. Gutierrez-Barragan, A. Ingle, M. Gupta, and A. Velten. Single-photon camera guided extreme dynamic range imaging. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, pages 1575-1585, January 2022.
* Llull et al. [2013] P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady. Coded aperture compressive temporal imaging. _Opt. Express_, 21(9):10526-10545, May 2013. doi: 10.1364/OE.21.010526. URL [https://opg.optica.org/oe/abstract.cfm?URI=oe-21-9-10526](https://opg.optica.org/oe/abstract.cfm?URI=oe-21-9-10526).
* Lucas and Kanade [1981] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In _IJCAI'81: 7th international joint conference on Artificial intelligence_, volume 2, pages 674-679, 1981.
* Ma et al. [2017] J. Ma, S. Masoodian, D. A. Starkey, and E. R. Fossum. Photon-number-resolving megapixel image sensor at room temperature without avalanche gain. _Optica_, 4(12):1474-1481, Dec 2017. doi: 10.1364/OPTICA.4.001474. URL [http://www.osapublishing.org/optica/abstract.cfm?URI=optica-4-12-1474](http://www.osapublishing.org/optica/abstract.cfm?URI=optica-4-12-1474).
* Ma et al. [2020] S. Ma, S. Gupta, A. C. Ulku, C. Bruschini, E. Charbon, and M. Gupta. Quanta burst photography. _ACM Transactions on Graphics_, 39(4):1-16, July 2020. ISSN 0730-0301, 1557-7368.
* Ma et al. [2023] S. Ma, P. Mos, E. Charbon, and M. Gupta. Burst vision using single-photon cameras. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, pages 5375-5385, January 2023.
* Mann and Picard [1994] S. Mann and R. Picard. Being 'undigital' with digital cameras. _MIT Media Lab Perceptual_, 1:2, 1994.
* Martel et al. [2020] J. N. Martel, L. K. Mueller, S. J. Carey, P. Dudek, and G. Wetzstein. Neural sensors: Learning pixel exposures for HDR imaging and video compressive sensing with programmable sensors. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 42(7):1642-1653, 2020.
* Martel et al. [2016] J. N. P. Martel, L. K. Muller, S. J. Carey, and P. Dudek. Parallel HDR tone mapping and auto-focus on a cellular processor array vision chip. In _2016 IEEE International Symposium on Circuits and Systems (ISCAS)_, pages 1430-1433, 2016. doi: 10.1109/ISCAS.2016.7527519.
* Martel et al. [2017] J. N. P. Martel, L. K. Muller, S. J. Carey, and P. Dudek. High-speed depth from focus on a programmable vision chip using a focus tunable lens. In _2017 IEEE International Symposium on Circuits and Systems (ISCAS)_, pages 1-4, 2017. doi: 10.1109/ISCAS.2017.8050548.
* Metzler et al. [2020] C. A. Metzler, H. Ikoma, Y. Peng, and G. Wetzstein. Deep optics for single-shot high-dynamic-range imaging. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020.
* [48] K. Morimoto, A. Ardelean, M.-L. Wu, A. C. Ulku, I. M. Antolovic, C. Bruschini, and E. Charbon. Megapixel time-gated SPAD image sensor for 2D and 3D imaging applications. _Optica_, 7(4):346-354, Apr. 2020.
* [49] K. Morimoto, J. Iwata, M. Shinohara, H. Sekine, A. Abdelghafar, H. Tsuchiya, Y. Kuroda, K. Tojima, W. Endo, Y. Maehashi, Y. Ota, T. Sasago, S. Maekawa, S. Hikosaka, T. Kanou, A. Kato, T. Tezuka, S. Yoshizaki, T. Ogawa, K. Uehira, A. Ehara, F. Inui, Y. Matsuno, K. Sakurai, and T. Ichikawa. 3.2 megapixel 3D-stacked charge focusing SPAD for low-light imaging and depth sensing. In _2021 IEEE International Electron Devices Meeting (IEDM)_, pages 20.2.1-20.2.4, 2021. doi: 10.1109/IEDM19574.2021.9720605.
* [50] S. Namiki, S. Sato, Y. Kameda, and T. Hamamoto. Imaging method using multi-threshold pattern for photon detection of quanta image sensor. In _International Workshop on Advanced Imaging Technology (IWAIT) 2022_, volume 12177, page 1217702. SPIE, 2022.
* [51] S. K. Nayar, V. Branzoi, and T. E. Boult. Programmable imaging: Towards a flexible camera. _International Journal of Computer Vision_, 70:7-22, 2006.
* [52] C. Posch, D. Matolin, and R. Wohlgenannt. A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS. _IEEE Journal of Solid-State Circuits_, 46(1):259-275, 2011. doi: 10.1109/JSSC.2010.2085952.
* [53] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: motion deblurring using fluttered shutter. In _Acm Siggraph 2006 Papers_, pages 795-804. 2006.
* [54] H. Rebecq, R. Ranftl, V. Koltun, and D. Scaramuzza. Events-to-video: Bringing modern computer vision to event cameras. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [55] D. Reddy, A. Veeraraghavan, and R. Chellappa. P2C2: Programmable pixel compressive camera for high speed imaging. In _CVPR 2011_, pages 329-336, 2011. doi: 10.1109/CVPR.2011.5995542.
* [56] X. Ren, P. W. Connolly, A. Halimi, Y. Altmann, S. McLaughlin, I. Gyongy, R. K. Henderson, and G. S. Buller. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor. _Optics express_, 26(5):5541-5557, 2018.
* [57] C. Scheerlinck, H. Rebecq, D. Gehrig, N. Barnes, R. Mahony, and D. Scaramuzza. Fast image reconstruction with an event camera. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, March 2020.
* [58] T. Seets, A. Ingle, M. Laurenzis, and A. Velten. Motion adaptive deblurring with single-photon cameras. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, pages 1945-1954, January 2021.
* [59] M.-W. Seo, Y. Shirakawa, Y. Masuda, Y. Kawata, K. Kagawa, K. Yasutomi, and S. Kawahito. 4.3 a programmable sub-nanosecond time-gated 4-tap lock-in pixel cmos image sensor for real-time fluorescence lifetime imaging microscopy. In _2017 IEEE International Solid-State Circuits Conference (ISSCC)_, pages 70-71, 2017. doi: 10.1109/ISSCC.2017.7870265.
* [60] T. Serrano-Gotarredona and B. Linares-Barranco. A 128 \(\times\) 128 1.5% contrast sensitivity 0.9% FPN 3 \(\mu\)s latency 4 mW asynchronous frame-free dynamic vision sensor using transimpedance preamplifiers. _IEEE Journal of Solid-State Circuits_, 48(3):827-838, 2013.
* [61] P. Shedligeri, A. S, and K. Mitra. A unified framework for compressive video recovery from coded exposure techniques. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_, pages 1600-1609, January 2021.
* [62] C. Shi, N. Song, W. Li, Y. Li, B. Wei, H. Liu, and J. Jin. A review of event-based indoor positioning and navigation. 2022.
* [63] V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein. End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging. _ACM Transactions on Graphics (TOG)_, 37(4):114, 2018.
* [64] G. Taverni, D. Paul Moeys, C. Li, C. Cavaco, V. Motsnyi, D. San Segundo Bello, and T. Delbruck. Front and back illuminated dynamic and active pixel vision sensors comparison. _IEEE Transactions on Circuits and Systems II: Express Briefs_, 65(5):677-681, 2018. doi: 10.1109/TCSII.2018.2824899.
* [65] Z. Teed and J. Deng. Raft: Recurrent all-pairs field transforms for optical flow. In _European Conference on Computer Vision_, 2020.
* [66] S. Tulyakov, D. Gehrig, S. Georgoulis, J. Erbach, M. Gehrig, Y. Li, and D. Scaramuzza. Time lens: Event-based video frame interpolation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 16155-16164, June 2021.
* [67] A. C. Ulku, C. Bruschini, I. M. Antolovic, Y. Kuo, R. Ankri, S. Weiss, X. Michalet, and E. Charbon. A 512 x 512 SPAD Image Sensor With Integrated Gating for Widefield FLIM. _IEEE Journal of Selected Topics in Quantum Electronics_, 25(1):1-12, Jan. 2019. ISSN 1077-260X, 1558-4542. doi: 10.1109/JSTQE.2018.2867439.
* [68] G. Wan, X. Li, G. Agranov, M. Levoy, and M. Horowitz. CMOS image sensors with multi-bucket pixels for computational photography. _IEEE Journal of Solid-State Circuits_, 47(4):1031-1042, 2012. doi: 10.1109/JSSC.2012.2185189.
* [69] M. Wei, N. Sarhangejad, Z. Xia, N. Gusev, N. Katic, R. Genov, and K. N. Kutulakos. Coded two-bucket cameras for computer vision. In _Proceedings of the European Conference on Computer Vision (ECCV)_, September 2018.
* [70] M. White, S. Ghajari, T. Zhang, A. Dave, A. Veeraraghavan, and A. Molnar. A differential SPAD array architecture in 0.18 \(\mu\)m CMOS for HDR imaging. In _2022 IEEE International Symposium on Circuits and Systems (ISCAS)_, pages 292-296, 2022. doi: 10.1109/ISCAS48785.2022.9937558.
* [71] T. Yamazaki, H. Katayama, S. Uehara, A. Nose, M. Kobayashi, S. Shida, M. Odahara, K. Takamiya, Y. Hisamatsu, S. Matsumoto, L. Miyashita, Y. Watanabe, T. Izawa, Y. Muramatsu, and M. Ishikawa. 4.9 a 1ms high-speed vision chip with 3D-stacked 140GOPS column-parallel PEs for spatio-temporal image processing. In _2017 IEEE International Solid-State Circuits Conference (ISSCC)_, pages 82-83, 2017. doi: 10.1109/ISSCC.2017.7870271.
* [72] F. Yang, Y. M. Lu, L. Sbaiz, and M. Vetterli. Bits from photons: Oversampled image acquisition using binary poisson statistics. _IEEE Transactions on Image Processing_, 21(4):1421-1436, 2012. doi: 10.1109/TIP.2011.2179306.
* [73] X. Yuan, Y. Liu, J. Suo, F. Durand, and Q. Dai. Plug-and-play algorithms for video snapshot compressive imaging. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(10):7093-7111, 2022. doi: 10.1109/TPAMI.2021.3099035.
* [74] J. Zhang, X. Yang, Y. Fu, X. Wei, B. Yin, and B. Dong. Object tracking by jointly exploiting frame and event domain. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 13043-13052, 2021.
* [75] K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, and R. Timofte. Plug-and-play image restoration with deep denoiser prior. _arXiv preprint_, 2020.
* [76] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2018.
* [77] T. Zhang, M. J. White, A. Dave, S. Ghajari, A. Raghuram, A. C. Molnar, and A. Veeraraghavan. First arrival differential LiDAR. In _2022 IEEE International Conference on Computational Photography (ICCP)_, pages 1-12, 2022. doi: 10.1109/ICCP54855.2022.9887683.
* [78] W. Zhang and L. Lin. Light field flow estimation based on occlusion detection. _Journal of Computer and Communications_, 5(3):1-9, 2017.
## 1 Video Compressive Sensing
In this supplementary note, we provide the pseudocode that describes the emulation of two- and multi-bucket cameras and mathematically describe their multiplexing masks. We also specify algorithmic details for video recovery from compressive measurements.
### Multi-Bucket Capture Pseudocode
Algorithm 1 describes the emulation of \(J\)-bucket captures, denoted as \(\mathcal{I}_{\text{coded}}^{j}(\mathbf{x})\), from the photon-cube \(B_{t}(\mathbf{x})\) using multiplexing codes \(C_{t}^{j}(\mathbf{x})\), where \(1\leq j\leq J\). Both single compressive snapshots (or one-bucket captures) and two-bucket captures can be emulated as special cases of Algorithm 1, with \(J=1\) and \(J=2\) respectively.
```
Require: photon-cube B_t(x); number of buckets J; multiplexing code for the j-th bucket C_t^j(x), 1 ≤ j ≤ J;
         pixel locations X; total bit-planes T
Ensure:  multiplexed captures I_coded^j(x)

function MultiBucketEmulation(B_t(x), C_t^j(x))
    I_coded^j(x) ← 0,  ∀ x ∈ X, 1 ≤ j ≤ J
    for x ∈ X, 1 ≤ j ≤ J do
        for 1 ≤ t ≤ T do
            I_coded^j(x) ← I_coded^j(x) + B_t(x) · C_t^j(x)
        end for
    end for
    return I_coded^j(x)
end function
```
**Algorithm 1** Multi-Bucket Capture Emulation
**Mask sequences for video compressive sensing.** For a single compressive capture (\(J=1\)), a sequence of binary random codes is used, i.e., \(C_{t}^{1}(\mathbf{x})=1\) with probability \(0.5\). For a two-bucket capture, we use
\[C_{t}^{2}(\mathbf{x})=1-C_{t}^{1}(\mathbf{x}),\]
which is the complementary mask sequence. For \(J>2\), at each timestep \(t\) and pixel location \(\mathbf{x}\), the active bucket is chosen at random:
\[C_{t}^{j}(\mathbf{x})\gets 1,\,\,j\sim\text{Uniform}(1,J).\]
This is a direct generalization of the masking used for both one- and two-bucket captures.
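For concreteness, here is a NumPy sketch of these masking strategies, together with a vectorized form of Algorithm 1; the function names, array shapes, and RNG seed are our own illustrative choices.

```
import numpy as np

def make_codes(T, H, W, J, seed=0):
    """Multiplexing codes C_t^j(x) for J buckets, as described above."""
    rng = np.random.default_rng(seed)
    codes = np.zeros((J, T, H, W), dtype=np.uint8)
    if J == 1:
        codes[0] = rng.integers(0, 2, size=(T, H, W))   # C^1: Bernoulli(0.5) masks
    elif J == 2:
        codes[0] = rng.integers(0, 2, size=(T, H, W))
        codes[1] = 1 - codes[0]                         # complementary mask sequence
    else:
        active = rng.integers(0, J, size=(T, H, W))     # one active bucket, j ~ Uniform
        for j in range(J):
            codes[j] = (active == j)
    return codes

def multi_bucket_capture(photon_cube, codes):
    """Vectorized Algorithm 1: I^j(x) = sum_t B_t(x) * C_t^j(x)."""
    return (codes * photon_cube[None]).sum(axis=1)      # shape (J, H, W)
```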
### Decoding Video Compressive Captures
A variety of decoding algorithms have been developed for video compressive sensing, including: (a) optimization frameworks tailored to the forward model of Eq. (4) and with additional regularization [10, 24], (b) end-to-end deep-learning methods that utilize a large corpus of training data [7, 9, 14, 22, 23], and (c) hybrid, plug-and-play (PnP) approaches that utilize an optimization framework but perform one or more steps using a deep denoiser [5, 20, 26]. We opt to use the PnP approach featuring an ADMM formulation [25] and a deep video denoiser (FastDVDNet [16]) in this work. We justify our choice by noting that PnP-ADMM can produce high-quality reconstructions, comparable to end-to-end counterparts while using an off-the-shelf denoiser--precluding the need to train separate models for various masking strategies.
For computational efficiency, PnP-ADMM requires the Gram matrix of the resulting linear forward model of Eq. (4) to be efficiently invertible. The multi-bucket scheme described above adheres to this consideration.
### Constant-Bandwidth Comparison
We now present a comparison of single-, two- and multi-bucket compressive captures when the readout rate is fixed. As Supp. Fig. 1 shows, multi-bucket captures provide higher fidelity reconstructions even when bandwidth is fixed. Furthermore, their bandwidth cost can be amortized, to some extent, by coding only dynamic regions--we describe this next.
### Coding Only Dynamic Regions: Mitigating the Bandwidth Cost of Multi-Bucket Captures
We observe that multi-bucket captures record redundant information in static regions of the scene, since each pixel in a static region has the same expected value under random binary modulation. Hence, we propose coding only dynamic regions--the dynamic region-of-interest (RoI) can be determined by masking pixels whose coded exposures deviate significantly from one another. As seen in Supp. Fig. 2, the dynamic content may be just \(25\)% of the image area, which provides significant scope for bandwidth savings. We observe that we can code just \(25\)% of the total pixels, along with additional compressive measurements, without a perceptible drop in visual quality, which yields an overall bandwidth requirement of \(1.45\times\), i.e., under twice the bandwidth cost of a single compressive measurement.
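A minimal sketch of this dynamic-region detection, assuming the \(J\) bucket captures are stacked as a `(J, H, W)` array; the percentile threshold follows the \(75^{\text{th}}\)-percentile choice mentioned in Supp. Fig. 2, and everything else is illustrative.

```
import numpy as np

def dynamic_roi(bucket_captures, percentile=75):
    """Flag pixels whose coded exposures deviate strongly from one another."""
    spread = bucket_captures.std(axis=0)                 # per-pixel deviation across buckets
    return spread > np.percentile(spread, percentile)    # boolean mask of dynamic pixels

# Coded exposures are read out only inside the mask; static regions are covered
# by a long exposure obtained by summing the bucket captures.
```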
### Results on More Sequences
Additional results are shown in Supp. Fig. 3.
Supplementary Figure 1: **Fixed readout comparison of compressive video schemes.** We compare video reconstruction obtained from a single compressive snapshot, two-bucket capture, four-bucket capture and eight-bucket capture while holding readout constant—we achieve this by commensurately increasing readout rates, for instance, by reading out single compressive snapshots at \(200\) Hz. We indicate the readout rate here in Hertz (Hz) and the frame-rate of the reconstructed video in FPS. Clearly, multi-bucket captures provide better reconstruction results than a burst of independently multiplexed captures.
Supplementary Figure 2: **Coding dynamic regions can reduce readout of multi-bucket captures.**_(left column)_ Dynamic regions are detected by computing the standard deviation of the coded exposures and thresholding them appropriately (e.g., by the \(75^{\text{th}}\) percentile)--we show the dynamic regions in white here. Coded exposures are transmitted only in the dynamic regions. For the static regions, we simply use a long exposure--by adding multi-bucket captures. _(right column)_ We observe that readout-bandwidth can be reduced to \(1.75\times\) from \(4\times\) in the case of a four-bucket capture with no perceptual degradation of reconstruction quality. Bandwidth is provided here as a multiple of the readout of a single compressive capture.
Supplementary Figure 3: **Results on additional sequences.** We use Hertz (Hz) to indicate the rate of emulation and frames-per-second (FPS) to indicate the frame-rate of the reconstructed video. Frame numbers are indicated in yellow font.
## 2 Event Cameras
### Event-Generation Pseudocode
We provide the pseudocode for emulating events from photon-cubes in Algorithm 2. The contrast threshold \(\tau\) and exponential smoothing factor \(\beta\) are the two parameters that determine the characteristics of the resulting event stream, such as its event rate (number of events per second). We use an initial time-interval \(T_{0}\) (typically \(80\)-\(100\) bit-planes) to initialize the reference moving average, with \(T_{0}\) being much smaller than \(T\). The result of this pseudocode is an event-cube, \(E_{t}(\mathbf{x})\), which is a sparse spatio-temporal grid of event polarities--positive spikes are denoted by \(1\) and negative spikes by \(-1\). From the emulated event-cube, other event representations can be computed such as: an event stream, \(\{(\mathbf{x},t,p)\}\), where \(p\in\{-1,1\}\) indicates the polarity of the event; a frame of accumulated events [12] (seen in Figs. 4 and 9); and a voxel grid representation [27], where events are binned into a few temporal bins (shown in Supp. Fig. 4 _(top left)_ using \(3\) temporal bins).
```
Require: photon-cube B_t(x); contrast threshold τ; exponential smoothing factor β;
         pixel locations X; initial time-interval T_0 for the reference moving average;
         total bit-planes T
Ensure:  event-cube E_t(x) that describes the spatio-temporal spikes

function EventCameraEmulation(B_t(x), τ, β, T_0)
    E_t(x) ← 0,  ∀ t, ∀ x
    for x ∈ X do
        μ_ref(x) ← 0                         ▷ reference moving average
        μ_0(x) ← 0                           ▷ current moving average
        for 1 ≤ t ≤ T_0 do
            μ_ref(x) ← β μ_ref(x) + (1 − β) B_t(x)
        end for
        for T_0 ≤ t ≤ T do
            μ_t(x) ← β μ_{t−1}(x) + (1 − β) B_t(x)
            if |μ_t(x) − μ_ref(x)| > τ then
                E_t(x) ← sign(μ_t(x) − μ_ref(x))
                μ_ref(x) ← μ_ref(x) + τ · sign(μ_t(x) − μ_ref(x))
            end if
        end for
    end for
    return E_t(x)
end function
```
**Algorithm 2** Event Camera Emulation
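A direct NumPy transcription of Algorithm 2 (a sketch: we warm-start the running average at the reference value, a detail the pseudocode leaves implicit):

```
import numpy as np

def emulate_events(photon_cube, tau=0.4, beta=0.95, T0=80):
    """Emulate an event camera from a (T, H, W) photon-cube of bit-planes."""
    T = photon_cube.shape[0]
    mu_ref = np.zeros(photon_cube.shape[1:])            # reference moving average
    events = np.zeros(photon_cube.shape, dtype=np.int8)
    for t in range(T0):                                 # initialization interval
        mu_ref = beta * mu_ref + (1 - beta) * photon_cube[t]
    mu = mu_ref.copy()                                  # warm-start the running average
    for t in range(T0, T):
        mu = beta * mu + (1 - beta) * photon_cube[t]
        diff = mu - mu_ref
        fired = np.abs(diff) > tau
        events[t][fired] = np.sign(diff[fired])         # polarity in {-1, +1}
        mu_ref[fired] += tau * np.sign(diff[fired])     # reset reference where fired
    return events                                       # sparse event-cube E_t(x)
```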
### Compatibility of SPAD-Events with Existing Event-Vision Algorithms
We now provide examples of downstream algorithms applied to SPAD-events, which show the compatibility of the emulated event streams with existing event-vision algorithms. Supplementary Figure 4 shows three downstream algorithms with SPAD-events as their input: Contrast Maximization [8], which generates a warped image of events that has sharp edges (_top right_); E2VID [13], which estimates intensity frames from an event stream (_bottom left_); and DCEIFlow [21], which computes dense optical flow using intensity frames and aligned events (_bottom right_). Both E2VID and DCEIFlow use a voxel grid representation of events as their inputs. We include the visualization of a voxel grid representation in Supp. Fig. 4 _(top left)_. All event streams were emulated using \(3000\) bit-planes of photon-cubes acquired at \(96.8\) kHz, and using \(\beta=0.95\) and \(\tau=0.4\) as emulation parameters. We note that the performance of these algorithms can be improved by finetuning pre-trained learning-based models on a dataset of SPAD-events.
### Ablation of Brightness-Encoding Functions
Our event emulation scheme (Algorithm 2) relies on the SPAD's response curve to encode scene brightness, which is a non-linear and non-saturating response of the form
\[1-\exp(-\alpha\Phi(\mathbf{x},t)),\]
where \(\alpha=\eta t_{\text{exp}}\) and assuming negligible dark count rate (DCR). Current event cameras typically use a logarithmic response to encode scene brightness. This can also be utilized to emulate events from photon-cubes by setting \(h\) (as described in Eq. (8)) to be the log-MLE function:
\[h(\mu)=\log\left(-\frac{\log(1-\mu)}{\eta t_{\text{exp}}}\right).\]
However, a log-response suffers from underflow issues, particularly in low-light scenarios, as seen in Supp. Fig. 5.
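To make the comparison concrete, here is a small sketch of the two brightness-encoding functions; the values of \(\eta\) and \(t_{\text{exp}}\) are placeholder constants of our choosing.

```
import numpy as np

eta, t_exp = 0.4, 1 / 96800        # placeholder detection efficiency and bit-plane exposure

def flux_mle(mu):                  # inverts the SPAD response 1 - exp(-alpha * Phi)
    return -np.log1p(-mu) / (eta * t_exp)

def log_mle(mu):                   # logarithmic brightness encoding h(mu)
    return np.log(flux_mle(mu))

mu = np.array([1e-9, 1e-3, 0.5])   # average binary response, dark to bright
print(log_mle(mu))                 # grows unboundedly negative as mu -> 0;
                                   # h(0) = -inf, the low-light underflow issue
```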
### SoDaCam Flexibility and SPAD-Events
Here are a few benefits of the SoDaCam approach for event-based imaging:
* **Direct access to intensity information.** By computing a sum image, SoDaCam makes available intensity frames that are spatially and temporally aligned with the generated event stream. This precludes the need for multiple devices, which often require careful alignment and calibration.
* Further, the intensity frames obtained via the sum-image inherit the SPAD's imaging capabilities, i.e., they feature a high dynamic range and can be utilized in low-light imaging scenarios. This is in contrast to dynamic active vision sensors (DAVIS) [3, 6], where a conventional frame, which has limited dynamic range and low-light capabilities compared to SPAD-derived images, can be obtained in addition to the event stream.
* **Computing multiple event-streams simultaneously.** The contrast threshold \(\tau\) is an important parameter that controls the sparsity and noise level of the generated event stream: small values of \(\tau\) can produce potentially noisy event streams that require extensive processing, while large values of \(\tau\) can result in very sparse streams with less useful information. With SoDaCam, it is possible to emulate event streams with different values of \(\tau\) simultaneously, thereby amortizing these trade-offs. In fact, this can be thought of as analogous to exposure stacks, but in the context of event-imaging. Supplementary Figure 6 _(top row)_ shows an example of an 'event-image stack'.
* **Per-pixel contrast thresholds.** We can also vary the contrast threshold \(\tau\) as a function of pixel location or incident intensity. For instance, we can use a smaller contrast threshold if we have an estimate of the incident intensity with less variance, and a higher contrast threshold when there is more variance. We show an example of this in Supp. Fig. 6 _(bottom)_, where we vary the contrast threshold between \(0.35\) and \(0.45\) as a linear function of the sample variance of the moving average, \(\mu_{t}(\mathbf{x})\).
Supplementary Figure 6: **Flexible event-based imaging**. _(top)_ An 'event-stack' that employs increasing contrast thresholds, \(\tau=0.35,0.4,0.45\). _(bottom)_ We can also output frames with aligned events, and use event generation policies that may not be trivial to realize in hardware--such as varying the contrast threshold \(\tau\) as a linear function of the sample variance.
## 3 Motion Projections
### Pseudocode for Emulating Motion Cameras
Algorithm 3 provides the pseudocode for emulating sensor motion from a photon-cube, where the sensor's trajectory is determined by the discretized function \(\mathbf{r}\). At each time instant \(t\), we shift bit-planes by \(\mathbf{r}(t)\) and accumulate them in \(\mathcal{I}_{\text{shift}}\). For pixels that are out-of-bounds, no accumulation is performed. For this reason, the number of summations that occur varies spatially across pixel locations \(\mathbf{x}\)--we normalize the emulated shift-image by the number of pixel-wise accumulations \(N(\mathbf{x})\) to account for this. The function \(\mathbf{r}\) can be obtained by discretizing any smooth \(2\)D trajectory: by either rounding up or dithering, or by using a discrete line-drawing algorithm [4].
```
Require: photon-cube B_t(x); discretized trajectory r(t); pixel locations X; total bit-planes T
Ensure:  shift-image I_shift(x)

function MotionCameraEmulation(B_t(x), r)
    I_shift(x) ← 0,  ∀ x
    for x ∈ X do
        N(x) ← 0                             ▷ normalizer
        for 1 ≤ t ≤ T do
            if x + r(t) ∈ X then
                N(x) ← N(x) + 1
                I_shift(x) ← I_shift(x) + B_t(x + r(t))
            end if
        end for
        if N(x) > 0 then
            I_shift(x) ← I_shift(x) / N(x)
        end if
    end for
    return I_shift(x)
end function
```
**Algorithm 3** Motion Camera Emulation
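A NumPy sketch of Algorithm 3, assuming the discretized trajectory is supplied as an integer array of per-bit-plane \((\Delta y,\Delta x)\) shifts (our convention; names are illustrative):

```
import numpy as np

def motion_camera(photon_cube, r):
    """Shift-and-add a (T, H, W) photon-cube along the trajectory r[t] = (dy, dx)."""
    T, H, W = photon_cube.shape
    out = np.zeros((H, W))
    n = np.zeros((H, W))                                # per-pixel accumulation count N(x)
    ys, xs = np.mgrid[0:H, 0:W]
    for t in range(T):
        dy, dx = r[t]
        sy, sx = ys + dy, xs + dx                       # source locations x + r(t)
        ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
        out[ok] += photon_cube[t, sy[ok], sx[ok]]       # accumulate only in-bounds pixels
        n[ok] += 1
    return out / np.maximum(n, 1)                       # normalize by N(x)
```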
As described in Sec. 5.3, we consider two trajectories: linear and parabolic. Linear trajectories are parameterized by their slope
\[\mathbf{r}(t)=v\left(t-\frac{T}{2}\right)\mathbf{\hat{p}},\]
where \(v\) is the object velocity, \(\mathbf{\hat{p}}\) is a unit vector that describes the trajectory's direction, and \(T\) is the total number of bit-planes. Parabolic trajectories are parameterized by their maximum absolute slope, \(v_{\text{max}}\)
\[\mathbf{r}(t)=\frac{v_{\text{max}}}{T}\left(t-\frac{T}{2}\right)^{2}\mathbf{ \hat{p}}.\]
To prevent tail-clipping, i.e., image artifacts introduced by the finite extent of the parabolic integration, it is important to choose \(v_{\text{max}}\) to be sufficiently higher than the velocity of objects in the scene. Both linear and parabolic trajectories have a zero at \(t=T/2\)--which allows blending multiple linear projections without any pixel alignment issues.
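Both parameterizations discretize straightforwardly; a sketch, using rounding (one of the discretization options mentioned above) and compatible with the `motion_camera` sketch earlier:

```
import numpy as np

def linear_trajectory(T, v, p_hat):
    """r(t) = v (t - T/2) p_hat, rounded to integer pixel shifts; r(T/2) = 0."""
    t = np.arange(T)
    return np.rint(np.outer(v * (t - T / 2), p_hat)).astype(int)

def parabolic_trajectory(T, v_max, p_hat):
    """r(t) = (v_max / T) (t - T/2)^2 p_hat; maximum absolute slope is v_max."""
    t = np.arange(T)
    return np.rint(np.outer((v_max / T) * (t - T / 2) ** 2, p_hat)).astype(int)

# e.g., a horizontal parabolic sweep, for use with motion_camera above:
r = parabolic_trajectory(T=1000, v_max=0.08, p_hat=np.array([0.0, 1.0]))
```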
### Blending Multiple Linear Projections
As shown in Fig. 8, randomly sampling multiple linear projections (seen in Supp. Fig. 7_(left column)_) can provide motion compensation when only the motion direction, and not the exact extent of motion, is known. To blend these projections, in addition to the randomly sampled linear projections, we also compute two short exposures using bit-planes at the beginning and end of the photon-cube. For the scenes shown in Fig. 8, we used the first \(200\) and the last \(200\) bit-planes to emulate short exposures. We then use RAFT [17] to predict optical flow between the two short exposures--which can be used to select the
linear projection that best compensates motion as a function of the pixel location (as seen in Supp. Fig. 7 _(right column)_). We did not have to perform any spatial smoothing after selecting linear projections, since the optical flow predicted by RAFT was reasonably smooth.
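A sketch of one possible per-pixel selection rule, assuming `flow` holds the predicted displacement magnitude along the road direction and `displacements` lists each linear projection's total pixel shift; both names and the nearest-displacement rule are our own illustrative choices.

```
import numpy as np

def blend_motion_stack(stack, displacements, flow):
    """Pick, per pixel, the linear projection whose shift best matches the flow."""
    stack = np.stack(stack)                              # (K, H, W) linear projections
    d = np.asarray(displacements, dtype=float)[:, None, None]
    best = np.abs(d - flow[None]).argmin(axis=0)         # nearest-displacement index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```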
Blending can also be achieved by choosing the least blurred linear projection in a per-pixel manner--similar to how focal stacking is typically achieved. This would however require predicting per-pixel blur kernels or constructing a measure of motion blur. Laplacian filters, which are typically used for focal stacking, do not readily work with motion stacks.
Supplementary Figure 7: **An example of motion stack blending.** _(left)_ We sample \(8\) linear projections randomly along the road's orientation such that their ensuing pixel displacements are uniformly between \(0\) and \(40\) pixels. _(right)_ We then blend them using the optical flow field that is predicted between two short exposures computed from the same photon-cube. The flow field visualization follows Baker et al. [2].
## 4 Experimental Setup for Secs. 6.1 and 6.2
### Cameras and Sensor Arrays Used
We used the following imagers for our experiments described in Secs. 6.1 and 6.2:
* **SwissSPAD2 array**, a \(512\times 512\) SPAD array that can be operated at a maximum frame rate of \(97\) kHz. We operate the SwissSPAD2 in its 'half-array' mode, utilizing one of two sub-arrays with a resolution of \(512\times 256\) pixels. The SPAD pixels have a pixel pitch of \(16.4\)\(\mu\)m and a low fill factor of \(10\)%, owing to the lack of microlenses in the prototype.
* **Prophesee EVK4 event camera**, a state-of-the-art commercial event camera featuring a sensor resolution of \(1280\times 720\) pixels, a pixel pitch of \(4.86\)\(\mu\)m, and a fill factor of \(>77\)%.
* **Photron Infinicam**, a conventional high-speed camera that can stream acquisition over USB-C at a resolution of \(1246\times 1024\) pixels and a \(1\) kHz frame-rate. For higher frame rates, it is necessary to reduce the number of rows that are read out--for example, we use a resolution of \(1246\times 240\) pixels to obtain acquisition at \(4\) kHz in Fig. 10.
### Removing Hot Pixels
A few SPAD pixels (around \(5\)% of the total pixels in our prototype) have an extremely high dark count rate and therefore have \(B_{t}(\mathbf{x})=1\) almost always. We detect these _hot pixels_ by capturing a photon-cube of \(100000\) bit-planes in a very dark environment and flagging pixel locations with high photon counts. For video compressive and event imagers, we inpaint projections using OpenCV's implementation of the Telea algorithm [18]. For motion projections, we do not sum over bit-plane locations that correspond to hot pixels during integration. Further, we remove pixel locations from the hot pixel mask if the motion trajectory provides access to neighboring values that are not hot pixels. We inpaint the motion projection after excluding these points.
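A minimal sketch of this detection step, assuming a dark-capture photon-cube; the count threshold is an illustrative choice.

```
import numpy as np

def hot_pixel_mask(dark_cube, max_dark_counts=100):
    """Detect hot pixels from a photon-cube captured in a very dark environment."""
    counts = dark_cube.sum(axis=0)       # per-pixel detections over ~100k bit-planes
    return counts > max_dark_counts      # True at hot pixels

# Flagged pixels are then inpainted (e.g., cv2.inpaint with cv2.INPAINT_TELEA) or,
# for motion projections, excluded from the shift-and-add sum before inpainting.
```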
### Experiment-wise Lens Specifications
We used C-mount lenses for our experiments with the following focal lengths:
* \(12\) mm for the comparison to Prophesee EVK4 in Fig. 9. The Prophesee EVK4 and the SwissSPAD2 were used with the same lens specifications.
* \(16\) mm for the coded exposures shown in Fig. 2.
* \(35\) mm for the spinning casino roulette shown in Figs. 4 and 10.
* \(50\) mm for the motion stack shown in Fig. 6 and the traffic scene shown in Fig. 8.
* \(75\) mm for the falling die sequence shown in Fig. 1, the measuring tape sequence shown in Fig. 5, and the water splash captured in Fig. 7.
## 5 UltraPhase Experiments
### Processor Description
The chip consists of a \(3\times 6\) array of processing cores, each of which can interface with \(4\times 4\) SPAD pixels via \(3\)D stacking. At this point, the \(3\)D stacking has not been completed, so we instead interface UltraPhase with the photon-cubes acquired by the SwissSPAD2 [19]. Every core is independent, has \(4\) kb of available RAM, and can execute programs of up to \(256\) instructions in length at a rate of \(140\) million instructions per second (MIPS). The system supports a wide range of instructions, including bit-wise operations, \(32\)-bit arithmetic operations, data manipulation, and custom inter-core synchronization. For more details, please refer to Ardelean [1].
We implement projections on UltraPhase by writing custom assembly code to program each core separately. We include the commented assembly code for all three projections in Listings 1 to 3. To compute multiple projections, we simply run projections sequentially, one bit-plane at a time. Since each projection can be computed significantly faster than the camera frame rate (e.g., \(1.678\) ms for video compressive sensing at \(40\) Hz readout), this does not bottleneck acquisition. We include the processing time for each projection in Tab. 1.
### Measuring Bandwidth
We assume that the outputs for sum, video compressive, and motion projections have \(12\)-bit depth. For event cameras, we assume that each event consists of \(18\) bits: \(9\) bits to encode the pixel location (\(\lceil\log_{2}(12\times 24)\rceil\)), \(8\) bits to represent the timestamp (corresponding to the bit-plane index where the event was triggered), and \(1\) bit to encode polarity. We then measure readout on a \(12\times 24\) region-of-interest (RoI) of the falling die sequence that was acquired using the SwissSPAD2. Table 1 lists the readout bandwidth for each projection.
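As a quick sanity check of these figures (our own arithmetic, taking \(1\) kb \(=1024\) b), a \(12\)-bit projection on the \(12\times 24\) RoI at \(40\) Hz readout requires

\[12\times 24\ \text{pixels}\times 12\ \text{bits}\times 40\ \text{Hz}=138{,}240\ \text{b/s}=135\ \text{kbps},\]

which matches the bandwidth reported for the sum, compressive, and motion projections in Tab. 1.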
### Measuring Power
The power consumption of UltraPhase is comprised of compute power and readout power. For compute power, the chip was characterized by executing instructions corresponding to each projection in an infinite loop and measuring its average power consumption. As an upper bound, we assumed the maximum possible power consumption for operations that involved reading and writing to the RAM. This measured power consumption was then scaled by the duty cycle of each projection--which is the ratio of the time required to process a bit-plane to the exposure time of each bit-plane.
For readout power, we consider a conventional digital interface at \(3.3\) V with a load of \(7\) pF operating at the specified bandwidth, amounting to \(54\) nanowatts for each kilobit readout (nW/kbps)--this is similar, for instance, to the USB interface utilized by the SwissSPAD2.
Table 1 provides the processing power, readout power, and the total power for each projection. Clearly, processing requires an order of magnitude (or more) less power than readout, which explains how computing photon-cube projections results in reduced sensor power consumption.
| Projection | Processing time (ms) \(\downarrow\) | Bandwidth (kbps) \(\downarrow\) | Processing power (\(\mu\)W) \(\downarrow\) | Readout power (\(\mu\)W) \(\downarrow\) | Total power (\(\mu\)W) \(\downarrow\) |
| --- | --- | --- | --- | --- | --- |
| 12-bit sum image | 0.981 | 135 | 0.3 | 7.29 | 7.6 |
| Snapshot compressive | 1.678 | 135 | 3.0 | 7.29 | 10.3 |
| Motion projection | 1.096 | 135 | 1.3 | 7.29 | 8.6 |
| Event camera | 9.817 | 101.25 | 2.4 | 5.83 | 8.2 |
| Three projections | 12.591 | 405 | 6.7 | 21.87 | 28.6 |
| Photon-cube readout | 0.007 | 28125 | \(5.4\times 10^{-3}\) | 1518.8 | 1518.8 |

Table 1: **Power and bandwidth benchmarks** when computing photon-cube projections on UltraPhase, a \(24\times 12\) array, at \(40\) Hz readout. We compare computing projections to reading out the entire photon-cube. We report the processing time, the readout bandwidth, and the compute and readout power for each projection.
### Comparison to CMOS Sensors
In addition to compute and readout power quantified in the previous section, a computational SPAD also consumes power to detect photons. By incorporating this photon-detection power, we can provide a rough comparison of SoDaCam projections to CMOS sensors. The photon-detection dissipation depends on the number of photon detections, and hence varies with the light level--for the SwissSPAD2, this is measured to be \(<1\)mW in the dark and \(\sim\)\(62\) mW in indoor lighting [19]. To estimate compute and readout power, we linearly scale the measurements presented in Tab. 1 for an array of \(512\times 256\) pixels. We note that this is a conservative estimate since UltraPhase is not designed to be a low-power device.
As seen in Tab. 2, under ambient lighting, the power consumption of our emulated cameras is higher than that of conventional CMOS cameras, while in low light the SPAD consumes less power owing to fewer photon detections. We remark that without the bandwidth reduction facilitated by photon-cube projections, SPADs are at a considerable disadvantage compared to their CMOS counterparts. Finally, we also provide a comparison against high-speed CMOS cameras, which can likewise be used to obtain photon-cube projections, albeit with a read-noise penalty (Sec. 6.2) and higher power consumption.
### Visualization of Projections
Since UltraPhase is a low-resolution sensor-processor (\(12\times 24\) pixels), we visualize projections by repeating computations in a tiled manner to cover a region-of-interest (RoI) of \(60\times 60\) pixels. Supplementary Figure 9 shows the visualization of an event camera emulated on UltraPhase. To provide more context, we include the CPU visualization of the entire SwissSPAD2 event-frame. We also verified that the outputs of UltraPhase were identical to CPU-run outputs by computing the RMSE between event frames that result from UltraPhase computations and CPU computations (see error map in Supp. Fig. 9).
| | Photon detection (dark / ambient) | Compute | Readout | Total (dark / ambient) |
| --- | --- | --- | --- | --- |
| Photon-cube readout | 1 / 62 | – | 690 | 691 / 752 |
| Sum-image | 1 / 62 | 0.3 | 4.5 | 5.8 / 66.8 |
| VCS | 1 / 62 | 1.3 | 4.5 | 6.8 / 67.8 |
| Motion proj. | 1 / 62 | 0.7 | 4.5 | 6.2 / 67.2 |
| Event camera | 1 / 62 | 1 | 3.6 | 5.6 / 66.6 |
| Three proj.(s) | 1 / 62 | 3 | 13.5 | 17.5 / 78.5 |
| CMOS @ 40 FPS | \(\sim\)10–25 | – | 4.5 | \(\sim\)15–30 |
| CMOS @ 4k FPS | \(\sim\)600–2500 | – | 450 | \(\sim\)1000–3000 |

Table 2: **Power consumption** of SoDaCam versus conventional cameras (in mW), estimated for \(512\times 256\) pixels at \(40\) Hz readout. CMOS estimates assume the usage of column-parallel ADCs [15].
Supplementary Figure 9: **Event camera computed using UltraPhase**, on \(2500\) bit-planes of the falling die sequence. For visualization purposes, we run computations on UltraPhase in a tiled manner so as to cover a RoI of \(60\times 60\) pixels. We compare this to CPU-run outputs of the same RoI and verify that they are identical. For context, we highlight this RoI using a bounding box on the CPU-run event-frame that has a resolution of \(256\times 512\) pixels. The event simulation parameters used were \(\tau=0.45,\beta=0.95,T_{0}=80\).
Listing 1: **Custom assembly code for implementing video compressive sensing on UltraPhase.** Here, we consider computing one compressive snapshot that is multiplexed by \(16\) binary random masks.
```
1:RAM(64..127) stores the 64 subframe compression code masks as one bit per pixel in byte1 and byte0
2:The output is available in RMM(0..15)
3:trlOut is strobed after every binary frame
4:#define pixelValue0
5:#define frameIdx1
6:current binary frame
7:define subFrameIdx2
8:subFrame
9:define mask3
10:compression codes for the current subframe
11:define aux04
12:define curlOut 0b10000
13:define toMask 0b001000
14:LOAD subFrameIdx, $64, 0
15:LOAD subFrameIdx, $0, 1
16:LOAD frameIdx, $16, 0
17:bit_planes_per_subframe
18:LOAD frameIdx, $0, 1
19:\(-\)get pixel data and store it in R0
20:(pixelValue)
21:FETCM $bWFrameIdx, toMask, 0
22:AND pixelValue, mask, pixelValue
23:\(-\)apply mask i.e. multiply with code[subframe]
24:CALL 50
25:TELL Ctrlout, 0
26:\(-\)update indexes
27:OR aux0, aux0, aux0
28:\(-\)clear flags
29:SUBC frameIdx, $1
29:\(-\)check if we finished with all the binary
30:JMPP 4
31:OR aux0, aux0, aux0
32:\(-\)clear flags
33:ADDC subFrameIdx, $1
34:\(-\)move to next subFrame
35:JMPP 2
36:\(-\)reset frameIdx and continue
37:\(-\)this subroutine will accumulate the pixel values from pixelValue
38:LOAD aux0, $16, 0
39:\(-\)set R4 (aux0) to 16
40:LOAD aux0, $0, 1
41:LOAD aux1, $0, 0
42:\(-\)set R5 (aux1) to 0
43:LOAD aux1, $0, 1
44:OR aux0, aux0, aux0
45:SR0 pixelValue, pixelValue
46:ADDC (aux0), aux1, (aux0)
47:SUBC aux0, $1
48:JMPN2 54
49:RET
```
Listing 2: **Custom assembly code for implementing an event camera on UltraPhase**, following Algorithm 2. The contrast threshold, exponential decay factor, and initial interval are stored in RAM.
* [1] The core will stroke Ctrlout every time an evet took place and the SoC needs to read RAM(0) to get it
* [2] #define coaux0 00000010
* [3] #define toaux1 0000100
* [4] #define pixelValue 0 --register R0 is used for the binary pixel values
* [5] #define aux0 1 --register R1 is used for misc
* [6] #define aux1 2 --register R2 is used for misc
* [7] #define Bpointer 3 --register R3 is used as a pointer to the reference_average stored in
* [8] #RAM(1..16)
* [9] #define pixelIdx 5 --register R5 is used for the current pixel index
* [10] #define Ctrlout 001000 --address for external trigger signal
* [11] #define hour 001000 --address for north trigger signal
* [12] #define decayAddress 127 --RAM address 127 stores the exponential_decay
* [13] #define decayComplementAddress 126 --RAM address 126 stores the value for 1-exponential_decay
* [14] #define intervalAddress 125 --RAM address 125 stores the initial_interval
* [15] #define thresholdAddress 124 --RAM address 124 stores the contrast_threshold
* [16] #define frameIdxAddress 123 --RAM address 123 stores the frame_index counter
* [17] # GETP 5, 1, 0 --get pixel data and
* [18] #store it in R0 (pixelValue) --get decay constant from RAM and
[MISSING_PAGE_POST]
* [25]FETCH (Appointer), toaux0, 0 --if yes, get current_average[pixelIdx]
* [26]STORE aux0, (Bpointer) --reference_average[pixelIdx]
* [27]JUMP 70 --GO TO NEXT
* [28]PIXEL
* [29]frame_index is larger than initial_interval
* [30]FETCH (Appointer), toaux0, 0 --if not, get current_average[pixelIdx] and store it into R1 (aux0)
* [31]FETCH (Bpointer), toaux1, 0 --get reference_average[pixelIdx] and store it into R2 (aux1)
* [32]SUS aux0, aux1, aux0 --diff[pixelIdx]
* [33]JUMPNC 60 --positive
* [34]compare abs(diff) with threshold if diff is negative
* [35]FETCH (hresholdAddress, toaux1, 0 --get contrast_threshold and store it into R2 (aux1)
* [36]NEG sux1, aux1 --diff is negative, so make contrast_threshold negative and store it into R2 (aux1)
* [37]CMP aux0, aux1 --diff < --contrast_threshold
* [38]JUMPNC 70 --NEXT PIXEL
* [39]NEG pixelIdx, --RAM(0) --pixel_index
* [30]i.e. a negative event
* [31]ADD (Bpointer), aux1, (Bpointer) --reference_average --contrast_threshold
* [32]TELL Ctrlout, 0 --stroke Ctrlout to signal an event
* [33]JUMP 70 --GO TO NEXT
* [34]Compare abs(diff) with threshold if diff is positive
* [35]FETCH (hresholdAddress, toaux1, 0 --get contrast_threshold and store it into R2 (aux1)
* [36]CMP aux1, aux0 --if diff > contrast_threshold
* [37]JUMPNC 70 --NEXT PIXEL
* [38]STORE pixelIdx, --GO --pixel_index
* [39]i.e. a positive event
* [40]ADD (Bpointer), aux1, (Bpointer) --reference_average --contrast_threshold
* [41]JUMPNC 70 --NEXT PIXEL
* [42]ADD (Bpointer), aux1, (Bpointer) --reference_average --contrast_threshold
* [43]JUMPNC 70 --NEXT PIXEL
* [44]TO TO NEXT PIXEL
* [45]OR pixelValue, pixelValue, pixelValue --clear flags
* [46]ADD Apointer, $1 --increment Apointer
* [47]ADD (Bpointer), $1 --increment Apointer
* [48]JUMPNN 7 --if not done with
* [49]JUMP 0
Listing 3: **Custom assembly code for implementing motion projections on UltraPhase.** Without loss of generality, we consider a linear projection along the horizontal direction.
```
1: The projection is available in RAM(0..15)
2: Ctrlout is strobed after every frame
3: #define Xshift 0 --register R0 is used for the horizontal shift
4: #define timestep 1 --register R1 is used for the current time step
5: #define origPixels 2 --register R2 is used for the current core's pixels
6: #define shiftPixels 3 --register R3 is used for the shifted pixels
7: #define aux0 4 --register R4 is used for misc
8: #define aux1 5 --register R5 is used for misc
9: #define Ctrlout 0b10000
10: #define shiftL3neighAddr 127 --RAM(127) stores the 0b0111_0111_0111_0111 mask
11: #define shiftL3currAddr 126 --RAM(126) stores the 0b1000_1000_1000_1000 mask
12: #define shiftL2neighAddr 125 --RAM(125) stores the 0b0011_0011_0011_0011 mask
13: #define shiftL2currAddr 124 --RAM(124) stores the 0b1100_1100_1100_1100 mask
14: #define shiftL1neighAddr 123 --RAM(123) stores the 0b0001_0001_0001_0001 mask
15: #define shiftL1currAddr 122 --RAM(122) stores the 0b1110_1110_1110_1110 mask
16: LOAD Xshift, $0xFFF8, 0 --load -8 into Xshift as initial value
17: LOAD Xshift, $0xFFFF, 1
18: CALL 50 --get the correct pixel values according to Xshift
19: CALL 30 --accumulate pixels
20: CALL 20 --advance time
21: TELL Ctrlout, 0 --strobe Ctrlout to signal a new frame
22: JUMP 2 --repeat
```
[MISSING_PAGE_POST]
```
--dispatch on the current shift value
LOAD aux0, $4, 0
LOAD aux0, $0, 1
CMP aux0, aux1
JUMPC 105 --current shift is > 4
JUMPZ 100 --current shift is 4
LOAD aux0, $2, 0
CMP aux0, aux1
JUMPC 90 --current shift is 3
JUMPZ 80 --current shift is 2
--shift is 1
SL0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the left 3 times
SL0 shiftPixels, shiftPixels
SL0 shiftPixels, shiftPixels
OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
SR0 aux1, aux1 --pixels from this core need to be shifted to the right once
AND shiftPixels, shiftL3currAddr, shiftPixels --apply mask to select relevant bits from neighbour
AND aux1, shiftL3neighAddr, aux1 --apply mask to select relevant bits from current core
OR shiftPixels, aux1, shiftPixels --combine to create final pixel values
RET
--shift is 2
80: SL0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the left 2 times
81: SL0 shiftPixels, shiftPixels
82: OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
83: SR0 aux1, aux1 --pixels from this core need to be shifted to the right 2 times
84: SR0 aux1, aux1
85: AND shiftPixels, shiftL2currAddr, shiftPixels --apply mask to select relevant bits from neighbour
86: AND aux1, shiftL2neighAddr, aux1 --apply mask to select relevant bits from current core
87: OR shiftPixels, aux1, shiftPixels --combine to create final pixel values
88: RET
--shift is 3
90: SL0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the left once
91: OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
92: SR0 aux1, aux1 --pixels from this core need to be shifted to the right 3 times
93: SR0 aux1, aux1
94: SR0 aux1, aux1
95: AND shiftPixels, shiftL1currAddr, shiftPixels --apply mask to select relevant bits from neighbour
96: AND aux1, shiftL1neighAddr, aux1 --apply mask to select relevant bits from current core
97: OR shiftPixels, aux1, shiftPixels --combine to create final pixel values
98: RET
--shift is 4
100: PUTN origPixels, 4 --shift your pixels to the left (share pixels with neighbour)
101: SAVEN 1 --save pixels from right neighbour
102: GETN 0, 8, 0 --get pixels from right neighbour and store in shiftPixels
103: RET
--shift is > 4
105: PUTN origPixels, 4 --shift your pixels to the left (share pixels with neighbour)
106: SAVEN 1 --save pixels from right neighbour
107: GETN 0, 8, 0 --get pixels from right neighbour and store in shiftPixels
108: OR aux1, aux1, aux1 --clear Carry flag
109: SUBC aux1, $4 --subtract 4 from current shift because we read from a neighbour
110: JUMP 58
--shift is zero
115: OR origPixels, origPixels, shiftPixels --the shift is zero, so keep the pixels
116: RET
--negative shifts
120: LOAD aux0, $0xFFFC, 0 --set R4 (aux0) to -4 to use for current shift comparison
121: LOAD aux0, $0xFFFF, 1
122: CMP aux1, aux0
123: JUMPC 170 --current shift < -4 so we need to read from a neighbour's neighbour
124: JUMPZ 160 --current shift is -4
125: LOAD aux0, $0xFFFE, 0 --set R4 (aux0) to -2 to use for current shift comparison
126: CMP aux1, aux0
127: JUMPC 150 --current shift is -3
128: JUMPZ 140 --current shift is -2
--shift is -1
130: SR0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the right
131: SR0 shiftPixels, shiftPixels
132: OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
133: SL0 aux1, aux1 --pixels from this core need to be shifted to the left once
134: AND shiftPixels, shiftL1neighAddr, shiftPixels --apply mask to select relevant bits from neighbour
135: AND aux1, shiftL1currAddr, aux1 --apply mask to select relevant bits from current core
136: OR shiftPixels, aux1, shiftPixels --combine to create final pixel values
137: RET
--shift is -2
140: SR0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the right 2 times
141: SR0 shiftPixels, shiftPixels
142: OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
143: SL0 aux1, aux1 --pixels from this core need to be shifted to the left 2 times
144: SL0 aux1, aux1
[MISSING_PAGE_POST]
--shift is -3
150: SR0 shiftPixels, shiftPixels --pixels from neighbour need to be shifted to the right once
151: OR origPixels, origPixels, aux1 --save current pixels in the aux1 variable
152: SL0 aux1, aux1 --pixels from this core need to be shifted to the left 3 times
153: SL0 aux1, aux1
154: SL0 aux1, aux1
155: AND shiftPixels, shiftL3neighAddr, shiftPixels --apply mask to select relevant bits from neighbour
156: AND aux1, shiftL3currAddr, aux1 --apply mask to select relevant bits from current core
157: OR shiftPixels, aux1, shiftPixels --combine to create final pixel values
158: RET
--shift is -4
160: PUTN origPixels, 1 --shift your pixels to the right (share pixels with neighbour)
161: SAVEN 4 --save pixels from left neighbour
162: GETN 2, 8, 0 --get pixels from left neighbour and store in shiftPixels
163: RET
--shift is < -4
170: PUTN origPixels, 1 --shift your pixels to the right (share pixels with neighbour)
171: SAVEN 4 --save pixels from left neighbour
172: GETN 2, 8, 0 --get pixels from left neighbour and store in shiftPixels
173: OR aux1, aux1, aux1 --clear Carry flag
174: ADC aux1, $4 --add 4 to current shift because we read from a neighbour
175: JUMP 120
```
|
2309.06417 | The trigger system for the CSR external-target experiment | A trigger system has been designed and implemented for the HIRFL-CSR external
target experiment (CEE), the spectrometer for studying nuclear matter
properties with heavy ion collisions in the GeV energy region. The system
adopts master-slave structure and serial data transmission mode using optical
fiber to deal with different types of detectors and long-distance signal
transmission. The trigger logic can be accessed based on command register and
controlled by a remote computer. The overall field programmable gate array
(FPGA) logic can be flexibly reconfigured online to match the physical
requirements of the experiment. The trigger system has been tested in beam
experiment. It is demonstrated that the trigger system functions correctly and
meets the physical requirements of CEE. | Dong Guo, Haoqian Xyu, DongDong Qi, HeXiang Wang, Lei Zhang, Zhengyang Sun, Zhi Qin, Botan Wang, Yingjie Zhou, Zekun Wang, Yuansheng Yang, Yuhao Qin, Xianglun Wei, Herun Yang, Yuhong Yu, Lei Zhao, Zhigang Xiao | 2023-09-12T17:30:12Z | http://arxiv.org/abs/2309.06417v1 | # The trigger system for the CSR external-target experiment
###### Abstract
A trigger system has been designed and implemented for the HIRFL-CSR external target experiment (CEE), the spectrometer for studying nuclear matter properties with heavy ion collisions in the GeV energy region. The system adopts master-slave structure and serial data transmission mode using optical fiber to deal with different types of detectors and long-distance signal transmission. The trigger logic can be accessed based on command register and controlled by a remote computer. The overall field programmable gate array (FPGA) logic can be flexibly reconfigured online to match the physical requirements of the experiment. The trigger system has been tested in beam experiment. It is demonstrated that the trigger system functions correctly and meets the physical requirements of CEE.
## I Introduction
The QCD phase structure at high net baryon density is drawing increasing attention in the field of high energy heavy ion physics [1]. Despite enormous progress in studying the QCD phase diagram [2], the rich structure of the diagram is still an open question drawing wide attention from the community of nuclear physics. Recently, the STAR experiment collected data in the energy range from 200 to 7.7 GeV [3; 4; 5]; the cumulants of net proton and proton \(\kappa\sigma^{2}\) (or \(C_{4}/C_{2}\)) as a function of \(\sqrt{s_{NN}}\) were investigated in central Au+Au collisions, and a totally different behavior of \(\kappa\sigma^{2}\) at lower collision energies has been observed [5; 6]. This indicates opportunities for accurate measurements of observables in the high net baryon density region. Particularly in the hadron phase, the determination of the isovector sector of the EoS of asymmetric nuclear matter, i.e., the density-dependent nuclear symmetry energy \(E_{\rm sym}(\rho)\), receives increasing interest too, because this poorly constrained quantity plays an important role in the structure of radioactive nuclei, isospin dynamics in heavy ion reactions, the liquid-gas phase transition in asymmetric nuclear matter, neutron star structure, etc. [7; 8; 9; 10]. Present theoretical and experimental work shows that the symmetry energy is still uncertain in the supersaturated density region [11; 12; 13; 14; 15]. In order to stringently constrain \(E_{\rm sym}(\rho)\), particularly at high densities, great efforts are being made in laboratories worldwide.
The CEE is a general-purpose spectrometer under construction on the Cooling Storage Ring at the Heavy Ion Research Facility in Lanzhou (HIRFL-CSR), China. It aims at studies of the structure of the QCD phase diagram in the high net-baryon density region [16; 17]. The planned physics programs include the search for signals unraveling the existence of the critical end point (CEP) of the QCD phase transition, the constraint of \(E_{\rm sym}(\rho)\) in the supra-saturation density region using various observables, and studies of the properties of hypernuclei as well as some other issues at the frontiers of nuclear physics [18]. It is a fixed-target experiment and covers more than \(2\pi\) of the solid angle in the center-of-mass system. The highest achievable beam energy is 0.5 GeV/u for uranium and 2.8 GeV/u for proton beams. CEE consists of various subsystems including tracking detectors, time of flight (TOF) detectors, zero-degree counters, _etc_. It contains about 20 k channels in total, running at a maximum event rate of 10 kHz.
The trigger system, which plays a significant role in large-scale nuclear physics experiments like CEE, has seen rapid development in recent years due to the increasing application of field programmable gate array (FPGA) technology [19; 20]. FPGA technology has the obvious advantages of compactness, low power consumption, strong adaptability and good scalability [21], as well as convenient operation for remote control [21; 22; 23]. Meanwhile, functions including digital signal processing algorithms [24; 25; 26], time digitization design [27; 28; 29], and track reconstruction of charged particles [30; 31] can be implemented in an FPGA-based trigger system, as demonstrated by the wide applications in ATLAS [32; 33], ALICE [34; 35], and CMS [36; 37], etc.
Adopting up-to-date FPGA technologies, the trigger system of CEE has been designed to fulfill the following functions:
1) In the beam experiment, it is able to provide the experimental trigger signals, selecting the physical collision events on the target and suppressing the background events off-target.
2) In the debugging process of each detector, it can provide laser testing and cosmic ray trigger signals.
3) It carries out the task of sending and receiving the signals and commands of the global clock synchronization.
4) It has scalability and reloadability to realize data flow and command interaction with the DAQ system.
In this paper, we introduce the design of the trigger system of CEE. Section II introduces the physical requirements and logic of the trigger system, Section III the design of the trigger system, Section IV the beam test results, and Section V the summary.
## II Trigger conditions for CEE
### Overall design of CEE
The main component of CEE is a large-gap dipole magnet, housing the tracking detectors in the magnetic field. The time projection chamber (TPC), with a sensitive volume of \(120(\mathrm{x})\times 80(\mathrm{y})\times 90(\mathrm{z})\,\mathrm{cm}^{3}\)[38], read out by gas electron multiplier (GEM) foils and an induction pad plane, is installed in the center of the magnetic field. The TPC consists of two independent sets of detectors, which are symmetrically distributed along the beam direction, leaving a blank area for the beam to pass through. The inner time-of-flight (iTOF) detector [39] consists of multigap resistive plate chambers (MRPCs) covering the left, right and bottom sides of the TPC. Three multi-wire drift chamber (MWDC) tracking detectors of different sizes are installed covering the forward angle of \(5^{\circ}<\theta_{\mathrm{lab}}<30^{\circ}\) on the downstream side of the TPC. Two sets of MWDC are placed inside the atmospheric gap of the magnet, and one set of MWDC is placed outside the magnet. An inactive area is made to allow the beam to pass through the MWDC array. The end cap time of flight detector (eTOF) [40] is located behind the MWDC, providing the arrival timing signal for the light charged particles in the forward rapidity region. The starting time signal is provided by T\({}_{0}\), installed upstream of the target. High performance time digitization modules (TDMs) are adopted to obtain high precision timing information [29]. The zero angle calorimeter (ZDC) [41] is located behind all other detector components and the magnetic field, in order to deliver the information related to the reaction plane and the centrality by measuring the charge-resolved spectators in the very forward region. An active collimator detector (AC) and a silicon pixel detector (SiPix) [42] are installed between the target and T\({}_{0}\) on the beam line to suppress the background events induced by the projectile on upstream materials and to record the vertex of the beam, respectively. The overall design of CEE is shown in FIG. 1.
### The physical requirements of the trigger event
For heavy ion collisions in the GeV/u energy region, a rough selection of the event geometry (impact parameter) is the main goal of the trigger system in the beam experiment. In general, the trigger conditions are set by the multiplicity of charged particles; high (low) multiplicity corresponds to central (peripheral) collisions. Even though the fast trigger detectors cover only part of the phase space, the partial multiplicity correlates with the total multiplicity and hence measures the event centrality. Using the JET AA Microscopic Transport Model (JAM) [43] as the event generator in the framework of CEEROOT, developed for experimental simulations and data analysis, one can calculate the multiplicity in iTOF and eTOF as a function of the impact parameter \(b\), as shown in Fig. 2. It is clearly seen that the multiplicity of charged particles covered by iTOF and eTOF has a good inverse correlation with the impact parameter, indicating that the combination of iTOF and eTOF can give a rough selection of the impact parameter. Pattern recognition of the multiplicity distribution on eTOF can further deliver the centrality with better precision [44]; this ability can be developed in offline analysis.
Figure 1: (Color online) The conceptual design of the CEE.
Figure 2: (Color online) The scatter plot of the multiplicity recorded by the TOF MRPCs _vs._ the impact parameter.
### Trigger logic setting
Using the multiplicity as an observable characterizing the event geometry, the trigger signal in beam experiments is provided by AC, T\({}_{0}\), iTOF and eTOF, as described below. A signal firing T\({}_{0}\) indicates that the beam has passed through. A multiplicity over a certain threshold recorded on the iTOF and eTOF detectors in coincidence then characterizes the reactions on the beam path. In order to suppress the off-target reaction events on the upstream side, the AC is used as a veto in the trigger scheme. Hence, the CEE global trigger condition for the beam experiment can be written as
\[\mathrm{GTRG}=\mathrm{T}_{0}\times\mathrm{iTOF}\times\mathrm{eTOF}\times\overline{\mathrm{AC}} \tag{1}\]
In more detail, the correspondence between the reaction events and the firing matrix of the four detectors is listed in Tab. 1.
Further, in order to classify different types of physical events characterized by the impact parameter, three multiplicity thresholds are introduced in the trigger selection logic, including the high noise threshold \(M_{\mathrm{h}}\), the event classification threshold \(M_{\mathrm{e}}\) and the low noise threshold \(M_{\mathrm{l}}\). If the multiplicity \(M_{\mathrm{TOF}}\) recorded in iTOF and eTOF is lower than \(M_{\mathrm{l}}\) or unphysically higher than \(M_{\mathrm{h}}\), the events are regarded as noise events, which are not of interest and will not be triggered. When the multiplicity satisfies \(M_{\mathrm{l}}<M_{\mathrm{TOF}}<M_{\mathrm{e}}\), it represents a minimum bias event, whereas when it satisfies \(M_{\mathrm{e}}<M_{\mathrm{TOF}}<M_{\mathrm{h}}\), it represents a central or semi-central collision.
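To make the selection concrete, the trigger condition of Eq. (1) combined with the three-threshold classification can be sketched in software as below. This is our minimal Python illustration, not CEE firmware: the separate iTOF/eTOF coincidence is collapsed into one summed multiplicity, and the default thresholds are the beam-test values quoted in Sec. IV.

```python
def classify_event(m_tof, t0_fired, ac_fired, m_l=3, m_e=10, m_h=100):
    """Sketch of the CEE trigger selection.

    m_tof   : summed iTOF + eTOF multiplicity
    t0_fired: True if the T0 start counter fired
    ac_fired: True if the active collimator (veto) fired
    """
    # GTRG = T0 x iTOF x eTOF x not(AC), with the TOF firing condition
    # simplified to m_tof > 0.
    gtrg = t0_fired and m_tof > 0 and not ac_fired
    if not gtrg:
        return "no trigger"
    if m_tof <= m_l or m_tof >= m_h:
        return "noise"                    # outside (M_l, M_h)
    if m_tof < m_e:
        return "minimum bias"             # M_l < M_TOF < M_e
    return "central or semi-central"      # M_e <= M_TOF < M_h (boundary choice ours)
```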
## III Design of trigger system
### System architecture and electronics
The CEE trigger system is designed in a master-slave multi-hierarchy structure, maintaining good expansion capability. The firing-signal uplink from the trigger detectors and the trigger-signal delivery to all subsystems are implemented by long-distance optical fiber transmission. As shown in Fig. 3, the multi-layer architecture of the trigger system consists of the front-end measurement module (FEMM), the Slave Trigger Module (STM) and the Master Trigger Module (MTM). All trigger electronics of the CEE use a global synchronous clock with 40 MHz frequency. The iTOF and eTOF systems adopt a 2-level trigger structure, performing the multiplicity calculation and summation. The tracking detector systems MWDC and TPC also adopt a 2-level trigger structure because of their large number of channels. The T\({}_{0}\), AC, ZDC and SiPix systems adopt a 1-level trigger structure. In the trigger scheme, the T\({}_{0}\), AC, iTOF and eTOF systems participate in the uplink for the trigger signal input, and all systems complete the global trigger signal downlink transmission.
The hardware of the MTM mainly includes the electrical connectors and optical ports for signal transmission, the FPGA chip implementing the event selection algorithm, and the power module. A LEMO differential socket is adopted for the 40 MHz clock input electrical port. The interface used for signal transmission with the STM L2 of each subsystem adopts LEMO double-layer single-ended sockets. A total of 16 single-ended LEMO sockets and 9 small form-factor pluggable (SFP) ports meet the various signal transmission requirements of the MTM.
The FPGA adopted for the MTM is the XC7A200TFFG1156 of the Xilinx Artix 7 series [45; 46]. This type of FPGA integrates 16 GTP transceivers and supports serial data transmission rates from 500 Mbps to 6.6 Gbps. The total number of pins is 1156, providing 215360 logic cells. The sufficient memory space can meet the needs of the event selection algorithm and the trigger information cache. A photo of the MTM board is shown in FIG. 4 (a).
The STM mainly completes the information exchange with the FEMM, the signal transmission between the 2-level trigger modules, and the communication with the MTM to receive and send signals. The STM L1 is equipped with 32 single-ended LEMO sockets and 2 SFPs, while the STM L2 is designed with 4 single-ended LEMO sockets and 11 SFPs. The Xilinx Artix 7 series XC7A200TFFG1156 is selected as the FPGA for both types of STM [45; 46]. For the TOF subsystem, STM L1 receives the hit signals from the TDM and calculates the local multiplicity, which is sent to STM L2 via optical fiber transmission. STM L2 then sums the received multiplicities and sends the result to the MTM to construct the global trigger signal. Photos of STM L1 and STM L2 are shown in FIG. 4 (b) and (c).
The trigger system adopts the Gigabit Transceiver with Low Power (GTP) integrated by Xilinx in Artix
Figure 3: (Color online) CEE trigger system master-slave multi-layer structure diagram. The figure shows the number of electronics required for each subsystem in the experiment
7 FPGA to implement long-distance optical fiber signal transmission. In the GTP, the 8B/10B encoding mode of the series-parallel/parallel-series conversion is used to ensure DC balance in the serial data transmission. After the serial signal is generated, the photoelectric conversion is further completed, and the electrical signal is converted into an optical signal sent through the SFP. The FTLF8519P2BNL SFP from Finisar [47] is selected for the experiment; it has a transmitter and a receiver, and a maximum serial data rate of 2.125 Gbps can be achieved within 500 meters. In the logic functions, the input and output electrical signals of the GTP are 16-bit parallel signals, and when the optical signal is converted back into an electrical signal in the logic, the 1-bit serial signal needs series-parallel conversion (serial signal input and parallel signal output, i.e. SIPO).
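The series-parallel conversions can be illustrated with a toy software model. The sketch below (ours) only demonstrates PISO/SIPO framing of one 16-bit word and deliberately omits the 8B/10B encoding performed inside the GTP.

```python
def piso(word16):
    """Parallel-in, serial-out: emit the 16 bits of a word, MSB first."""
    return [(word16 >> i) & 1 for i in range(15, -1, -1)]

def sipo(bits):
    """Serial-in, parallel-out: rebuild the 16-bit word from the bit stream."""
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word

assert sipo(piso(0xBEEF)) == 0xBEEF  # round trip through the serial link
```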
### The logic scheme of trigger
The trigger system conducts the following operations: 1) to construct and deliver the global trigger signal to each subsystem, 2) to switch the running modes of each subsystem upon users' requirements and 3) to provide the global control commands and the global standard time. The trigger logic module is implemented in the FPGAs of STM L1, STM L2 and MTM.
The logical details and signal flow of the 2-level trigger link of the iTOF and eTOF subsystems are shown in FIG. 5. In the uplink chain, the local multiplicities in the TDMs are transmitted to the STM L1 and summed up in a 75 ns time window in the "Multiplicity SUM module" of the STM L1 logic. In order to suppress background noise, the summed multiplicity in each STM L1 is discriminated by the "Threshold module" before it is converted into an optical signal in the GTP and sent to STM L2. In STM L2, the multiplicity signals from all corresponding STM L1 boards are summed to obtain the total multiplicity of the physical event, as shown in the "preprocess & SUM module" of STM L2 in FIG. 5. After the parallel-series conversion (parallel signal input and serial signal output, PISO) is performed, it is uploaded to the MTM as an electrical signal. The format of the total multiplicity uplink data is a 10-bit serial digital signal.
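Conceptually, the uplink therefore reduces to a windowed count at STM L1 followed by a saturating sum at STM L2. A minimal Python sketch (ours; the firmware works on synchronous 40 MHz logic rather than floating-point hit times):

```python
import numpy as np

WINDOW_NS = 75  # width of the "Multiplicity SUM module" time window

def stm_l1_multiplicity(hit_times_ns, window_start_ns, noise_threshold=0):
    """STM L1: count TDM hits in one 75 ns window, then apply the local
    noise threshold before the result is sent to STM L2 over fiber."""
    hits = np.asarray(hit_times_ns)
    in_window = (hits >= window_start_ns) & (hits < window_start_ns + WINDOW_NS)
    m_local = int(in_window.sum())
    return m_local if m_local > noise_threshold else 0

def stm_l2_total(l1_multiplicities):
    """STM L2: sum all STM L1 results; the uplink word is 10 bits wide,
    so the total multiplicity saturates at 2**10 - 1."""
    return min(sum(l1_multiplicities), 2**10 - 1)
```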
FIG. 6 shows the logical details and the signal flow of the tracking system, which uses the downlink chain only. The global trigger signal generated by the MTM is sent to the STM L2 as an electrical signal and fanned out into multiple channels, which are converted into optical signals by the GTP and distributed to each STM L1. In STM L1, the multi-channel signals are further fanned out and converted into serial signals sent to the FEMMs.
Subsystems involving single level trigger links include T\({}_{0}\), AC, ZDC, and beam monitor subsystems. Since T\({}_{0}\) and AC signals are included in the calculation of global
Figure 4: (Color online) Photos of the trigger electronics. (a) The MTM, (b) the STM L1, (c) the STM L2.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Event type & \(T_{0}\) & eTOF & iTOF & AC \\ \hline Very peripheral reaction & \(\surd\) & \(\surd\) & \(-\) & \(-\) \\ Off-target events upstream & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Off-target events after TPC & \(\surd\) & \(\surd\) & \(-\) & \(-\) \\ Events on target & \(\surd\) & \(\surd\) & \(\surd\) & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: (Color online) The detector firing response matrix for various physical events, “\(\surd\)” (“\(-\)”) indicates that the detector responses (not) to the corresponding event type.
trigger, these two subsystems use both the uplink and downlink transmission chains, while the ZDC and beam monitor subsystems need only the downlink chain to receive the trigger signal from the MTM.
Finally, the logical details and the signal flow of the MTM are shown in FIG. 7. The signals of T\({}_{0}\) and AC are transmitted uplink from STM L1 to the MTM via fiber. After the multiplicities of iTOF and eTOF are transmitted to the MTM, they are deserialized and converted into 10-bit parallel signals. The total multiplicity of TOF is calculated in the "preprocess & SUM module" and filtered in the "threshold judge module". The output of the threshold filtering is used to classify the event type and to generate the global trigger signal. Due to the different time delays caused by the logical operations for TOF, T\({}_{0}\) and AC, a further adjustable delay is enabled to synchronize all the subsystems.
### Trigger mode and global control
The trigger electronics are connected to the DAQ through optical fibers, and hence the trigger system can be monitored and controlled through the DAQ terminal. The parameters used in the trigger logic, including the delays, coincidence gate widths, fraction-divide factors and thresholds, are all remotely configurable. In addition, the function of remote logic update of the main trigger electronics is also implemented based on QuickBoot. The DAQ transfers and stores the new logic in the SPI Flash of the MTM, and then sends the reconfiguration command to realize the remote logic update. The SPI Flash model used by the MTM is the N25Q256A with a capacity of 256 Mb. The DAQ can obtain the status information of the trigger electronics by reading the sensor measurements of each module and crate, including temperature, humidity and current information.
The trigger system can change the running mode of CEE according to the system requirements through DAQ control. In addition to the above beam mode, a cosmic ray mode and an electronics self-test mode are also included. In the cosmic ray mode during beam-off, in which T\({}_{0}\) and AC do not participate in the trigger, the MTM calculates the global trigger directly from the multiplicity information of iTOF or eTOF and delivers it to each subsystem. The electronics self-check mode is designed to check the status of the electronics of each system during beam-off periods at a frequency configurable by the user. In addition, the trigger system provides the hardware basis for the global control of the system; namely, control commands such as start acquisition, stop acquisition, and time synchronization are sent to all systems via the same hardware links of the trigger system.
Figure 5: (Color online) The 2-level trigger link STM L1 and STM L2 trigger logic and signal transmission diagram of iTOF and eTOF subsystem
## IV Beam test results
The trigger system has been tested with the prototypes of the T\({}_{0}\), iTOF, eTOF, TPC and MWDC subsystems of CEE in a beam experiment of 320 MeV/u \({}^{56}\)Fe+\({}^{56}\)Fe at the second radioactive ion beam line at Lanzhou (RIBLL2). The setup of the beam test is shown in FIG. 8. T\({}_{0}\) is placed 20 cm in front of the target, and the other detectors are placed in the direction with \(\theta_{lab}=25^{\circ}\) from the beam line. The high energy products of light charged particles
Figure 6: (Color online) The 2-level trigger link STM L1 and STM L2 trigger logic and signal transmission diagram of TPC and MWDC subsystem
Figure 7: (Color online) The trigger logic and signal transmission diagram of MTM
pass through the iTOF, eTOF, TPC and MWDC, respectively. iTOF 0\(\#\) is placed 90 cm away from the target and 25 cm away from iTOF 1\(\#\). eTOF 0\(\#\) and iTOF 1\(\#\) are separated by 47 cm, the two eTOFs are close together, the TPC is placed 50 cm from the last TOF detector, and the TPC and MWDC are 50 cm apart. The TPC readout plane consists of 3 layers of GEM foils and a pad plane.
The 2-level trigger link of the iTOF and eTOF subsystems, the 2-level link of the TPC and MWDC subsystems, and the 1-level link of the T\({}_{0}\) subsystem are set up. Since this experiment is in the prototype test stage and the number of electronics channels is about 1k, the TOF subsystems are set to share one STM L1 and one STM L2, the tracking subsystems are set to share one STM L2, and the TPC and MWDC each have one STM L1 in their FEMM chassis. FIG. 9 shows the electronics setup of the main control box at the beam experiment site. The MTM and STM L2 are placed in the main control box, in addition to the DAQ data summary board and the clock electronics. The DAQ data summary board connects the MTM to the PC and constructs the hardware link for remote control. The average beam count rate is about 10\({}^{5}\) Hz and the average trigger rate is about 1 kHz, while the peak trigger rate is about 4 kHz. The noise level of the TOF detector is tested at 500 Hz. The low and high noise thresholds are set to \(M_{\rm l}=3\) and \(M_{\rm h}=100\), respectively. The event classification threshold is set to \(M_{\rm e}=10\). The fraction division ratio is set to 1.
In the experiment, the delay time of the T\({}_{0}\) signal is set to 900 ns in the MTM, the delay of the TOF signal is unchanged, and the widths of the T\({}_{0}\) and TOF signals are extended to 200 ns. FIG. 10 shows the timing relationship between the upstream signals involved in the trigger calculations during the beam. The STM L1 uplink serial signal of the TOF subsystem is 1'b1+0000000100, representing a signal with multiplicity 4. The STM L2 of the TOF subsystem summarizes the multiplicity signals from STM L1 and displays an output signal consistent with that of STM L1. The delay of the 2-level signal is 700 ns. Channel 7 shows the global trigger signal generated by the MTM, and channel 8 the trigger signal of the STM L1 downlink transmission. The delay time of the 2-level trigger link from MTM to
Figure 8: (Color online) (a) Beam experiment detector position diagram. (b) Beam experiment site photos
Figure 10: (Color online) Uplink TOF multiplicity signal, T\({}_{0}\) signal and downlink global trigger signal waveform.
Figure 9: (Color online) Beam experiment master chassis electronics setup
STM L1 is 750 ns. Taking the iTOF subsystem as an example, the total delay of the trigger signal from the uplink transmission at the TDM to the downlink transmission from STM L1 back to the TDM is 2.6 \(\mu\)s.
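For concreteness, the uplink word quoted above can be decoded as follows. The framing, one leading valid bit followed by a 10-bit multiplicity field, is our reading of the example 1'b1+0000000100.

```python
def decode_uplink(frame):
    """Decode an 11-bit TOF uplink frame: '1' valid bit + 10-bit multiplicity."""
    assert len(frame) == 11 and frame[0] == "1", "expected valid bit + 10 data bits"
    return int(frame[1:], 2)

print(decode_uplink("1" + "0000000100"))  # -> 4, as observed in the beam test
```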
FIG. 11 presents an example of an event with a track detected in the TPC in coincidence with the TOF detectors. The readout plane of the TPC is designed in such a way that different pad shapes, including rectangular and zigzag pads, are applied to study the tracking resolution. A straight track is clearly seen. The red triangle symbols denote the center of gravity of the signals on each corresponding row. The shadowed areas are the charge distributions induced on the pads.
The iTOF and eTOF subsystems provide the hit multiplicity information required in the trigger signal construction. FIG. 12 shows the distribution of the time difference between T\({}_{0}\) and the fired TOF detector. The time resolution of one TOF prototype is about \(117/\sqrt{2}\approx 83\) ps. Here the resolution caused by the variation of the particles incident on the TOF detectors is not subtracted. The track reconstruction efficiency from the TOF hits is about 96%, slightly dependent on the firing detector. The results of the beam test demonstrate that the trigger system for CEE works correctly.
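The quoted single-prototype resolution follows from the standard assumption that T\({}_{0}\) and the TOF prototype contribute equally and independently to the width of the measured time difference:

\[\sigma_{\Delta t}^{2}=\sigma_{T_{0}}^{2}+\sigma_{\mathrm{TOF}}^{2},\qquad\sigma_{T_{0}}=\sigma_{\mathrm{TOF}}\;\Rightarrow\;\sigma_{\mathrm{TOF}}=\frac{\sigma_{\Delta t}}{\sqrt{2}}=\frac{117\ \mathrm{ps}}{\sqrt{2}}\approx 83\ \mathrm{ps}.\]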
## V Conclusion
In this paper, the trigger system of the HIRFL-CSR external-target experiment (CEE) is described. The trigger system has a master-slave structure, using optical fibers to transmit signals in the inter-layer communication. The operations and calculations of the whole trigger scheme at all levels are implemented based on FPGA technology. Different running modes, including beam experiment, cosmic ray calibration and electronics self-checks, are implemented. Communications of the trigger system with the DAQ and other systems can be established. Results of the beam test, where the prototypes of various sub-detectors, the DAQ and the global clocks are all connected, demonstrate that the trigger system functions correctly and meets the requirements of CEE.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China under Grant Nos. 11927901 and 11890712 and by Tsinghua University Initiative Scientific Research Program.
Figure 11: (Color online) The particle tracks were obtained by fitting pad charge deposition at each layer of TPC
Figure 12: (Color online) The time-of-flight resolution measured by the TOF subsystem.
2305.20063 | On the Origin of Linearity and Unitarity in Quantum Theory | We reconstruct the transformations of quantum theory using a physically
motivated postulate. This postulate states that transformations should be
locally applicable, and recovers the linear unitary maps from pure quantum
theory, as well as the completely positive, trace-preserving maps from mixed
quantum theory. Notably, in the pure case, linearity with respect to the
superposition rule on Hilbert spaces is derived rather than assumed (and
without any continuity assumptions). | Matt Wilson, Nick Ormrod | 2023-05-31T17:39:26Z | http://arxiv.org/abs/2305.20063v2 | # On the Origin of Linearity and Unitarity in Quantum Theory
###### Abstract
We reconstruct the transformations of quantum theory using a physically motivated postulate. This postulate states that transformations should be _locally applicable_, and recovers the linear unitary maps from pure quantum theory, as well as the completely positive, trace-preserving maps from mixed quantum theory. Notably, in the pure case, linearity with respect to the superposition rule is derived rather than assumed (and without any continuity assumptions).
###### Contents
* 1 Introduction
* 2 State-measurement Theories
* 2.1 Basic Structure
* 2.2 Spatial Composition
* 2.3 Locally Applicable Transformations
* 3 Reconstruction of Standard Physical Transformations
* 3.1 Unitary linear maps
* 3.2 Quantum Channels
* 3.3 Equivalences
* 3.4 From Transformations to Evolutions
* 4 Comparison with the Argument of Gisin
* 4.1 A review of the argument
* 4.2 Non-linear functions with no-superluminal signalling
* 4.3 On the Argument for Complete Positivity
* 4.4 Summary of Comparison
* 5 Conclusion
* A Using the Tensor Product to Construct Standard Transformations
* B One-to-one Correspondences
* C A nonlinear map that is deterministic and compatible with relativity
* D Diagrammatic Perspective
* D.1 Main Theorem: Diagrammatic Proof
## 1 Introduction
Why do pure quantum states evolve linearly and unitarily, and why do mixed states evolve according to completely positive trace-preserving maps? Here is a way to make this question more precise. Suppose one has a theory that describes the states of physical systems and measurements on them in the same way that quantum theory does. Is it necessary that this theory should treat transformations in the same way too? Or is there some reasonable modification of quantum theory that changes the transformations while leaving the rest intact [1, 2, 3]?
Of course, the answer depends on what is meant by 'reasonable'. Here is a suggestion: a reasonable transformation on a system is one that is _locally applicable_. By this we mean, roughly speaking, that one can imagine that the system is accompanied by some (possibly far away) environment, on which the transformation does not act. A hint that such a principle could be used to recover standard quantum transformations can be found in a pair of recent papers [4, 5], in which a formalization of local applicability in a rather different context was used to re-characterise the quantum supermaps [6, 7], also known as process matrices [8].
On this definition of 'reasonable', we show that the answer is 'no': at least in the finite-dimensional case, one cannot modify the transformations of quantum theory without either embracing a failure of local applicability or modifying its description of states and measurements. To that end, we introduce the notion of a _state-measurement theory_, which accounts for states and measurements while remaining agnostic about dynamics (Section 2). We then offer a formal definition of the locally applicable transformations on any state-measurement theory, and suggest that the possibility to formalise the axiom in such a theory-independent way is a good test for physicality of the axiom.
Then, we show that local applicability is an axiom from which the dynamics of pure and mixed quantum theory can be derived (Section 3).
Figure 1: The intuition behind local applicability. The system is separate from its environment; the locally applicable transformation \(\mathscr{L}\) acts only on the system; and in particular it is independent of measurements performed on the environment. One of the authors was immensely proud to have managed to represent a measurement using a picture of a magnifying glass in TikZ, the other simply figured there was a glitch.
The locally applicable transformations on finite-dimensional pure quantum theory are precisely the linear and isometric maps (unitary when the dimensions of the input and output match). Similarly, the locally applicable transformations on finite-dimensional mixed quantum theory are precisely the quantum channels.
Therefore, the transformations of quantum theory cannot be modified without either embracing nonlocality or modifying the description of states and measurements. This provides an explanation for why quantum systems evolve the way that they do: linear, unitary, and completely positive, trace-preserving transformations on systems are the only ones that can be applied locally given the states and measurements of their respective theories.
There are a few ways in which we imagine this result finding relevance and applications in non-linear quantum mechanics and informational quantum foundations. Firstly, it provides a no-go theorem for non-linear, non-unitary, or non-completely-positive modifications of quantum theory. This is relevant for proposed modifications that reject unitarity in order to avoid a measurement problem [9, 10, 11, 12, 13, 14, 15, 16, 17, 18], and that aim to model interactions between quantum particles and classical gravitational fields [19-60].
[MISSING_PAGE_POST]
## 2 State-measurement Theories

The notion of a state-measurement theory formalizes that idea, guided by the assumption that what remains in such a theory is just a description of states and measurements.1
Footnote 1: Accordingly, we are restricting ourselves to _operational_ theories in which the notion of measurement plays a fundamental role, such as standard quantum mechanics. This is in contrast to theories like Newtonian or Bohmian mechanics, in which measurements are just dynamical processes like any other.
### Basic Structure
In textbook quantum mechanics, systems are represented with Hilbert spaces, states are represented by the normalized vectors, and measurement outcomes are represented by projectors. Connecting these structures, the Born rule gives the probability of observation of each measurement outcome on a system conditional on the state to be measured, and the projection postulate gives a way to compute the post-measurement state from which probabilities of future measurement outcomes can be obtained. Together this data (in which the deterministic dynamics have yet to be specified) constitutes what we will refer to as pure quantum theory2.
Footnote 2: Mixed quantum theory can be defined in an analogous way.
We can generalize this idea beyond quantum theory. A general state-measurement theory essentially has to do two things. Firstly, it must prescribe probabilities for the possible outcomes of any measurement of any system, conditioned on the state of the system before the measurement. Secondly, it must provide a rule determining the state of the system after observation of a measurement outcome. These basic notions are illustrated in Figure 2, and formalized below.
**Definition 1**.: _A state-measurement theory consists of a set of systems \(O=\{A,B,\ldots\}\), and for each system \(A\)_
* _a set of states_ \(S_{A}\)_;_
* _a set of (representatives of) measurement outcomes_ \(M_{A}\)_;_
* _a probability function_ \(p_{A}:S_{A}\times M_{A}\rightarrow[0,1]\)_;_
* _a partial function_ \(u_{A}:S_{A}\times M_{A}\to S_{A}\) _expressing the updated description of a state after a measurement outcome is obtained; and finally_
Figure 2: The basic idea of a state-measurement theory. The theory allows one to calculate the probability of obtaining a measurement outcome m given the initial state \(s\), as well as the updated state \(s^{\prime}\) after the measurement.
* _a 'null' measurement_ \(\mathbb{I}_{A}\) _satisfying_ \(u_{A}(s,\mathbb{I}_{A})=s\) _and_ \(p_{A}(s,\mathbb{I}_{A})=1\)_._
The quantum theories of pure states and mixed states provide the two key instances of state-measurement theories we will study in this paper. Their precise definitions are given in Section 3.
A few clarifications are in order. \(M_{A}\) covers all possible outcomes of all possible measurements on \(A\). In theories where a given outcome can be embedded in multiple measurement contexts, but where its probability and state-measurement rule are the same in each case, \(M_{A}\) needs only to contain one copy of this outcome. As the next section explains, an example is pure quantum theory, where we can think of \(M_{A}\) as a set of projectors, each of which can appear in multiple contexts.3 We do not require that \(u_{A}\) is a full function (i.e. one that is defined on all possible inputs) since we want to allow for the edge-case in which \(s\) cannot be updated according to the observation of \(\mathfrak{m}\) because the probability \(p_{A}(s,\mathfrak{m})\) of observation of \(\mathfrak{m}\) is \(0\). The null measurement represents the absence of a measurement.
Footnote 3: On the other hand, in a theory where an outcome is represented by a projector but the probability and update rules took into account which measurement that projector is a part of, \(M_{A}\) would have to include multiple elements that correspond to that same projector.
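Definition 1 can also be rendered as a small programming interface, which may help fix ideas. The following Python sketch is purely illustrative and not part of the formalism:

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

S = TypeVar("S")  # states
M = TypeVar("M")  # measurement outcomes

@dataclass
class StateMeasurementSystem(Generic[S, M]):
    """One system of a state-measurement theory (Definition 1)."""
    p: Callable[[S, M], float]        # probability rule p_A
    u: Callable[[S, M], Optional[S]]  # partial update rule u_A (None where undefined)
    null: M                           # the 'null' measurement

    def null_is_trivial(self, s: S) -> bool:
        # The null measurement occurs with certainty and leaves states fixed.
        return self.p(s, self.null) == 1.0 and self.u(s, self.null) == s
```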
### Spatial Composition
A general state-measurement theory needn't have any notion of _joint_ systems comprised of multiple subsystems. In particular, there might not be any way of considering combinations of states or measurements that are performed at the same time (see Figure 3). Since such notions are a prerequisite for discussions of locality, let us now define the special class of state-measurement theories that have them.
**Definition 2**.: _A spatial state-measurement theory is a state-measurement theory equipped with_
* _An associative function_ \(\otimes:O\times O\to O\)_, meaning_ \((A\otimes B)\otimes C=A\otimes(B\otimes C)\)__
* _For each_ \(A,B\) _an associative family of functions_ \(\otimes_{AB}:S_{A}\times S_{B}\to S_{A\otimes B}\)_, meaning that_ \((\psi_{A}\otimes\psi_{B})\otimes\psi_{C}=\psi_{A}\otimes(\psi_{B}\otimes\psi_{C})\)__
* _For each_ \(A,B\) _an associative family of functions_ \(\otimes:M_{A}\times M_{B}\to M_{A\otimes B}\)_, meaning that_ \((\mathfrak{m}_{A}\otimes\mathfrak{m}_{B})\otimes\mathfrak{m}_{C}=\mathfrak{m} _{A}\otimes(\mathfrak{m}_{B}\otimes\mathfrak{m}_{C})\)__
_We will immediately adopt the convention of writing tensor products of systems without the symbol \(\otimes\), for instance denoting \(A\otimes B\) by simply \(AB\)._
Figure 3: The intuition behind spatial state-measurement theories. Particularly if systems are objects in space, one should be able to consider combinations of them. This means allowing both combinations of states \(s\) and \(r\) and measurement outcomes \(\mathfrak{m}\) and \(\mathfrak{m}\).
One could reasonably ask for compatibility requirements between probabilities and updates and the spatial composition structure4. Furthermore, one could ask for some equivalence between the states of \(A\otimes B\) and the states of \(B\otimes A\). We make no such assumptions here since they are not necessary for the results of the paper. We leave the formalization of such principles and their implications, to give a more fully developed framework for state-measurement theories, as a future research direction.
Footnote 4: Such as those which are introduced in the OPF approach to constructing the Born rule from the dynamics of quantum theory [46, 47, 48].
### Locally Applicable Transformations
We are finally in a position to define local applicability. A transformation on the system \(A\) should certainly give rise to a function on the states of \(A\). But if the transformation is locally applicable, then one should also be able to consider its action on \(A\otimes X\), where \(X\) is some distant environment, and find that it 'only acts on \(A\)'. This idea is formalized below.
**Definition 3**.: _Let \(A,B\) be systems of a spatial state-measurement theory. A locally applicable transformation of type \(\mathscr{L}:A\to B\) is a function \(\mathscr{L}_{[-]}:S_{A}\to S_{B}\) and a family \(\mathscr{L}_{X}:S_{AX}\to S_{BX}\) of functions such that_
* _State locality. Parallel composition commutes with the action of_ \(\mathscr{L}\)_. Formally, for all_ \(\psi\in S_{A}\) _and_ \(\phi\in S_{X}\) _we have_ \(\mathscr{L}_{X}(\psi\otimes\phi)=\mathscr{L}_{[-]}(\psi)\otimes\phi\)_, and similarly for all_ \(\psi_{AX}\in S_{AX}\) _and_ \(\phi_{X^{\prime}}\in S_{X^{\prime}}\) _then_ \(\mathscr{L}_{XX^{\prime}}(\psi\otimes\phi)=\mathscr{L}_{X}(\psi)\otimes\phi\)_._
* _No signaling. For every state_ \(\psi\in S_{AX}\) _and measurement outcome_ \(\mathfrak{m}\in M_{X}\)_, the probability of_ \(\mathfrak{m}\) _is not affected by_ \(\mathscr{L}\)_. Formally,_ \(p_{BX}(\mathscr{L}_{X}(\psi),\mathbb{I}_{B}\otimes\mathfrak{m})=p_{AX}(\psi,\mathbb{I}_{A}\otimes\mathfrak{m})\)_._
* _Update commutativity. It does not matter whether one first acts locally on_ \(A\) _and then obtains a measurement outcome on the environment, or the other way round. Formally, for every state_ \(\psi\in S_{AX}\) _and measurement outcome_ \(\mathfrak{m}\in M_{X}\)_,_ \(u_{BX}(\mathscr{L}_{X}(\psi),\mathbb{I}_{B}\otimes\mathfrak{m})=\mathscr{L}_{X}(u_{AX}(\psi,\mathbb{I}_{A}\otimes\mathfrak{m}))\)_._
We note that Definition 3 nowhere assumes any sort of linearity. Nevertheless, the next section will show that the standard linear quantum dynamics are precisely the locally applicable transformations on the appropriate state-measurement theories.
Our formalization of a transformation might strike the reader as a little unusual. The standard way to think of a transformation \(U:A\to B\) in quantum theory applied to part of a system \(A\otimes X\) is to simply take the tensor product with the identity
Figure 4: Intuition for no-signalling. The probability of some outcome \(\mathfrak{m}\) after measuring the environment is independent of whether or not the locally applicable transformation \(\mathscr{L}\) is performed.
\(U\otimes I_{X}:A\otimes X\to B\otimes X\), in which case one needn't bother to talk about any family of functions as part of \(U\)'s definition. Here, however, we cannot do that since _the tensor product is an operation on linear maps_, and its extension to non-linear maps is unclear. Since we do not want to assume from the start that dynamics are given by linear maps, talk of general families of functions is necessary, however cumbersome. Ultimately, we will be able to recover the tensor product once we derive linearity from local applicability.
## 3 Reconstruction of Standard Physical Transformations
Having established the theory-independent definitions of static physical theories as state-measurement theories and then dynamics as locally applicable transformations, we will now move on to our main results. For the state-measurement theory of pure quantum states, we will find the locally applicable transformations are precisely the unitary linear maps. For the state-measurement theory of mixed quantum states, the locally applicable transformations are precisely the quantum channels.
Before we move on to formal proofs, let us explain why the result holds on an intuitive level. At the heart of the proofs is quantum entanglement, and in particular the Bell state \(\ket{\Phi^{+}}:=\frac{1}{\sqrt{d}}\sum_{i}\ket{ii}\). An important feature of entangled states is that they facilitate the teleportation of other states [61]. In the case of the 'pure' quantum theory of vectors on Hilbert spaces, we will show that an arbitrary locally applicable transformation is a linear (and ultimately unitary) map by applying it to the Bell state to produce some output vector, and post-selecting on another Bell state in order to teleport an arbitrary state \(\psi\) into the input of the locally applicable transformation.
Figure 5: Intuition for update commutativity. It does not matter whether one first applies \(\mathscr{L}\) and then obtains an outcome or the other way round.
It turns out that this protocol is equivalent to simply applying the locally applicable transformation directly on \(\psi\). This shows that the locally applicable transformation has to be linear, since it can be constructed by combining bras and kets. A similar argument establishes that in the'mixed' quantum theory of density operators, the locally applicable transformations are convex linear (and ultimately completely positive, trace-preserving) maps. Thus the existence of entanglement - a core conceptual ingredient of quantum theory - is the reason that local applicability recovers the dynamics of quantum theory.
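The teleportation identity behind this argument can be checked numerically for the case where the transformation is a unitary extended in the standard way (the situation of Lemma 1 below). The following numpy sketch is ours; it uses the kron ordering \(B\otimes A^{\prime}\otimes A^{\prime\prime}\) and verifies both the Bell-state update relation and the identity \(\mathscr{L}_{[-]}(\psi)=d\,(I\otimes\bra{\Phi^{+}})(\mathscr{L}_{A}(\Phi^{+})\otimes\psi)\).

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

# A random normalized state and a random unitary (via QR decomposition).
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# |Phi+> = (1/sqrt(d)) sum_i |i>|i>
phi_plus = np.eye(d).reshape(d * d) / np.sqrt(d)

# Bell-state update: (I (x) sqrt(d)|psi*><psi*|)|Phi+> = |psi> (x) |psi*>
P = np.sqrt(d) * np.outer(psi.conj(), psi)        # sqrt(d)|psi*><psi*|
assert np.allclose(np.kron(np.eye(d), P) @ phi_plus, np.kron(psi, psi.conj()))

# Teleportation identity with L_A = U (x) I:
chi = np.kron(U, np.eye(d)) @ phi_plus            # L_A(Phi+) lives on B (x) A'
out = d * (np.kron(np.eye(d), phi_plus.conj()) @ np.kron(chi, psi))
assert np.allclose(out, U @ psi)                  # equals L(psi)
print("identities verified")
```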
### Unitary linear maps
To study the locally applicable transformations on pure quantum theory, let us lay out its state-measurement structure.
**Example 1** (Pure Quantum Theory).: _In finite-dimensional pure quantum theory each system \(A\) is a finite-dimensional Hilbert space. The set \(S_{A}\) of states of system \(A\) is given by the set \(A_{1}\) of elements of the Hilbert space of norm \(1\). The set \(M_{A}\) of measurements on \(A\) is given by the set \(\Pi(A)\) of orthogonal projectors on \(A\). The probability rule is given by_
\[p(\psi,\pi):=\left\langle\psi\right|\pi\left|\psi\right\rangle\]
_and we take the update rule to be_
\[u(\psi,\pi):=\frac{\pi\left|\psi\right\rangle}{\sqrt{p(\psi,\pi)}}\]
_The null measurement is given by the identity map, which is indeed an orthogonal projector. This theory is a spatial state-measurement theory, with spatial composition of systems, states, and measurements all inherited from the Hilbert space tensor product._
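For concreteness, the probability and update rules of this example can be implemented directly; the following numpy sketch is ours and purely illustrative.

```python
import numpy as np

def p(psi, proj):
    """Born rule: p(psi, pi) = <psi| pi |psi>."""
    return float(np.real(psi.conj() @ proj @ psi))

def u(psi, proj):
    """Projection postulate: pi|psi> / sqrt(p); undefined (None) when p = 0."""
    prob = p(psi, proj)
    if prob == 0.0:
        return None
    return (proj @ psi) / np.sqrt(prob)

# The null measurement (identity projector) is certain and leaves states fixed.
d = 3
psi = np.ones(d, dtype=complex) / np.sqrt(d)
assert np.isclose(p(psi, np.eye(d)), 1.0)
assert np.allclose(u(psi, np.eye(d)), psi)
```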
We can now describe the locally applicable transformations on this state-measurement theory. Before we begin, let us establish a point on notation: for a function \(\mathscr{L}\) on vectors in a Hilbert space, a few choices are available, such as \(\mathscr{L}_{[-]}(\psi)\), \(\mathscr{L}_{[-]}(\left|\psi\right\rangle)\), and \(\left|\mathscr{L}_{[-]}(\psi)\right\rangle\). Finding a unique choice of notation that does not somewhere produce unreadable mathematics is a harder problem than the one tackled in this paper5. For now, we hope that the reader can forgive us for freely switching between those three notations whenever it suits the presentation.
Footnote 5:...and so is left for future work.
**Example 2** (Locally Applicable Transformations on pure quantum theory).: _Let \(A,B\) be Hilbert spaces. A locally applicable transformation on pure quantum theory of type \(A\to B\) is a function \(\mathscr{L}_{[-]}:S_{A}\to S_{B}\) together with a family of functions \(\mathscr{L}_{X}:(A\otimes X)_{1}\rightarrow(B\otimes X)_{1}\) such that:_
* _Parallel composition commutes with the action of_ \(\mathscr{L}\)_, meaning_ \[\mathscr{L}_{XX^{\prime}}(\psi\otimes\phi)=\mathscr{L}_{X}(\psi)\otimes\phi, \qquad\mathscr{L}_{X}(\psi\otimes\phi)=\mathscr{L}_{[-]}(\psi)\otimes\phi\]
* _Probabilities of measurements on auxiliary systems are unaffected by_ \(\mathscr{L}\)_, meaning_ \[\left\langle\mathscr{L}_{X}(\psi)\right|I\otimes\pi\left|\mathscr{L}_{X}(\psi) \right\rangle=\left\langle\psi\right|I\otimes\pi\left|\psi\right\rangle\]
* _The updates due to measurements on auxiliary systems commute with_ \(\mathscr{L}\)_, meaning_ \[\frac{I\otimes\pi\left|\mathscr{L}_{X}(\psi)\right\rangle}{\sqrt{p(\mathscr{L}_{X }(\psi),I\otimes\pi)}}=\mathscr{L}_{X}\Bigg{(}\frac{I\otimes\pi\left|\psi \right\rangle}{\sqrt{p(\psi,I\otimes\pi)}}\Bigg{)},\] _where we have defined_ \(p(\psi,\pi):=\left\langle\psi\right|\pi\left|\psi\right\rangle\)__
Note that using the second bullet point in combination with the third returns
\[\frac{I\otimes\pi\left|\mathscr{L}_{X}(\psi)\right\rangle}{\sqrt{p(\psi,I \otimes\pi)}}=\mathscr{L}_{X}\Bigg{(}\frac{I\otimes\pi\left|\psi\right\rangle} {\sqrt{p(\psi,I\otimes\pi)}}\Bigg{)}.\]
We will prove that every locally applicable transformation is a unitary linear map. First, though, let us note the easier converse: every unitary linear map can be used to construct a locally applicable transformation.
**Lemma 1** (Unitary Linear Maps).: _Let \(U:A\to B\) be a unitary linear map between Hilbert spaces. The family of functions \(\mathscr{L}_{X}^{U}\) defined by_
\[\mathscr{L}_{X}^{U}(\psi):=U\otimes I_{X}\left|\psi\right\rangle,\]
_is a locally-applicable transformation on pure quantum theory._
Proof.: Given in the Appendix.
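Although the formal proof is deferred to the Appendix, the no-signalling and update-commutativity conditions are easy to check numerically for \(\mathscr{L}^{U}_{X}=U\otimes I_{X}\). The sketch below (ours) does so for a random joint state and a random rank-1 projector on the environment.

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dX = 3, 2

def random_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

U = random_unitary(dA)
LX = lambda v: np.kron(U, np.eye(dX)) @ v          # L^U_X = U (x) I_X

psi = rng.normal(size=dA * dX) + 1j * rng.normal(size=dA * dX)
psi /= np.linalg.norm(psi)                          # random state on A (x) X
x = random_unitary(dX)[:, 0]
pi = np.kron(np.eye(dA), np.outer(x, x.conj()))     # I (x) pi, pi a projector on X

prob = lambda v: float(np.real(v.conj() @ pi @ v))
upd = lambda v: pi @ v / np.sqrt(prob(v))

assert np.isclose(prob(LX(psi)), prob(psi))         # no signalling
assert np.allclose(upd(LX(psi)), LX(upd(psi)))      # update commutativity
print("Lemma 1 conditions hold for U (x) I")
```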
Let us now state and prove the first of our main theorems, which says that the _only_ locally applicable transformations are the unitary linear maps. In other words, local applicability implies linearity and unitarity.
**Theorem 1**.: _A function \(T:A\to A\) is a linear and unitary map if and only if \(T=\mathscr{L}_{[-]}\) for some locally applicable transformation \(\mathscr{L}:A\to A\) on pure quantum theory._
Proof.: By Lemma 1, if \(T\) is unitary, then \(T=\mathscr{L}_{[-]}\) for some locally applicable transformation \(\mathscr{L}:A\to A\).
For the other direction, suppose that \(\mathscr{L}:A\to A\) is a locally applicable transformation. We will show that
\[\mathscr{L}_{[-]}(\left|\psi\right\rangle)=d\,(I\otimes\left\langle\Phi^{+}\right|)(\left|\mathscr{L}_{A}(\Phi^{+})\right\rangle\otimes\left|\psi\right\rangle),\]
where \(d\) is the dimension of \(A\). It will follow that \(\mathscr{L}_{[-]}\) is a linear map, since it can be constructed by combining bras, kets, tensor products, and the identity operator - all of which are linear. Later, we will check that unitarity immediately follows. Consider the action of \(\mathscr{L}_{[-]}\) on \(\psi\in S_{A}\). It follows from state locality (the first part of local applicability) that
\[\begin{split}\mathscr{L}_{[-]}(\left|\psi\right\rangle)& =(I\otimes\left\langle\psi\right|^{*})(\mathscr{L}_{[-]}(\left| \psi\right\rangle)\otimes\left|\psi\right\rangle^{*})\\ &=(I\otimes\left\langle\psi\right|^{*})\mathscr{L}_{A}(\left| \psi\right\rangle\otimes\left|\psi\right\rangle^{*})\end{split} \tag{1}\]
Above, \(\mathscr{L}_{A}\) denotes the extension of \(\mathscr{L}_{[-]}\) that also acts on another copy of \(A\), and \(\left|\psi\right\rangle^{*}\) is the conjugate of \(\left|\psi\right\rangle\). We define the maximally entangled state \(\left|\Phi^{+}\right\rangle:=\frac{1}{\sqrt{d}}\sum_{i=1}^{d}\left|i\right\rangle\otimes\left|i\right\rangle\), and the \(\left|i\right\rangle\) form some self-conjugate basis for \(A\). A very quick bit of algebra reveals that \(\left|\psi\right\rangle\otimes\left|\psi\right\rangle^{*}\) is the state obtained when a measurement of \(\left|\Phi^{+}\right\rangle\) yields the outcome corresponding to \(I\otimes\left|\psi\right\rangle^{*}\left\langle\psi\right|^{*}\). Or, put more mathematically,
\[\left|\psi\right\rangle\otimes\left|\psi\right\rangle^{*}=(I\otimes\sqrt{d} \left|\psi\right\rangle^{*}\left\langle\psi\right|^{*})\left|\Phi^{+}\right\rangle. \tag{2}\]
Now, recall that locally applicable transformations commute with state-updates on the extension systems. This, together with equations (1) and (2), implies that
\[\begin{split}(I\otimes\tfrac{1}{\sqrt{p}}\bra{\psi}^{*})\,\mathscr{L}_{A}(\ket{\Phi^{+}})&=(I\otimes\bra{\psi}^{*})(I\otimes\tfrac{1}{\sqrt{p}}\ket{\psi}^{*}\bra{\psi}^{*})\,\mathscr{L}_{A}(\ket{\Phi^{+}})\\ \text{by update}\quad&=(I\otimes\bra{\psi}^{*})\,\mathscr{L}_{A}\big{(}(I\otimes\sqrt{d}\,\ket{\psi}^{*}\bra{\psi}^{*})\ket{\Phi^{+}}\big{)}\\ \text{by }(2)\quad&=(I\otimes\bra{\psi}^{*})\,\mathscr{L}_{A}(\ket{\psi}\otimes\ket{\psi}^{*})\\ \text{by }(1)\quad&=\mathscr{L}_{[-]}(\ket{\psi}),\end{split}\]
where \(p:=p(\Phi^{+},I\otimes\ket{\psi}^{*}\bra{\psi}^{*})=1/d\). Using \(\bra{\Phi^{+}}(\ket{w}\otimes\ket{\psi})=\tfrac{1}{\sqrt{d}}\bra{\psi}^{*}\ket{w}\), this rearranges to
\[\mathscr{L}_{[-]}(\ket{\psi})=\sqrt{d}\,(I\otimes\bra{\psi}^{*})\,\mathscr{L}_{A}(\ket{\Phi^{+}})=d\,(I\otimes\bra{\Phi^{+}})(\ket{\mathscr{L}_{A}(\Phi^{+})}\otimes\ket{\psi}),\]
as claimed, and so \(\mathscr{L}_{[-]}\) is linear. Unitarity is now immediate: \(\mathscr{L}_{[-]}\) is a linear map sending normalised states to normalised states, hence an isometry, and an isometric endomorphism of the finite-dimensional space \(A\) is unitary.
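As a sanity check on the derivation (ours, not part of the paper): for \(\mathscr{L}\) built from a random unitary \(U\), the reconstruction \(d\,(I\otimes\bra{\Phi^{+}})(\mathscr{L}_{A}(\Phi^{+})\otimes\ket{\psi})\) indeed returns \(U\ket{\psi}\).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0
phi /= np.sqrt(d)                                  # |Phi+>

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

LA_phi = np.kron(U, np.eye(d)) @ phi               # L_A(Phi+) = (U (x) I)|Phi+>
bra = np.kron(np.eye(d), phi[None, :])             # I (x) <Phi+|
recon = d * bra @ np.kron(LA_phi, psi)

assert np.allclose(recon, U @ psi)
```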
**Example 3** (Mixed quantum theory).: _In finite-dimensional mixed quantum theory, each system \(A\) is a finite-dimensional Hilbert space. The set \(S_{A}\) of states of the system \(A\) is the set \(D(A)\) of density operators (i.e. the trace-one positive operators) on \(A\). The set \(M_{A}\) of measurement outcomes is the set \(\Pi(A)\) of projectors on \(A\); the nothing outcome is represented by the identity operator. The probability rule \(p:D(A)\times\Pi(A)\to[0,1]\) is given by_
\[p(\rho,\pi):=\operatorname{Tr}(\rho\pi),\]
_and the update rule \(u:D(A)\times\Pi(A)\to D(A)\) is given by_
\[u(\rho,\pi)=\frac{\pi\rho\pi}{p(\rho,\pi)}.\]
_This is a spatial state-update theory where parallel composition on both states and measurements is given by the tensor product._
More generally, one could include arbitrary positive-operator valued measurements (POVMs) in the state-measurement theory, or, even more generally, arbitrary quantum instruments. In these cases, the following lemma and theorems still hold. Let us now describe the locally applicable transformations on mixed quantum theory.
**Example 4** (Locally applicable transformations on static mixed quantum theory).: _Let \(A,B\) be Hilbert spaces. A locally applicable transformation on mixed quantum theory of type \(A\to B\) is a family of functions \(\mathscr{L}_{X}:D(A\otimes X)\to D(B\otimes X)\) such that:_
* _Parallel composition commutes with the action of_ \(\mathscr{L}\)_, meaning_ \[\mathscr{L}_{XX^{\prime}}(\rho_{AX}\otimes\sigma_{X^{\prime}})=\mathscr{L}_{ X}(\rho_{AX})\otimes\sigma_{X^{\prime}},\qquad\mathscr{L}_{X}(\rho_{A}\otimes \sigma_{X})=\mathscr{L}_{[-]}(\rho_{A})\otimes\sigma_{X}\]
* _Probabilities of measurements on auxiliary systems are untouched by_ \(\mathscr{L}\)_, meaning_ \[\operatorname{Tr}(\mathscr{L}_{X}(\rho)(I\otimes\pi_{X}))=\operatorname{Tr}( \rho(I\otimes\pi_{X}))\]
* _The updates due to measurements on auxiliary systems commute with_ \(\mathscr{L}\)_, meaning_ \[\frac{(I\otimes\pi)\mathscr{L}_{X}(\rho)(I\otimes\pi)}{p(\mathscr{L}_{X}(\rho),I\otimes\pi)}=\mathscr{L}_{X}\Bigg{(}\frac{(I\otimes\pi)\rho(I\otimes\pi)}{p(\rho,I\otimes\pi)}\Bigg{)},\] _where we recall that we have defined_ \(p(\rho,\pi):=\operatorname{Tr}(\rho\pi)\)_._
Again using the second and third bullet points together gives
\[\frac{(I\otimes\pi)\mathscr{L}_{X}(\rho)(I\otimes\pi)}{p(\rho,I\otimes\pi)} =\mathscr{L}_{X}\Bigg{(}\frac{(I\otimes\pi)\rho(I\otimes\pi)}{p(\rho,I\otimes \pi)}\Bigg{)}.\]
Note that, again, locally applicable transformations are only ever defined on the positive linear operators with the most direct physical interpretation, namely the normalised ones. We now take the reader through an analogous path to establish that local applicability perfectly replaces complete positivity and trace preservation as a postulate for transformations of density matrices. To begin with, let us take the easy direction. Clearly, any quantum channel can be used to define a locally applicable transformation, by taking tensor products with appropriate identity channels.
**Lemma 2** (Quantum Channels).: _Let \(\mathcal{E}:A\to B\) be a completely positive trace-preserving map between Hilbert spaces. The family of functions \(\mathscr{L}_{X}^{\mathcal{E}}\) defined by_
\[\mathscr{L}_{X}^{\mathcal{E}}(\rho):=\mathcal{E}\otimes\mathcal{I}_{X}[\rho],\]
_is a locally applicable transformation on mixed quantum theory._
Proof.: Given in the Appendix.
Now for the crucial point. Every locally applicable transformation on density matrices is just some collection of extensions of a quantum channel. In other words, quantum channels can be completely characterised as the only function on density matrices that can appear as the unextended part of a locally applicable transformation.
**Theorem 2**.: _A transformation \(T:A\to B\) in finite-dimensional mixed quantum theory is a quantum channel (i.e. a linear, completely positive, trace-preserving map from the linear operators on \(A\) to those on \(B\)) if and only if \(T=\mathscr{L}_{[-]}\) for some locally applicable transformation \(\mathscr{L}:A\to B\)._
Proof.: Let us begin by noting that, for any Hilbert space \(A\) of dimension \(d\),
\[d^{2}(I\otimes\Phi^{+})(\Phi^{+}\otimes\rho)(I\otimes\Phi^{+})=\rho\otimes \Phi^{+},\]
where \(\Phi^{+}:=|\Phi^{+}\rangle\!\langle\Phi^{+}|\). We now define the map
\[\mathcal{E}^{\mathscr{L}}(\rho):=d^{2}(I_{A}\otimes\langle\Phi^{+}|)( \mathscr{L}_{A^{*}}(\Phi^{+})\otimes\rho)(I_{A}\otimes|\Phi^{+}\rangle),\]
and confirm that \(\mathcal{E}^{\mathscr{L}}(\rho)=\mathscr{L}_{[-]}(\rho)\):
\[\begin{split}\mathcal{E}^{\mathscr{L}}(\rho)&:=d^{2}(I_{A}\otimes\langle\Phi^{+}|)(\mathscr{L}_{A^{*}}(\Phi^{+})\otimes\rho)(I_{A}\otimes|\Phi^{+}\rangle)\\ &=d^{2}(I_{A}\otimes\langle\Phi^{+}|)\,\mathscr{L}_{A^{*}A}(\Phi^{+}\otimes\rho)\,(I_{A}\otimes|\Phi^{+}\rangle)\\ &=\operatorname{Tr}_{A^{*}A}[d^{2}(I_{A}\otimes\Phi^{+})\,\mathscr{L}_{A^{*}A}(\Phi^{+}\otimes\rho)\,(I_{A}\otimes\Phi^{+})]\\ &=\operatorname{Tr}_{A^{*}A}[\mathscr{L}_{A^{*}A}\big{(}d^{2}(I_{A}\otimes\Phi^{+})(\Phi^{+}\otimes\rho)(I_{A}\otimes\Phi^{+})\big{)}]\\ &=\operatorname{Tr}_{A^{*}A}[\mathscr{L}_{A^{*}A}(\rho\otimes\Phi^{+})]\\ &=\operatorname{Tr}_{A^{*}A}[\mathscr{L}_{[-]}(\rho)\otimes\Phi^{+}]\\ &=\mathscr{L}_{[-]}(\rho)\end{split}\]
Finally, the map \(\rho\mapsto d^{2}(I_{A}\otimes\langle\Phi^{+}|)(\mathscr{L}_{A^{*}}(\Phi^{+})\otimes\rho)(I_{A}\otimes|\Phi^{+}\rangle)\) is completely positive, as it is the concatenation of the completely positive map \(\rho\mapsto\mathscr{L}_{A^{*}}(\Phi^{+})\otimes\rho\) with the completely positive map \(\rho\mapsto(I_{A}\otimes\langle\Phi^{+}|)(\rho)(I_{A}\otimes|\Phi^{+}\rangle)\), along with multiplication by a positive scalar. It is trace-preserving since each \(\mathscr{L}_{X}\) is defined only on normalised states: every \(\rho\) of trace one is mapped to the density operator \(\mathcal{E}^{\mathscr{L}}(\rho)=\mathscr{L}_{[-]}(\rho)\), which has trace \(1\).
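Analogously to the pure case, both the opening identity and the Choi-style reconstruction used in this proof can be spot-checked numerically. In the sketch below (ours, not part of the paper), the channel is amplitude damping, a standard textbook choice; any Kraus-specified channel would do.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
I = np.eye(d)

phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0
phi /= np.sqrt(d)                    # |Phi+>
Phi = np.outer(phi, phi)             # the projector Phi+ = |Phi+><Phi+|

# amplitude-damping Kraus operators (an arbitrary example channel)
g = 0.4
K = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]]),
     np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])]

def E(X):        # the channel on a single system
    return sum(k @ X @ k.conj().T for k in K)

def E1(X):       # the channel acting on the first half of a bipartite state
    return sum(np.kron(k, I) @ X @ np.kron(k, I).conj().T for k in K)

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho)                 # a random density operator

# the opening identity: d^2 (I x Phi)(Phi x rho)(I x Phi) = rho x Phi
lhs = d**2 * np.kron(I, Phi) @ np.kron(Phi, rho) @ np.kron(I, Phi)
assert np.allclose(lhs, np.kron(rho, Phi))

# the reconstruction: E(rho) = d^2 (I x <Phi|)(E1(Phi) x rho)(I x |Phi>)
B = np.kron(I, phi[None, :])         # I x <Phi+|
recon = d**2 * B @ np.kron(E1(Phi), rho) @ B.conj().T
assert np.allclose(recon, E(rho))
print("both checks pass")
```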
### Equivalences
It is clear from Theorem 1 that each locally applicable transformation on pure quantum theory can be used to define the unique unitary linear map that is its unextended function \(\mathscr{L}_{[-]}\). However, one might wonder whether the same unitary can correspond to more than one locally applicable transformation. This would be possible if there were other ways of extending \(\mathscr{L}_{[-]}\) besides taking \(\mathscr{L}_{X}{:=}\mathscr{L}_{[-]}\otimes I_{X}\). But in fact, there are no such alternative extensions, meaning that the locally applicable transformations and the unitary linear maps are in perfect one-to-one correspondence.
**Theorem 3**.: _Every locally applicable transformation \(\mathscr{L}:A\to B\) on pure quantum theory can be written_
\[\mathscr{L}_{X}(\psi)=\mathscr{L}_{[-]}\otimes I_{X}\ket{\psi},\]
_with \(\mathscr{L}_{[-]}\) a unitary linear map._
Proof.: Given in Appendix B. The argument is a straightforward generalization of the proof of Theorem 1, where we replace \(\mathscr{L}_{[-]}\) with \(\mathscr{L}_{X}\).
Let us spell out in detail then how it is that the previous theorem sets up a one-to-one correspondence. Every locally applicable transformation \(\mathscr{L}\) can be used to construct a unitary by simply forgetting all but \(\mathscr{L}_{[-]}\). Conversely, a locally applicable transformation can be constructed from any unitary \(U\) by defining \(\mathscr{L}_{X}\ket{\psi}=U\otimes I_{X}\ket{\psi}\). It is easy to see that these constructions are inverse to each other in either direction, meaning that unitaries and locally applicable transformations are in bijection.
As one can straightforwardly verify, the outlined correspondence even preserves composition. That is, the locally applicable transformation \(\mathscr{L}^{U}\) corresponding to the unitary \(U=U_{2}\circ U_{1}\) is the one obtained by applying \(\mathscr{L}^{U_{1}}\) followed by \(\mathscr{L}^{U_{2}}\), and similarly we have \(U^{\mathscr{L}}=U^{\mathscr{L}_{2}}\circ U^{\mathscr{L}_{1}}\) whenever \(\mathscr{L}=\mathscr{L}_{2}\circ\mathscr{L}_{1}\)7. Consequently, locally applicable transformations and unitary linear maps are equivalent in quite a robust sense. The postulate that the dynamics of pure quantum theory should be unitary linear maps (tensored with identities when an environment is present) is logically equivalent to the postulate that the dynamics are locally applicable transformations.
Footnote 7: Here the composition and identity for the monoid (in this case, group) of locally applicable transformations are given in the obvious way, defining \((\mathscr{L}_{2}\circ\mathscr{L}_{1})_{X}:=(\mathscr{L}_{2})_{X}\circ(\mathscr{L}_{1})_{X}\)
One can tell a similar story in the case of mixed quantum theory.
**Theorem 4**.: _Every locally applicable transformation \(\mathscr{L}:A\to A\) on mixed quantum theory can be written_
\[\mathscr{L}_{X}(\rho)=\mathscr{L}_{[-]}\otimes\mathcal{I}_{X}(\rho),\]
_with \(\mathscr{L}_{[-]}\) a quantum channel._
Proof.: Given in Appendix B. The argument is a straightforward generalisation of the proof for Theorem 2.
Consequently, there is a composition-preserving, one-to-one correspondence between the locally applicable transformations on mixed quantum theory and the quantum channels. The postulate of local applicability is logically equivalent to the postulate of complete positivity and trace preservation for the transformations of mixed quantum theory.
### From Transformations to Evolutions
So far, we have dealt with _transformations_ - functions that discontinuously map states at the time \(t\) to states at some later \(t^{\prime}\) (if indeed one wants to consider them as embedded in time at all). We have yet to reconstruct any _evolutions_, which describe how states evolve continuously throughout time. In pure quantum mechanics, the Schrodinger evolution applies
\[i\hbar\frac{d}{dt}\ket{\psi(t)}=H\ket{\psi(t)},\]
and in the mixed setting, general Markovian dynamics are modelled using the Lindblad evolution
\[\frac{d\rho(t)}{dt}=-\frac{i}{\hbar}[H,\rho(t)]+\sum_{j}\gamma_{j}\left(2L_{j} \rho(t)L_{j}^{\dagger}-L_{j}^{\dagger}L_{j}\rho(t)-\rho(t)L_{j}^{\dagger}L_{j} \right).\]
To move from transformations to evolutions, we would need to assume that our locally applicable transformations form at least a _dynamical semigroup_. This means assigning a locally applicable transformation to each value of a continuous time variable \(t\) in such a way that the transformation \(\mathscr{L}_{X}(0)\) at \(t=0\) is the identity function, the transformations obey the Markovianity condition \(\mathscr{L}_{X}(t+s)=\mathscr{L}_{X}(t)\circ\mathscr{L}_{X}(s)\), and the map \(t\mapsto\mathscr{L}(t)(\psi)\) is strongly continuous8. By our second main theorem, the dynamical semigroups of locally applicable transformations on mixed quantum theory are quantum dynamical semigroups, and so recover the Lindblad master equation when uniform continuity is also assumed [65]. Stone's theorem says that any one-parameter group of unitary linear maps can be generated from the Schrodinger evolution for some appropriate Hamiltonian. It seems, then, that asking for dynamical groups of locally applicable transformations on pure quantum theory recovers the Schrodinger evolution. It would be considerably more satisfying if there were a single semigroup-style condition that works in both cases.
Footnote 8: Note that imposing continuity requirements from the outset would require us to define topological state-measurement theories, an adaptation of the standard framework in which each state space is upgraded to a topological space. It is conceivable that one could do better, and even canonically inherit a suitable topological structure from the domain \([0,1]\) of the probability functions; such a possibility is left for future work
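As a small illustration of the mixed-theory side of this statement, the following numpy sketch (ours; the Hamiltonian, jump operator, and rate are arbitrary choices) exponentiates a Lindblad generator, written in the factor-2 convention displayed above with \(\hbar=1\), and checks that each resulting map is trace preserving and completely positive, i.e. a quantum channel.

```python
import numpy as np
from scipy.linalg import expm

d = 2
I = np.eye(d)
H = np.diag([1.0, -1.0])                       # arbitrary Hamiltonian, hbar = 1
Lj = np.array([[0.0, 1.0], [0.0, 0.0]])        # amplitude-damping jump operator
gamma = 0.3

# row-major vectorisation: vec(A rho B) = (A kron B^T) vec(rho)
def lmul(A): return np.kron(A, I)
def rmul(A): return np.kron(I, A.T)

LdL = Lj.conj().T @ Lj
gen = -1j * (lmul(H) - rmul(H)) + gamma * (
    2 * np.kron(Lj, Lj.conj()) - lmul(LdL) - rmul(LdL))

def choi(E):
    # Choi matrix J = sum_ij E(|i><j|) kron |i><j|
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = (E @ np.eye(d * d)[:, i * d + j]).reshape(d, d)
            J += np.kron(Eij, np.outer(I[i], I[j]))
    return J

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
for t in (0.0, 0.5, 2.0):
    Et = expm(t * gen)
    out = (Et @ rho.reshape(-1)).reshape(d, d)
    assert np.isclose(np.trace(out).real, 1.0)           # trace preserving
    assert np.linalg.eigvalsh(choi(Et)).min() > -1e-9    # completely positive
print("all maps are channels")
```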
It is important to bear in mind, however, that linearity, unitarity, complete positivity, and trace preservation can be derived from local applicability _before_ one assumes that the dynamics form a semigroup. A consequence is that if one does not wish to assume that time is continuous - if one wishes to speak only of transformations and not of evolutions - then the reconstructions of these attributes are still valid.
## 4 Comparison with the Argument of Gisin
In [38], Gisin addressed the question of whether quantum dynamics are a consequence of relativity. Defining the Schrodinger evolution as the rule that states evolve as
\[\left|\psi(t)\right\rangle=e^{-iHt/\hbar}\left|\psi(0)\right\rangle \tag{6}\]
for some Hamiltonian \(H\), and assuming the projection postulate, the abstract of [38] includes the following statement:
_the Schrodinger evolution is the only quantum evolution that is deterministic and compatible with relativity._
While definitions of each individual term are not explicitly given, it seems reasonable to infer that 'quantum evolution' refers to something like a family of functions on quantum states parameterized by time9; 'deterministic' refers to a mapping from pure states to pure states10; and 'compatible with relativity' means that the evolution does not lead to superluminal signaling.
Footnote 10: See the bottom of p.364, and p.366 where it is suggested that any nondeterministic evolution would map a pure state to a mixed state.
When we try to explicitly bring the argument to its full conclusion we will find that we need some additional assumption(s). Before getting there, let us first recap the argument as given in [38], which elegantly recovers convex linearity on density matrices from compatibility with relativity.
### A review of the argument
In order to derive the Schrodinger evolution, [38] sets out to establish that the function \(g\) on pure density operators representing the dynamics is _convex linear_, in the sense that
\[\sum_{x}p_{x}\left|\psi^{(x)}\right\rangle\left\langle\psi^{(x)}\right|=\sum_{x}q_{x}\left|\phi^{(x)}\right\rangle\left\langle\phi^{(x)}\right|\implies\sum_{x}p_{x}\,g\big{(}\left|\psi^{(x)}\right\rangle\left\langle\psi^{(x)}\right|\big{)}=\sum_{x}q_{x}\,g\big{(}\left|\phi^{(x)}\right\rangle\left\langle\phi^{(x)}\right|\big{)} \tag{7}\]
for any two probabilistic ensembles of pure states \(\{p_{x},\psi^{(x)}\}\) and \(\{q_{x},\phi^{(x)}\}\).
Let us call a pair of ensembles satisfying \(\sum_{x}p_{x}\psi^{(x)}=\sum_{x}q_{x}\phi^{(x)}\)_indistinguishable_. Suppose that \(g\) is not convex linear, i.e. does not satisfy (7). Then there exists an indistinguishable pair of ensembles \((e_{1},e_{2})\) that are mapped to a distinguishable pair \((e_{1}^{\prime},e_{2}^{\prime})\). But if \((e_{1},e_{2})\) are indistinguishable, then a useful lemma from [38] states that, given the usual treatment of states and measurements in pure quantum theory, one can devise a situation in which a spacelike separated agent can remotely choose whether the system of interest is prepared in \(e_{1}\) or \(e_{2}\) by choosing what measurement to perform on a different system.11 If \(g\) is then performed on the system of interest, then this agent effectively chooses between \(e_{1}^{\prime}\) or \(e_{2}^{\prime}\), meaning she can send signals to an agent with access to the system of interest. Consequently, the usual treatment of states and measurements in pure quantum theory implies that any convex nonlinear dynamical function leads to superluminal signaling.
Footnote 11: Concretely: Alice and Bob share an appropriate entangled state and Alice chooses whether to perform the measurement \(m_{1}\) or \(m_{2}\). The usual collapse rule implies that Bob’s system is prepared in \(e_{j}\) when Alice chooses \(m_{j}\).
After this derivation of convex linearity, no further argument is given (or cited) as to why the Schrodinger evolution (6) is the only quantum evolution that is deterministic and compatible with relativity. In the next section we will note that continuous time parameters are necessary to reach such a conclusion. It is unclear to us whether continuity is sufficient, however, without the additional assistance of reversibility.
### Non-linear functions with no-superluminal signalling
Consider the following function, where \(\left|0\right\rangle\left\langle 0\right|\) is some fixed state.
\[g(\left|\psi\right\rangle\left\langle\psi\right|)=\left|0\right\rangle\left\langle 0\right| \tag{8}\]
Clearly, \(g\) does not lead to signaling in the scenario from [38], and is deterministic in the sense of mapping pure states to pure states. It follows from the argument in [38] that it is convex linear, as is easily verified.
Nevertheless, \(g\) does not arise from any Schrodinger evolution of the form (6). Indeed, \(g\) does not even act linearly on the Hilbert space: there is no linear operator \(L\) such that \(g(\ket{\psi}\bra{\psi})=L\ket{\psi}\bra{\psi}L^{\dagger}\) (see Appendix C). Convex linearity does not imply linearity with respect to superpositions; as a result, the determinism and no-signaling assumptions of [38] do not get us all the way to unitary linear evolution without additional assumptions. As outlined in the introduction, this reliance on additional assumptions means that discrete time evolutions cannot be argued to be linear using the approach of [38].
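The claim deferred to Appendix C can also be seen from a short Choi-rank computation. The sketch below (ours, not the paper's) builds the Choi matrix of the linear extension of \(g\), which turns out to be \(|0\rangle\langle 0|\otimes I\), of rank \(d\); a map of the form \(\rho\mapsto L\rho L^{\dagger}\) has Choi rank at most one, so no single operator \(L\) can reproduce \(g\).

```python
import numpy as np

d = 3
J = np.zeros((d * d, d * d))
for i in range(d):
    # linear extension of the constant map: g(|i><j|) = delta_ij |0><0|
    g_ii = np.zeros((d, d))
    g_ii[0, 0] = 1.0
    J += np.kron(g_ii, np.outer(np.eye(d)[i], np.eye(d)[i]))

print(np.linalg.matrix_rank(J))   # prints d (= 3), not 1
```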
With this in mind, and as noted in [66], one way to complete the argument is to invoke a theorem from [67]. Theorem 3.4 of [67] states that any dynamical _group_ (a semigroup in which each element has an inverse) of positive linear maps on density operators is generated by the Schrodinger evolution. Since any convex linear map on projectors defines a positive linear map on density operators, combining this with the argument from [38] leads to the following theorem.
Determinism + Compatibility with relativity + Dynamical groups \(\implies\) Schrodinger evolution
Leaning on the additional mathematical structure of a dynamical group brings three further conceptual commitments into the argument: the existence of a continuous time parameter, the reversibility of transformations, and Markovianity.
Perhaps the reversibility assumption could be dropped: the theorem above might still hold if we replace 'groups' with 'semigroups'. One could then even say that the quote from the start of this section is true, as long as forming a dynamical semigroup is thought of as a defining property of an evolution. This would be an interesting and valuable result which would clarify the role of signalling constraints in deriving linearity, well worth either proving or disproving in future work.
If a proof can be given without reversibility, one still has to assume that one has a continuous time parameter, and, moreover, a dynamical semigroup, before one can derive linearity with respect to superpositions and unitarity12. A convenient feature of local applicability is that it gives linear and unitary transformations even when one does not assume a dynamical semigroup or any kind of time parameter. One only needs to invoke semigroups if one wants to further argue that those unitary transformations arise from the Schrodinger equation. As a result, the local applicability approach is more suitable for any discrete-time setting, and is more likely to be generalizable to quantum gravitational contexts.
Footnote 12: Note that this also means it is unclear how a Gisin-style argument could be used to derive the linearity of the (isometric) transformations between distinct Hilbert spaces, meaning there is a whole class of transformations of pure states whose linearity is so far only derivable from local applicability
### On the Argument for Complete Positivity
Another nice feature of the local applicability axiom is that it can be applied without tweaking to recover the linear unitary maps in the pure setting and the quantum channels in the mixed setting. In [39], it is suggested that a Gisin-style argument, based on no superluminal signalling for dynamics of density matrices, could be used to recover the quantum channels. One might wonder, then, whether a Gisin-style argument assisted with dynamical group conditions could be used in a similar way. In trying to do so, we find two key challenges.
First, the argument goes as far as to show that any function from pure states to density matrices which satisfies no superluminal signalling can be extended to a convex linear map on the density matrices. Positivity is clear by construction; complete positivity, however, is less clear. Indeed, from this point onwards, complete positivity is axiomatised in the usual way, as an additional property of positive linear maps. It is unclear how to phrase such a requirement in a theory-independent way without prior knowledge of linearity.
Second, without relaxing the dynamical group requirement of the pure setting to a dynamical semigroup requirement, the reversibility used to complete the argument in the pure setting is too strong in the mixed setting, ruling out all quantum channels which are not isometric.
### Summary of Comparison
The approach of [38] provides an enlightening and efficient derivation of convex linearity; something more needs to be said, however, before quantum linearity and unitarity are recovered. That 'something more' at the very least prevents the argument from applying to discrete time evolution. Additionally, it does not seem possible to combine the arguments of [38, 39] in such a way as to give a simple one-size-fits-all axiom for transformations.
This highlights a couple of nice features of local applicability. First, it recovers linearity and unitarity without background assumptions of reversibility or any particular notion of time. Second, it is a working axiom for quantum transformations that applies, without tweaking, to both the pure and mixed settings.
## 5 Conclusion
On a standard approach to quantum theory, one starts by making the assumption that the dynamics of an isolated system can be represented by a linear and unitary operator \(U\) on its Hilbert space. Then, one further assumes that the action of \(U\) in the presence of an environment is given by \(U\otimes I\). Once both these assumptions are made, it follows that \(U\) is locally applicable.
But the argument can be reversed: one can _assume_ local applicability and then _derive_ linearity, unitarity, and the role of \(U\otimes I\) (or, in the mixed case, convex linearity, complete positivity, and \(\mathcal{E}\otimes\mathcal{I}\)). And there is every reason to prefer it this way round, given the appealing nature of local applicability. This flipping of the script not only makes for a more intuitive formulation of quantum theory; it also throws light on its connection with the other pillar of modern physics, relativity theory.
In connecting locality with linearity, the results of this paper also connect locality with core foundational issues in quantum theory. In particular, linearity with respect to superpositions is responsible for the Wigner's friend paradox [68] and the associated problems of quantum measurements. Theorem 1 thus reveals a sense in which locality leads to a measurement problem. This complements insights from recent no-go theorems [69, 70, 71, 72, 73] stating that the combination of quantum theory and relativity (and more generally, Bell nonlocality, the preservation of information, and dynamical locality [18]) leads to a particularly acute sort of measurement problem, in which the observed outcomes and settings of measurements fail to be absolute.
On the mathematical side, some connection appears to be forming between the foundations of physics and the Yoneda lemma, a fundamental theorem of abstract algebra and category theory [74]. This lemma states that families of functions (suitably well-behaved and called natural transformations) will always inherit the defining structural features of the mathematical theories they act on. In this paper, we have seen that whichever form of linearity was present in a state space (be it the superposition rules of Hilbert spaces or the mixing rules of operator spaces) was inherited by families of functions (suitably well-behaved and called locally applicable transformations). This suggests the possibility of a version of the Yoneda lemma specially adapted for physical theories.
In order to derive our results, we were led to introduce the notion of a _state-measurement theory._ In the future, this very abstract framework could be used much more generally to study what dynamics are imposed on a theory either by local applicability or by other assumptions. Most obviously, one could attempt to extend our theorems to infinite-dimensional quantum state-measurement theories, or to state-measurement theories associated with other operational probabilistic theories [43]. One could also attempt an extension to theories with modified Born rules [46, 47, 48, 60, 75, 76]. Finally, one could attempt to derive features of a given theory besides its dynamics, including the tensor product rule, the direct sum, and even the state-measurement rule itself. If local applicability is not enough for all of this, there is a natural question: what other assumption(s) will be?
It is expected in quantum geometries that local subsystems will be represented with more subtle mathematical tools than simple tensor products, such as restrictions [77, 78] and non-factor sub-algebras [51, 57, 79, 80, 81]. One could attempt to derive the dynamics in these contexts by considering something like local applicability, but defined in terms of _direct sum_ structures rather than (just) in terms of tensor products. Given the problems associated with time in quantum gravity (including its possible discreteness, the lack of a global time parameter, and superpositions of time), it is encouraging that our derivation of unitarity did not rely on having any time parameter (continuous or otherwise).
Finally, in very broad terms, this is not the first context in which local applicability has been used to generalise and reconstruct some standard notion of transformation: [4, 5] did something similar in the more specialised study of quantum supermaps. This leads us to wonder in which other settings this simple physical principle can be formalised and used to reconstruct and generalise accepted notions of transformation in science and mathematics.
## Acknowledgements
We are pleased to thank Kai-isaak Ellers and Jonathan Barrett for useful conversations, and the organisers of the QISS Spring School 2023 for a thoroughly enjoyable week that stimulated the development of this paper. This work was funded by the Engineering and Physical Sciences Research Council [grant number EP/W524335/1] and [grant number EP-T517823-1].
|
2309.14013 | The Academic Midas Touch: An Unconventional Scientometric for Evaluating
Academic Excellence | The recognition of academic excellence is fundamental to the scientific and
academic endeavor. In particular, academic scientometrics that are able to
computationally capture academic excellence are of great interest. In this
work, we propose and investigate an unconventional scientometric termed the
Academic Midas Touch (AMT) that refers to a researcher's tendency to produce
outstanding publications (i.e., golden publications). Using an extensive
dataset of mathematicians, both award-winning and otherwise, we show that the
AMT scientometric is a valid and arguably valuable scientometric for the
distinction of academic excellence. | Ariel Rosenfled, Ariel Alexi, Liel Mushiev, Teddy Lazebnik | 2023-09-25T10:27:00Z | http://arxiv.org/abs/2309.14013v1 | # The Academic Midas Touch: An Unconventional Scientometric for Evaluating Academic Excellence
###### Abstract
The recognition of academic excellence is fundamental to the scientific and academic endeavor. In particular, academic scientometrics that are able to computationally capture academic excellence are of great interest. In this work, we propose and investigate an unconventional scientometric termed the "Academic Midas Touch" (AMT) that refers to a researcher's tendency to produce outstanding publications (i.e., "golden" publications). Using an extensive dataset of mathematicians, both award-winning and otherwise, we show that the AMT scientometric is a valid and arguably valuable scientometric for the distinction of academic excellence.
**Keywords:** Scientometrics, Academic Excellence, Researcher-level Evaluation, Unconventional Scientometrics
## 1 Introduction
Evaluating and assessing researchers' academic performance is crucial to maintaining the quality and integrity of research, promoting scientific progress, allocating resources effectively, and ensuring accountability in the research community [1, 2, 3, 4, 5, 6]. A central issue within this realm is the identification and recognition of academic excellence [7, 8]. Specifically, the acknowledgment of outstanding researchers who perform excellent science. This, in turn, can reinforce high-quality scientific progress and serve as a catalyst for the pursuit of academic distinction, visibility, and impact.
Due to its amorphous nature, the term "academic excellence" is often interpreted in various ways which need not necessarily align. Indeed, over a hundred different researcher-level performance indicators, which are typically at the basis of academic excellence identification, have been proposed and evaluated in prior literature [9, 10, 11, 12]. Amongst these measures, citation-based scientometrics seems to be the most widely accepted quantitative measure for assessing scientific excellence [13, 14, 15]. These measures range from simple ones such as the number of publications at the top 5% most frequently cited publications in the field [16], the number of publications in highly-cited journals [17], the number of publications which received at least 10 citations [18], a factoring of both citations and the number of publications such as the various versions of the h-index [19, 20, 21], to more sophisticated ones such as the g-index [22], the scientist impact factor [23], and the u-index [24], to name a few. Unfortunately, as most researchers tend to agree, defining "excellence" in a standardized and consistent way presents serious difficulties [25]. Specifically, each researcher-level indicator reflects just one particular dimension of the general concept of research performance or excellence [26, 27]. Consequently, the use of only a single indicator in order to gauge the overall academic performance of a researcher may provide an incomplete picture, and thus a combination of the various types of indicators is needed in order to offer policymakers and evaluators valid and useful assessments [28, 29, 30, 31].
In this work, we propose and investigate an _unconventional_ scientometric for evaluating research performance that focuses on a unique aspect of research excellence. We propose a novel indicator we term the "Academic Midas Touch" (AMT), which draws inspiration from the famous tale of King Midas from Greek mythology. Specifically, we adopt the most well-known aspect of the tale, the "Midas touch" (everything King Midas touched turned to gold), and adapt it to the researcher evaluation context. Intuitively, AMT refers to a researcher's tendency to produce outstanding publications (i.e., "golden" publications). This underlying motivation provides a distinctive and yet-to-be-explored perspective on academic excellence.
Following the formal introduction of the AMT scientometric provided next, we provide a thorough investigation of AMT's properties, focusing on the field of Mathematics. Specifically, we show that it is guaranteed to monotonically non-decrease in the number of citations attracted by one's body of work and, using a large sample of mathematicians (both award-winning ones and otherwise), we situate and demonstrate AMT's potential advantages compared to other popular scientometrics.
The article is organized as follows: Section 2 introduces the definition of the AMT scientometric. Then, in Section 3, we present the investigation of the AMT. Finally, Section 4 interprets our results in context, discusses the main limitations of the proposed metric, and highlights potential future work directions.
## 2 The Academic Midas Touch
In order to define AMT, one first needs to specify what counts as "gold" in our context. Since the definition of a publication's distinction (or even impact) is, by itself, a point of contention [32], we propose the use of a function \(\mathcal{G}(\cdot)\) to measure the "goldness" associated with an individual publication and calculate its average across a researcher's body of work. Formally, let \(s\) be a scholar and let \(p\in P\) denote a publication within her body of work. A scholar's AMT is defined as follows:
\[AMT(s):=\frac{1}{|P|}\sum_{p\in P}\mathcal{G}(p). \tag{1}\]
Note that \(\mathcal{G}(\cdot)\) may be defined in many different ways depending on the specific characteristics one wishes to consider when determining the "goldness" of a given publication. For example, it may consider the scientometrics associated with the publication outlet [33], the number and positioning of authors [34], altmetrics [35, 36], etc. Here, we adopt a simple yet effective citation-based approach where \(\mathcal{G}(\cdot)\) is defined to be a predictor of the following form: a publication is deemed "gold" (i.e., \(\mathcal{G}(p)=1\)) if and only if it has accumulated at least \(y\) citations over the first \(x\) years since its publication, and \(\mathcal{G}(p)=0\) otherwise. Formally, we represent each publication as a series \(p:=c_{0},c_{1},\ldots\) where \(c_{i}\) is the number of citations \(p\) has accumulated in the first \(i\) years since its publication. Integrating this definition into Eq. 1 brings about the following formula, which essentially measures the rate of "golden publications" within a scholar's body of work:
\[AMT(s):=\frac{1}{|P|}\sum_{p\in P}\begin{cases}1,&c_{x}\geq y\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
where \(x\) and \(y\) are hyper-parameters denoting the "time threshold" and "citations threshold", respectively. For an empty set of publications (i.e., \(|P|=0\)), we define \(AMT(s):=0\).
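Eq. 2 translates directly into code. The following short Python function is our transcription (not the authors' released code); how publications younger than \(x\) years should be treated is our assumption, as the text leaves that case unspecified.

```python
def amt(publications, x=3, y=15):
    """AMT score of a scholar. `publications` is a list of citation series
    [c_0, c_1, ...], where c_i counts the citations accumulated in the
    first i years after publication. Papers younger than x years are
    counted as not (yet) golden here; the paper does not specify this."""
    if not publications:
        return 0.0                       # AMT(s) := 0 when |P| = 0
    golden = sum(1 for c in publications if len(c) > x and c[x] >= y)
    return golden / len(publications)

# three papers with c_3 = 20, 4 and 16 citations: two are golden
print(amt([[0, 2, 9, 20], [0, 1, 2, 4], [1, 5, 11, 16]]))   # 0.666...
```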
## 3 Investigation
We start by establishing the theoretical soundness of the AMT by proving that it is monotonic in the number of citations. Then, we empirically explore the properties of the AMT using extensive data from the field of Mathematics.
### Theoretical Soundness
We first establish that AMT satisfies the theoretical property of _monotonicity_ as defined below. For convenience, we denote \(c:=[c_{1},\ldots,c_{|P|}]\) as the vector of citation counts (at the time threshold \(x\)) associated with a scholar's body of work, i.e., the citation vector. Recall that a scientometric \(f\) is considered _monotonic_[37] if, for any two citation vectors \(a,b\in\mathbb{R}^{n}\),
\[a_{i}\leq b_{i}\ \text{for all}\ i\in\{1,\ldots,n\}\ \implies\ f(a)\leq f(b).\]
**Lemma 1**.: _AMT is Monotonic_
Proof.: Let \(a,b\in\mathbb{R}^{n}\) be citation vectors with \(a_{i}\leq b_{i}\) for every \(i\). Consider any publication \(i\) that is "golden" under \(a\), i.e., \(a_{i}\geq y\). By the transitivity of \(\leq\), we have \(b_{i}\geq a_{i}\geq y\), so the same publication is also "golden" under \(b\); hence each summand in Eq. 2 can only grow when moving from \(a\) to \(b\). Summing over the \(n\) publications and dividing by \(n\) yields \(AMT(a)\leq AMT(b)\). Consequently, AMT is monotonic in the number of citations.
### Empirical Evaluation
In order to practically explore the properties of AMT (as defined in Eq. 2), we conducted a four-phased empirical evaluation based on a sample of 8,468 mathematicians. First, we discuss the data curation and processing procedures along with descriptive statistics of the sample. Second, we explore AMT's parameter sensitivity and perform parameter tuning to determine appropriate parameters for the field of Mathematics. Third, we distinguish AMT from other popular scientometrics (i.e., H-index, i10-index, and citation count) through pair-wise correlation testing. Finally, we examine the AMT scores of 100 highly acclaimed mathematicians and show that it is effective in distinguishing award-winning mathematicians from other mathematicians. Specifically, we show that AMT can serve as a useful indicator for statistically distinguishing between these two groups, whereas other popular scientometrics (i.e., H-index, i10-index, and citation count) do not. Fig. 1 presents a schematic view of the empirical evaluation process.
#### 3.2.1 Data curation and processing
For our empirical investigation, we used three popular academic datasets: DBLP1, Google Scholar2, and Scopus3. First, all researchers indexed in DBLP were extracted as of June 2023. Note that DBLP is a bibliometric dataset that focuses on the computational domain (i.e., Mathematics, Computer Science, and Engineering) [38, 39, 40, 41] and is considered by many as the state of the art in coverage and accuracy [42]. In order to single out researchers who primarily publish in the field of Mathematics, i.e., mathematicians, we rely on Scopus's journal subject classification system [43, 44]. Specifically, a publication is considered to be "mathematical" if the journal it was published in is classified as a mathematical one according to the Web of Science (WOS) index [45]. For our sample, we consider researchers who published at least five journal articles, of which more than 50% are classified as mathematical, over a time span of at least five years. Overall, 8,468 mathematicians were selected for consideration. For each of these mathematicians, we extracted their publication records using Google Scholar along with the automatically calculated H-index, i10-index, and citation count. Google Scholar is considered to have the best publication coverage compared to other indexes [46], and thus it was chosen for this purpose.
Figure 1: A schematic view of the empirical evaluation process.
For completeness, we further identified each researcher's main affiliation (based on each Google Scholar profile) and classified it as either Europe, North America, Asia, Africa, Oceania, or Other/Unknown. Overall, 3988 mathematicians (47.1%) are affiliated with European-based universities, 2439 (28.8%) are affiliated with North American-based universities, 1118 (13.2%) are affiliated with Asian-based universities, 398 (4.7%) are affiliated with African-based universities, 212 (2.5%) are affiliated with Oceania-based universities, and 313 (3.7%) are affiliated with other or unknown universities. Furthermore, we use the gender identification model proposed by [47], which was trained on around 100 million pairs of names and gender associations, with a confidence threshold of 95%, to assign each researcher an estimated gender. Our sample consists of 7460 (88.1%) male, 899 (10.6%) female, and 110 (1.3%) unknown mathematicians. The academic age (i.e., years since first publication) of the mathematicians in our sample ranges from 3 to 38 years, with mean and standard deviation of \(24.18\pm 4.53\). On average, mathematicians in our sample published \(52.47\pm 23.09\) publications, with an average of \(2.24\pm 2.91\) publications each year. Overall, 486,622 publications were considered.
#### 3.2.2 Sensitivity and Parameter Tuning
Eq. (2) consists of two hyper-parameters, \(x\) and \(y\). In order to understand the sensitivity of the formula and identify a sensible tuning for the hyper-parameters, we explore several settings and report the average AMT score across the mathematicians in our sample. As can be observed in Fig. 2, the higher the value set for \(x\) and the lower the value set for \(y\), the higher the average AMT score in the sample. This result is very natural, as the number of citations accumulated by a publication is monotonically non-decreasing over time. Using a least mean squares approach [48], we fit the results with a linear function, obtaining \(AMT=0.0438+0.0659x-0.0096y\) with a solid coefficient of determination of \(R^{2}=0.837\). That is, as the time threshold \(x\) increases and as the citation threshold \(y\) decreases, the average AMT score increases in an (almost) linear fashion.
Figure 2: The average AMT score in our sample as a function of \(x\) (time threshold) and \(y\) (citations threshold).
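The plane fit itself is a standard least-squares computation. The sketch below (ours) reproduces the procedure on a synthetic grid generated from the reported coefficients plus noise, since the underlying averages of Fig. 2 are not tabulated in the text; the grid ranges for `xs` and `ys` are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.arange(1, 6), np.arange(5, 31, 5))
xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
# synthetic averages built from the reported plane, for illustration only
amts = 0.0438 + 0.0659 * xs - 0.0096 * ys + rng.normal(0, 0.01, xs.size)

A = np.column_stack([np.ones_like(xs), xs, ys])
coef, *_ = np.linalg.lstsq(A, amts, rcond=None)
r2 = 1 - ((amts - A @ coef) ** 2).sum() / ((amts - amts.mean()) ** 2).sum()
print(coef, r2)   # coefficients close to (0.0438, 0.0659, -0.0096)
```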
For our subsequent evaluation, we use \(x=3\) and \(y=15\). These values were chosen for the following reasons. First, 10 citations are considered by many to be a decent threshold indicating that a publication has been accepted by the academic community [49], as also highlighted by the i10-index [50, 51]; we therefore chose to set a standard 50% higher for what is considered a "golden" publication. Second, three years are, arguably, enough exposure time to allow a publication to be cited in other scholars' subsequent publications [52]. Third, using these parameters, on average, 10% of a researcher's publications are considered "golden", providing a reasonable balance between ordinary (i.e., non-golden) and extra-ordinary (i.e., golden) publications. Finally, using these parameters, the AMT score distribution in our data does not statistically differ from a Normal distribution according to the Shapiro-Wilk normality test [53], at \(p>0.1\).
#### 3.2.3 Comparison with Popular Scientometrics
As discussed before, there are multiple scientometrics for measuring the academic impact of a researcher's body of work, with the most commonly used ones being the H-index, i10-index, and citation count [54, 55, 56, 57, 58, 59]. These scientometrics are available as part of a researcher's Google Scholar profile and were extracted as part of our data curation phase. Accordingly, Table 1 reports the pair-wise Pearson correlation matrix between AMT, H-index, i10-index, and citation count.
As can be observed from Table 1, while the H-index, i10-index, and total number of citations are highly correlated with one another (correlations range between 0.62 and 0.81), the AMT scores only moderately correlate with the other examined metrics (correlations range between 0.34 and 0.58). Presumably, this result is an indication that the AMT scientometric captures a slightly different notion of academic performance which is similar to, yet not entirely aligned with, other popular metrics.
#### 3.2.4 Academic Excellence
Next, we examine the AMT scores of highly acclaimed mathematicians. To that end, we sample 100 award-winning mathematicians who received at least one of the following distinctions: Fields Medal4, Abel Prize5, or Wolf Prize6. The mathematicians selected for this analysis, which we refer to as the award-winning sample, were chosen at random subject to their having a Google Scholar profile. A full list is provided in the Appendix.
Footnote 4: According to the international mathematical union, the Fields Medal is awarded every four years to recognize outstanding mathematical achievement for existing work and for the promise of future achievement. It is considered the most prestige award one can obtain for a mathematical work.
Footnote 5: The Abel Prize recognizes pioneering scientific achievements in mathematics, established by the Norwegian Parliament (The Storting) in 2002.
Footnote 6: The Wolf Prize has been provided since 1978 to outstanding scientists and artists from around the world by the Wolf Foundation.
In order to examine whether AMT is a useful scientometric for distinguishing award-winning mathematicians from others, we examine the AMT distribution of the award-winning sample and compare it to an age-controlled sample of non-award-winning mathematicians (age-controlled sample, for short). Specifically, we consider a randomly sampled subset of size \(n=100\) from the entire sample which does not statistically differ from the award-winners in terms of academic age (at \(p=0.192\)) and verify that it has an empty intersection with the award-winning sample. Fig. 3 depicts the AMT score distributions of the award-winning sample (in green) and the age-controlled sample (in blue). As can be observed from the figure, the award-winning sample demonstrates a very different AMT score distribution, centered around much higher AMT scores than the age-controlled sample.
\begin{table}
\begin{tabular}{c|c c c c} & **AMT** & **H-index** & **i10-index** & **Citations** \\ \hline
**AMT** & 1 & 0.44 (\(<0.001\)) & 0.34 (\(<0.001\)) & 0.58 (\(<0.01\)) \\
**H-index** & 0.44 (\(<0.001\)) & 1 & 0.81 (\(<0.01\)) & 0.62 (\(<0.001\)) \\
**i10-index** & 0.34 (\(<0.001\)) & 0.81 (\(<0.01\)) & 1 & 0.75 (\(<0.001\)) \\
**Citations** & 0.58 (\(<0.01\)) & 0.62 (\(<0.001\)) & 0.75 (\(<0.001\)) & 1 \\ \end{tabular}
\end{table}
Table 1: Pair-wise Pearson correlations between the examined scientometrics. The results are shown as the correlation coefficient and the p-value in brackets.
Specifically, a one-tail T-test reveals that the two samples are, indeed, statistically different, with the award-winning mathematicians demonstrating higher average AMT scores of \(0.45\pm 0.06\) compared to the age-controlled sample's \(0.38\pm 0.04\), at \(p=0.038\). The other examined scientometrics (H-index, i10-index, and citation count) do not provide a similarly suitable differentiation between the two groups using a one-tail T-test, with \(p=0.057\), \(p=0.092\), and \(p=0.085\), respectively. The results are summarized in Table 2.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Scientometric** & AMT & H-index & i10-index & Citation count \\ \hline \hline
**Relative difference** & 18.42\% & 13.36\% & 4.12\% & 5.35\% \\ \(p\)**-value** & 0.038 & 0.057 & 0.092 & 0.085 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The relative difference (first row) and the statistical significance of the difference (second row) between the mean of the award-winning sample and the age-controlled sample using the AMT and other benchmark scientometrics (columns).
Figure 3: The AMT score distributions of the award-winning sample (in green) and the age-controlled sample (in blue).
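For concreteness, the following sketch shows how such a one-sided comparison can be run with scipy; the `award` and `control` arrays are synthetic stand-ins drawn from the reported means and standard deviations (the actual AMT scores are not published with the text), so the printed statistic and p-value will not reproduce Table 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical stand-ins for the two samples of 100 AMT scores
award = rng.normal(0.45, 0.06, 100)
control = rng.normal(0.38, 0.04, 100)

# one-tailed test of H1: mean(award) > mean(control)
t, p = stats.ttest_ind(award, control, alternative='greater')
print(t, p)
```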
## 4 Discussion and Conclusion
In this study, we proposed and investigated an unconventional scientometric for evaluating academic excellence termed the Academic Midas Touch (AMT). Intuitively, AMT measures the propensity of a scholar to produce exceptional publications. In our simple yet effective instantiation of this intuition, we measure the rate of publications one has made that have attracted considerable academic attention (i.e., at least 15 citations) over a short time span since their publication (i.e., three years), as specified by two tunable hyper-parameters.
From a theoretical perspective, our investigation shows that the AMT is monotonic in the number of citations, thus guaranteeing that it is non-decreasing in the number of citations one's body of work attracts over time. From an empirical perspective, using extensive data from the field of Mathematics, we show that AMT correlates with, but does not fully align with, other popular scientometrics, while comparing favorably to them in distinguishing highly acclaimed mathematicians from others. Taken jointly, these results suggest that AMT is a valid and arguably valuable scientometric for the distinction of academic excellence. Specifically, we believe that AMT's unconventional perspective can help elucidate the multi-faceted notion of academic excellence and can be instrumental, as a complementary scientometric, for recognizing excellent researchers.
It is important to note that this work has several limitations that offer fruitful opportunities for future work. First, our empirical evaluation focuses on a sample of mathematicians identified through their journal publications. Since the delineation of any scientific field is unclear and journals' boundaries need not necessarily align with those of any given field of study, journal subject classifications may not align with one's expectations [60]. In other words, various other definitions of which scientists should be considered as "mathematicians" could be applied, leading to potentially different results. Second, as different scientific fields may have irregular publication patterns, citation practices, and evaluation criteria, our results may not generalize well outside the field of Mathematics. As such, the exploration of AMT in additional scientific fields seems merited. Third, our mathematical definition of AMT takes a simple functional form that has two hyper-parameters. Alternative formulations, as well as different parameter setups, could lead to other outcomes and, potentially, bring about other conclusions. It is thus important to carefully consider the exact formulation and parameter setup given one's needs, expectations, and use context. In future work, we plan to study parameter-free alternatives to the proposed instantiation which will follow a similar underlying rationale. Last, it is important to note that, similar to other researcher-level performance scientometrics, AMT does not fully capture the multidimensional nature of academic conduct such as collaborations, mentoring, and societal impact which are, by themselves, highly complex and multifaceted. As such, it is intended as a complementary scientometric which is to be considered in tandem with others.
## Declarations
### Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
### Conflicts of interest/Competing interests
None.
### Code and Data availability
The code and data that have been used in this study are available upon request from the corresponding author.
### Acknowledgement
The last author wishes to thank Avner Friedman for inspiring the question at the base of this work.
## Author Contribution
* Writing - Original Draft, Writing - Review & Editing, Supervision.
* Writing - Review & Editing.
* Writing - Review & Editing.
* Writing - Original Draft, Supervision, Project administration.
|
2309.04200 | Cyclic Operator Precedence Grammars for Parallel Parsing | Operator precedence languages (OPL) enjoy the local parsability property,
which essentially means that a code fragment enclosed within a pair of markers
-- playing the role of parentheses -- can be compiled with no knowledge of its
external context. Such a property has been exploited to build parallel
compilers for languages formalized as OPLs. It has been observed, however, that
when the syntax trees of the sentences have a linear substructure, its parsing
must necessarily proceed sequentially making it impossible to split such a
subtree into chunks to be processed in parallel. Such an inconvenience is due
to the fact that so far much literature on OPLs has assumed the hypothesis that
equality precedence relation cannot be cyclic. This hypothesis was motivated by
the need to keep the mathematical notation as simple as possible.
We present an enriched version of operator precedence grammars, called
cyclic, that allows to use a simplified version of regular expressions in the
right hand sides of grammar's rules; for this class of operator precedence
grammars the acyclicity hypothesis of the equality precedence relation is no
more needed to guarantee the algebraic properties of the generated languages.
The expressive power of the cyclic grammars is now fully equivalent to that of
other formalisms defining OPLs such as operator precedence automata, monadic
second order logic and operator precedence expressions. As a result cyclic
operator precedence grammars now produce also unranked syntax trees and
sentences with flat unbounded substructures that can be naturally partitioned
into chunks suitable for parallel parsing. | Michele Chiari, Dino Mandrioli, Matteo Pradella | 2023-09-08T08:25:03Z | http://arxiv.org/abs/2309.04200v1 | # Cyclic Operator Precedence Grammars for Parallel Parsing
###### Abstract
Operator precedence languages (OPL) enjoy the local parsability property, which essentially means that a code fragment enclosed within a pair of markers --playing the role of parentheses-- can be compiled with no knowledge of its external context. Such a property has been exploited to build parallel compilers for languages formalized as OPLs. It has been observed, however, that when the syntax trees of the sentences have a linear substructure, its parsing must necessarily proceed sequentially making it impossible to split such a subtree into chunks to be processed in parallel. Such an inconvenience is due to the fact that so far much literature on OPLs has assumed the hypothesis that equality precedence relation cannot be cyclic. This hypothesis was motivated by the need to keep the mathematical notation as simple as possible.
We present an enriched version of operator precedence grammars, called cyclic, that allows the use of a simplified version of regular expressions in the right hand sides of grammar's rules; for this class of operator precedence grammars the acyclicity hypothesis on the equality precedence relation is no longer needed to guarantee the algebraic properties of the generated languages. The expressive power of the cyclic grammars is now fully equivalent to that of other formalisms defining OPLs, such as operator precedence automata, monadic second order logic and operator precedence expressions. As a result, cyclic operator precedence grammars now also produce unranked syntax trees and sentences with flat unbounded substructures that can be naturally partitioned into chunks suitable for parallel parsing.
Keywords:Operator Precedence Languages Cyclic Precedence Relations Parallel Parsing
## 1 Introduction
_Operator precedence languages (OPL)_ are a "historical" family of languages invented by R. Floyd [8] to support fast deterministic parsing. Together with their _operator precedence grammars (OPG)_, they are still used within modern
compilers to parse expressions with operators ranked by priority. The key feature that makes them amenable to efficient parsing and compilation is that the syntax tree of a sentence is determined exclusively by three binary precedence relations over the terminal alphabet, which are easily pre-computed from the grammar productions. For readers unacquainted with OPLs, we preview an example: the arithmetic sentence \(a+b\times c\) does not make its natural structure \((a+(b\times c))\) manifest, but the latter is implied by the fact that the plus operator yields precedence to the times.
Early theoretical investigation [6], originally motivated by grammar inference goals, realized that, thanks to the tree structure assigned to strings by the precedence relations, many closure properties of regular languages hold for OPLs too; only after a long intermission did renewed research [5] prove further algebraic properties of OPLs. Then, in [13] the _Operator Precedence automata (OPA)_ were defined as a formalism, paired with OPGs in the same way as pushdown automata are paired with context-free grammars (CFG), to formalize the recognition and parsing of OPLs. In the same paper a _monadic second-order (MSO) logic_ characterization of OPLs that naturally extends the classic one for regular languages was also produced. Furthermore, [15] introduced the _operator precedence expressions (OPE)_ which extend traditional regular expressions in the same way as the MSO logic for OPLs extends the traditional one for regular languages. The extension of fundamental algebraic and logic properties of regular languages to OPLs has been completed in [11], where a characterization of OPLs in terms of a syntactic congruence with a finite number of equivalence classes is given.
Another distinguishing property of OPLs is their _local parsability_, i.e. the fact that a chunk of text included within a pair of symmetric precedence relations can be deterministically parsed even without knowing its context [3]. This feature was exploited to produce a parallel parser generator which exhibited high performances w.r.t. traditional sequential parsers [2].
A different research branch --which however is not the core aspect of this paper-- exploited the algebraic and logic properties of this family to extend to it the successful verification techniques based on model checking, with performances comparable to those obtained for regular languages [4]. In summary, this historical family of languages enjoys properties that allowed relevant modern applications such as automatic verification and parallel compilation.
It must be pointed out, however, that many --not all-- of the algebraic properties discovered during such a research activity that lasted several decades were proved by assuming a hypothesis on the precedence relations defined on the input alphabet. Although from a theoretical point of view this hypothesis slightly affects the generative power of OPGs --but is not necessary, e.g., for OPAs and MSO logic so that these two formalisms are a little more powerful than OPGs--, so far no practical limitation due to it was discovered in terms of formalizing the syntax of real-life programming and data description languages. Thus, it has been constantly adopted in the various developments to avoid making the mathematical notation and technical details too cumbersome.
The recent contribution [12], however, pointed out a weakness of the parallel compilation algorithm described in [2] which in some cases hampers partitioning the input to be parsed into well-balanced chunks, so that the benefits of parallelism are affected. Intuitively, the weakness is due to the fact that the "normal" precedence relation on the arithmetic operator + compels parsing a sequence thereof by associating the operators either to the left or to the right, so that parsing necessarily becomes sequential in this case. The authors also proposed a special technique to overcome this difficulty by allowing for an acceptable level of ambiguity, which in the case of OPGs determines a conflict for some precedence relations on the terminal alphabet.
Such a normal precedence relation, however, which either lets + yield precedence to, or take precedence over, itself, has no correspondence with the semantics of the operation, whose result does not depend on the order of association. So, why not give the various occurrences of the + operator the same precedence level, as suggested by arithmetic's laws?4 Figure 1 gives an intuitive idea of the different structures given to a sequence of + operators by traditional OPGs and by the natural semantics of the sum operation. More precise explanations will be given in the following sections.
Footnote 4: Of course, assuming that the use of parentheses does not alter the normal precedence between the various operators.
The answer to this question comes exactly from the above mentioned hypothesis: it forbids cyclic sequences of symbols that are at the same level of precedence, so that + cannot be at the same level as itself. What was deemed an irrelevant theoretical limit hid a conceptual inadequacy which had no practical impact until OPL properties were exploited to build parallel compilers. We felt
Figure 1: Left-associative syntax tree (left) vs equal-level one (right) of the plus operator. The left syntax tree imposes a sequential left-to-right parsing and semantic processing whereas the right one can be split onto several branches to be partially processed independently and further aggregated.
therefore "compelled" to finally remove this relatively disturbing hypothesis: this is the object of the present paper.
As often claimed in previous literature when discussing the impact of forbidding cycles of operators all at the same level of precedence, removing such a restriction requires allowing grammar _right hand sides (rhs) to include the Kleene \({}^{*}\) operator_ (or the \({}^{+}\) one which excludes the empty string from its result). Thus, we introduce the _Cyclic operator-precedence grammars (C-OPG)_ which include the above feature: whereas such a feature is often used in general context-free grammars to make the syntax of programming languages more compact but does not increase their expressive power, we show that C-OPGs are now fully equivalent to OPAs and other formalisms to define OPLs, such as the MSO logic. We also show that all results previously obtained under the above hypothesis still hold by using C-OPGs instead of traditional OPGs. Although the goal of this paper is not to develop parallel compilation algorithms rooted in C-OPGs, we show how they naturally overcome the difficulty pointed out by [12] and would allow revisiting their techniques, or improving the efficiency of our previous parallel parser [2].
## 2 Background
We assume some familiarity with the classical literature on formal language and automata theory, e.g., [18, 10]. Here, we just list and explain our notations for the basic concepts we use from this theory. The terminal alphabet is usually denoted by \(\Sigma\), and the empty string is \(\varepsilon\). For a string, or set, \(x\), \(|x|\) denotes the length, or the cardinality, of \(x\). The character \(\#\), not present in the terminal alphabet, is used as string _delimiter_, and we define the alphabet \(\Sigma_{\#}=\Sigma\cup\{\#\}\).
### Regular languages: automata, regular expressions, logic
A _finite automaton_ (FA) \(\mathcal{A}\) is defined by a 5-tuple \((Q,\Sigma,\delta,I,F)\) where \(Q\) is the set of states, \(\delta\) the _state-transition relation_ (or its _graph_ denoted by \(\longrightarrow\)), \(\delta\subseteq Q\times\Sigma\times Q\); \(I\) and \(F\) are the nonempty subsets of \(Q\) respectively comprising the initial and final states. If the tuple \((q,a,q^{\prime})\) is in the relation \(\delta\), the edge \(q\stackrel{{ a}}{{\longrightarrow}}q^{\prime}\) is in the graph. The transitive closure of the relation is defined as usual. Thus, for a string \(x\in\Sigma^{*}\) such that there is a path from state \(q\) to \(q^{\prime}\) labeled with \(x\), the notation \(q\stackrel{{ x}}{{\longrightarrow}}q^{\prime}\) is equivalent to \((q,x,q^{\prime})\in\delta^{*}\); if \(q\in I\) and \(q^{\prime}\in F\), then the string \(x\) is _accepted_ by \(\mathcal{A}\), and \(L(\mathcal{A})\) is the language of the accepted strings.
A _regular expression_ (RE) over an alphabet \(\Sigma\) is a well-formed formula made with the characters of \(\Sigma\), \(\emptyset\), \(\varepsilon\), the Boolean operators \(\cup,\neg,\cap\), the concatenation \(\cdot\), and the Kleene star operator \({}^{*}\). We also use the operator \({}^{+}\), and we assume that its scope is always marked by parentheses. An RE \(E\) defines a language over \(\Sigma\), denoted by \(L(E)\), in the natural way.
Finite automata and regular expressions define the same family of languages named _regular_ (or rational) languages (REG).
### Grammars
Definition 1 (Grammar and language): A _context-free (CF) grammar_ is a tuple \(G=(\Sigma,V_{N},P,S)\) where \(\Sigma\) and \(V_{N}\), with \(\Sigma\cap V_{N}=\emptyset\), are resp. the terminal and the nonterminal alphabets, the total alphabet is \(V=\Sigma\cup V_{N}\), \(P\subseteq V_{N}\times V^{*}\) is the rule (or production) set, and \(S\subseteq V_{N}\), \(S\neq\emptyset\), is the axiom set. For a generic rule, denoted as \(A\to\alpha\), where \(A\) and \(\alpha\) are resp. called the left/right hand sides (lhs / rhs), the following forms are relevant:
* axiomatic: \(A\in S\)
* terminal: \(\alpha\in\Sigma^{+}\)
* empty: \(\alpha=\varepsilon\)
* renaming: \(\alpha\in V_{N}\)
* operator: \(\alpha\not\in V^{*}V_{N}V_{N}V^{*}\), i.e., at least one terminal is interposed between any two nonterminals occurring in \(\alpha\)
A grammar is called _backward deterministic_ or a BD-grammar (or _invertible_) if \((B\to\alpha,C\to\alpha\in P)\) implies \(B=C\).
If all rules of a grammar are in operator form, the grammar is called an _operator grammar_ or O-grammar.
For brevity we take for granted the usual definition of _derivation_, denoted by the symbols \(\Rightarrow_{G}\) (immediate derivation), \(\stackrel{*}{\Rightarrow}_{G}\) (reflexive and transitive closure of \(\Rightarrow_{G}\)), \(\stackrel{+}{\Rightarrow}_{G}\) (transitive closure of \(\Rightarrow_{G}\)), \(\stackrel{m}{\Rightarrow}_{G}\) (derivation in \(m\) steps); the subscript \(G\) will be omitted whenever clear from the context. We also take for granted the notion of _syntax tree (ST)_. As usual, the _frontier_ of a syntax tree is the ordered left to right sequence of the leaves of the tree.
The _language_ defined by a grammar starting from a nonterminal \(A\) is \(L_{G}(A)=\left\{w\mid w\in\Sigma^{*},A\stackrel{*}{\Rightarrow}_{G}w\right\}.\)
We call \(w\) a _sentence_ if \(A\in S\). The union of \(L_{G}(A)\) for all \(A\in S\) is the language \(L(G)\) defined by \(G\).
Two grammars defining the same language are _equivalent_. Two grammars generating the same set of syntax trees are _structurally equivalent_.
Notation:In the following, _unless otherwise explicitly stated_, lowercase letters at the beginning of the alphabet will denote terminal symbols, lowercase letters at the end of the alphabet will denote strings of terminals, Greek letters at the beginning of the alphabet will denote strings in \(V^{*}\). Capital letters will be used for nonterminal symbols.
Any grammar can be effectively transformed into an equivalent BD-grammar, and also into an O-grammar [1, 10] without renaming rules and without empty rules but possibly a single rule whose lhs is an axiom not otherwise occurring in any other production. _From now on, w.l.o.g., we exclusively deal with O-grammars without renaming and empty rules, with the only exception that, if \(\varepsilon\) is part of the language, there is a unique empty rule whose lhs is an axiom that does not appear in the rhs of any production.
### Operator precedence grammars
We define the operator precedence grammars (OPGs) following primarily [14].
Intuitively, operator precedence grammars are O-grammars whose parsing is driven by three _precedence relations_, called _equal_, _yield_ and _take_, included in \(\Sigma_{\#}\times\Sigma_{\#}\). They are defined in such a way that two consecutive terminals of a grammar's rhs --ignoring possible nonterminals in between-- are in the equal relation, while the two extreme ones --again, whether or not preceded or followed by a nonterminal-- are preceded by a yield and followed by a take relation, respectively; in this way a complete rhs of a grammar rule is identified and can be _reduced_ to a corresponding lhs by a typical bottom-up parsing. More precisely, the three relations are defined as follows. Subsequently we show how they can drive the bottom-up parsing of sentences.
Definition 2 ([8]): Let \(G=(\Sigma,V_{N},P,S)\) be an O-grammar. Let \(a,b\) denote elements in \(\Sigma\), \(A,B\) in \(V_{N}\), \(C\) either an element of \(V_{N}\) or the empty string \(\varepsilon\), and \(\alpha,\beta\) range over \(V^{*}\). The left and right terminal sets of nonterminals are respectively:
\[\mathcal{L}_{G}(A)=\left\{a\in\Sigma\mid\exists C:A\stackrel{*}{\Rightarrow}_{G}Ca\alpha\right\}\text{ and }\mathcal{R}_{G}(A)=\left\{a\in\Sigma\mid\exists C:A\stackrel{*}{\Rightarrow}_{G}\alpha aC\right\}.\]
(The grammar name will be omitted unless necessary to prevent confusion.)
The _operator precedence (OP) relations_ are defined over \(\Sigma_{\#}\times\Sigma_{\#}\) as follows:
* equal in precedence: \(a\doteq b\iff\exists A\rightarrow\alpha aCb\beta\in P\)
* takes precedence: \(a\gtrdot b\iff\exists A\rightarrow\alpha Bb\beta\in P,a\in\mathcal{R}(B)\); \(a\gtrdot\#\iff a\in\mathcal{R}(B),B\in S\)
* yields precedence: \(a\lessdot b\iff\exists A\rightarrow\alpha aB\beta\in P,b\in\mathcal{L}(B)\); \(\#\lessdot b\iff b\in\mathcal{L}(B),B\in S.\)
The OP relations can be collected into a \(|\Sigma_{\#}|\times|\Sigma_{\#}|\) array, called the _operator precedence matrix_ of the grammar, \(OPM(G)\): for each (ordered) pair \((a,b)\in\Sigma_{\#}\times\Sigma_{\#}\), \(OPM_{a,b}(G)\) contains the OP relations holding between \(a\) and \(b\).
Consider a square matrix: \(M=\left\{M_{a,b}\subseteq\{\lessdot,\doteq,\gtrdot\}\mid a,b\in\Sigma_{\#}\right\}\). Such a matrix is called _conflict-free_ iff \(\forall a,b\in\Sigma_{\#}\), \(0\leq|M_{a,b}|\leq 1\). A conflict-free matrix is called _total_ or _complete_ iff \(\forall a,b\in\Sigma_{\#}\), \(|M_{a,b}|=1\). By convention, if \(M_{\#,\#}\) is not empty, \(M_{\#,\#}=\{\doteq\}\). A matrix is \(\doteq\)-_acyclic_ if the transitive closure of the \(\doteq\) relation over \(\Sigma\times\Sigma\) is irreflexive.
We extend the set inclusion relations and the Boolean operations in the obvious cell by cell way, to any two matrices having the same terminal alphabet. Two matrices are _compatible_ iff their union is conflict-free.
Definition 3 (Operator precedence grammar): A grammar \(G\) is an _operator precedence grammar_ (OPG) iff the matrix \(OPM(G)\) is conflict-free, i.e. the three OP relations are disjoint. An OPG is \(\doteq\)-acyclic if \(OPM(G)\) is so. An _operator precedence language_ (OPL) is a language generated by an OPG.
Figure 2 (left) displays an OPG, \(G_{AE}\), which generates simple, unparenthesized arithmetic expressions and its OPM (center). The left and right terminal sets of \(G_{AE}\)'s nonterminals \(E\), \(T\) and \(F\) are, respectively: \(\mathcal{L}(E)=\{+,\times,n\}\), \(\mathcal{L}(T)=\{\times,n\}\), \(\mathcal{L}(F)=\{n\}\), \(\mathcal{R}(E)=\{+,\times,n\}\), \(\mathcal{R}(T)=\{\times,n\}\), and \(\mathcal{R}(F)=\{n\}\).
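To make the fixpoint nature of these sets concrete, the following Python sketch computes \(\mathcal{L}\) and \(\mathcal{R}\) along the lines of Definition 2. Since the rules of \(G_{AE}\) appear only in a figure not reproduced here, the `RULES` dictionary below is an assumed grammar that is merely consistent with the sets listed above ('x' stands for \(\times\)); it is an illustration, not the paper's artifact.

```python
# Hypothetical O-grammar consistent with the reported L/R sets of G_AE.
RULES = {
    'E': [['E', '+', 'T'], ['T', 'x', 'F'], ['n']],
    'T': [['T', 'x', 'F'], ['n']],
    'F': [['n']],
}

def terminal_sets(rules):
    """Fixpoint computation of the left/right terminal sets."""
    L = {A: set() for A in rules}
    R = {A: set() for A in rules}
    changed = True
    while changed:
        changed = False
        for A, rhss in rules.items():
            for rhs in rhss:
                first, last = rhs[0], rhs[-1]
                new_L = set(L[first]) if first in rules else {first}
                if first in rules and len(rhs) > 1:
                    new_L.add(rhs[1])   # terminal right after the leading nonterminal
                new_R = set(R[last]) if last in rules else {last}
                if last in rules and len(rhs) > 1:
                    new_R.add(rhs[-2])  # terminal right before the trailing nonterminal
                if not (new_L <= L[A] and new_R <= R[A]):
                    L[A] |= new_L
                    R[A] |= new_R
                    changed = True
    return L, R

L, R = terminal_sets(RULES)
print(L['E'], R['E'])  # both {'+', 'x', 'n'} (up to set ordering)
```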
Remark: If the relation \(\doteq\) is acyclic, then the length of the rhs of any rule of \(G\) is bounded by the length of the longest \(\doteq\)-chain in \(OPM(G)\).
Unlike the arithmetic relations having similar typography, the OP relations do not enjoy any of the transitive, symmetric, reflexive properties. We kept the original Floyd's notation but we urge the reader not to be confused by the similarity of the two notations.
It is known that the family of OPLs is strictly included within the deterministic and reverse-deterministic CF family, i.e., the languages that can be deterministically parsed both from left to right and from right to left.
The key feature of OPLs is that a conflict-free OPM \(M\) defines a universe of _strings compatible with \(M\)_ and associates to each of them a unique _syntax tree_ whose internal nodes are unlabeled and whose leaves are elements of \(\Sigma\), or, equivalently, a unique parenthesization. We illustrate such a feature through a simple example and refer the reader to previous literature for a thorough description of OP parsing [9, 14].
Example 4: Consider the \(OPM(G_{AE})\) of Figure 2 and the string \(n+n\times n+n\). Display all precedence relations holding between consecutive terminal characters, _including the relations with the delimiters #_ as shown below:
\[\#\lessdot n\gtrdot+\lessdot n\gtrdot\times\lessdot n\gtrdot+\lessdot n\gtrdot\#\]
each pair \(\lessdot,\gtrdot\) (with no further \(\lessdot,\gtrdot\) in between) includes a _possible_ rhs of a production of _any OPG_ sharing the OPM with \(G_{AE}\), not necessarily a \(G_{AE}\) rhs. Thus, as it happens in typical bottom-up parsing, we replace --_possibly in parallel_-- each string included within the pair \(\lessdot,\gtrdot\) with a _dummy nonterminal_\(N\); this is because nonterminals are irrelevant for OPMs. The result is the string \(\#N+N\times N+N\#\). Next, we compute again the precedence relation
Figure 2: \(G_{AE}\) (left), its OPM (center), and the syntax tree of \(n+n\times n+n\) according to the OPM (right).
between consecutive terminal characters by _ignoring nonterminals_: the result is \(\#\lessdot N+\lessdot N\times N\gtrdot+N\gtrdot\#\).
This time, there is only one pair \(\lessdot,\gtrdot\) including a potential rhs determined by the OPM (the fact that the external \(\lessdot\) and \(\gtrdot\) "look matched" is coincidental, as can be easily verified by repeating the previous procedure with the string \(n+n\times n+n+n\)). Again, we replace the pattern \(N\times N\) with the dummy nonterminal \(N\); notice that there is no doubt about associating the two \(N\) to the \(\times\) rather than to one of the adjacent \(+\) symbols: if we replaced, say, just the \(\times\) with an \(N\) we would obtain the string \(N+NNN+N\) which cannot be derived by an O-grammar. By recomputing the precedence relations we obtain the string \(\#\lessdot N+N\gtrdot+N\gtrdot\#\). Finally, by twice replacing \(N+N\) by \(N\) we obtain \(\#N\#\).
The result of the whole bottom-up reduction procedure is synthetically represented by the _syntax tree_ of Figure 2 (right), which shows the precedence of the multiplication operation over the additive one in traditional arithmetics. It also suggests a natural association to the left of both operations: if we reversed the order of the rhs of the rules rewriting \(E\) and \(T\), the structure of the tree would have suggested associativity to the right of both operations, which would not have altered the semantics of the two operations, which can indifferently be associated to the left or to the right; not so if we dealt with, say, subtraction or division, which instead _impose_ association to the left.
Notice that the tree of Figure 2 has been obtained --uniquely and deterministically-- by using exclusively the OPM, not the grammar \(G_{AE}\) although the string \(n+n\times n+n\in L(G_{AE})\)6.
Footnote 6: The above procedure that led to the syntax tree of Figure 2 could be easily adapted to become an algorithm that produces a new syntax tree whose internal nodes are labeled by \(G_{AE}\)’s nonterminals. Such an algorithm could be made deterministic by transforming \(G_{AE}\) into an equivalent BD grammar (sharing the same OPM).
Obviously, all sentences of \(L(G_{AE})\) can be given a syntax tree by \(OPM(G_{AE})\), but there are also strings in \(\Sigma^{*}\) that can be parsed according to the same OPM but are not in \(L(G_{AE})\). E.g., the string \(+++\) is parsed according to the \(OPM(G_{AE})\) as a ST that associates the \(+\) characters to the left. Notice also that, in general, not every string in \(\Sigma^{*}\) is assigned a syntax tree by an OPM; e.g., in the case of \(OPM(G_{AE})\) the parsing procedure applied to \(nn\) is immediately blocked since there is no precedence relation between \(n\) and itself.
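The reduction procedure of Example 4 can be rendered as a classic stack-based parser. The following Python sketch hard-codes the OPM of Figure 2 ('x' stands for \(\times\)), uses 'N' for the dummy nonterminal, and is illustrative only; it makes no claim about the actual parsers cited in this paper.

```python
LT, EQ, GT = '<', '=', '>'       # yields, equal in, takes precedence
OPM = {('#', 'n'): LT, ('#', '+'): LT, ('#', 'x'): LT,
       ('n', '+'): GT, ('n', 'x'): GT, ('n', '#'): GT,
       ('+', 'n'): LT, ('+', 'x'): LT, ('+', '+'): GT, ('+', '#'): GT,
       ('x', 'n'): LT, ('x', 'x'): GT, ('x', '+'): GT, ('x', '#'): GT}
# note: no entry for ('n', 'n'), so e.g. "nn" blocks the parser

def op_parse(tokens):
    """Return True iff the tokens reduce to a single dummy nonterminal N."""
    stack = [('#', None)]        # pairs (symbol, relation with previous terminal)
    buf = list(tokens) + ['#']
    i = 0
    while True:
        top = next(s for s, _ in reversed(stack) if s != 'N')
        a = buf[i]
        if top == '#' and a == '#':
            return len(stack) == 2 and stack[1][0] == 'N'
        rel = OPM.get((top, a))
        if rel is None:
            return False         # no precedence relation: parsing is blocked
        if rel in (LT, EQ):
            stack.append((a, rel))   # push/shift the terminal
            i += 1
        else:                        # GT: reduce the handle to N
            while True:
                sym, r = stack.pop()
                if sym != 'N' and r == LT:
                    break
            if stack[-1][0] == 'N':  # a nonterminal just left of the handle
                stack.pop()          # belongs to the handle too
            stack.append(('N', None))

print(op_parse('n+nxn+n'))  # True
print(op_parse('nn'))       # False: no relation between n and itself
```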
The following definition synthesizes the concepts introduced by Example 4.
Definition 5 (OP-alphabet and Maxlanguage):
* A string in \(\Sigma^{*}\) is _compatible_ with an OPM \(M\) iff the procedure described in Example 4 terminates by producing the pattern \(\#N\#\). The set of all strings compatible with an OPM \(M\) is called the _maxlanguage_ or the _universe_ of \(M\) and is simply denoted as \(L(M)\).
* Let \(M\) be a conflict-free OPM over \(\Sigma_{\#}\times\Sigma_{\#}\). We use the same identifier \(M\) to denote the --partial-- function \(M\) that assigns to strings in \(\Sigma^{*}\) their unique ST as informally illustrated in Example 4.
* _The pair_ \((\Sigma,M)\) _where_ \(M\) _is a conflict-free OPM over_ \(\Sigma_{\#}\times\Sigma_{\#}\)_, is called an_ OP-alphabet_. We introduce the concept of OP-alphabet as a pair to emphasize that it defines a universe of strings on the alphabet_ \(\Sigma\) _--not necessarily covering the whole_ \(\Sigma^{*}\)_-- and implicitly assigns them a structure univocally determined by the OPM, or, equivalently, by the function_ \(M\)_._
* _The class of_ \((\Sigma,M)\)-compatible _OPGs and OPLs are:_ \[\mathscr{G}_{M}=\{G\mid G\text{ is an OPG and }OPM(G)\subseteq M\},\quad \mathscr{L}_{M}=\{L(G)\mid G\in\mathscr{G}_{M}\}.\]
Various formal properties of OPGs and OPLs are documented in the literature, e.g., in [6, 5, 14]. The next proposition recalls those that are relevant for this article.
Proposition 6 (Algebraic properties of OPGs and OPLs):
1. _If an OPM_ \(M\) _is total, then the corresponding homonymous function is total as well, i.e.,_ \(L(M)=\Sigma^{*}\)_._
2. _Let_ \((\Sigma,M)\) _be an OP-alphabet where_ \(M\) _is_ \(\doteq\)_-acyclic. The class_ \(\mathscr{G}_{M}\) _contains an OPG, called the_ maxgrammar _of_ \(M\)_, denoted by_ \(G_{max,M}\)_, which generates the maxlanguage_ \(L(M)\)_. For all grammars_ \(G\in\mathscr{G}_{M}\)_,_ \(L(G)\subseteq L(M)\)_._
3. _The closure properties of the family_ \(\mathscr{L}_{M}\) _of_ \((\Sigma,M)\)_-compatible OPLs defined by a total OPM are the following:_ * \(\mathscr{L}_{M}\) _is closed under union, intersection and set-difference, therefore also under complement (if a maxgrammar of_ \(M\) _exists)._ * \(\mathscr{L}_{M}\) _is closed under concatenation._ * _if_ \(M\) _is_ \(\doteq\)_-acyclic,_ \(\mathscr{L}_{M}\) _is closed under Kleene star._
Remark: Thanks to the fact that a conflict-free OPM assigns to each string at most one ST --and exactly one if the OPM is complete-- the above closure properties of OPLs w.r.t. Boolean operations automatically extend to sets of their STs. The same does not apply to the case of concatenation which in general may produce significant reshaping of the original STs [5]. Furthermore, any complete, conflict-free, \(\doteq\)-acyclic OPM defines a _universe of STs_ whose frontiers are \(\Sigma^{*}\).
### Chains and Operator Precedence Automata (OPA)
The notion of _chain_ introduced next is an alternative way to represent STs where internal nodes are irrelevant and "anonymized".
Definition 7 (Chains): Let \((\Sigma,M)\) be an OP-alphabet.
* A _simple chain_ is a word \(a_{0}a_{1}a_{2}\ldots a_{n}a_{n+1}\), written as \({}^{a_{0}}[a_{1}a_{2}\ldots a_{n}]^{a_{n+1}}\), such that: \(a_{0},a_{n+1}\in\Sigma\cup\{\#\}\), \(a_{i}\in\Sigma\) for every \(i:1\leq i\leq n\), \(M_{a_{0}a_{n+1}}\neq\emptyset\), and \(a_{0}\lessdot a_{1}\doteq a_{2}\ldots a_{n-1}\doteq a_{n}\gtrdot a_{n+1}\).
* \(A\) _composed chain_ is a word_ \(a_{0}x_{0}a_{1}x_{1}a_{2}\ldots a_{n}x_{n}a_{n+1}\)_, with_ \(x_{i}\in\Sigma^{*}\)_, where_ \({}^{a_{0}}[a_{1}a_{2}\ldots a_{n}]^{a_{n+1}}\) _is a simple chain, and either_ \(x_{i}=\varepsilon\) _or_ \({}^{a_{i}}[x_{i}]^{a_{i+1}}\) _is a chain (simple or composed), for every_ \(i:0\leq i\leq n\)_. Such a composed chain will be written as_ \({}^{a_{0}}[x_{0}a_{1}x_{1}a_{2}\ldots a_{n}x_{n}]^{a_{n+1}}\)_._
* _The body of a chain_ \({}^{a}[x]^{b}\)_, simple or composed, is the word_ \(x\)
_Given a chain_ \({}^{a}[x]^{b}\) _the_ depth \(d(x)\) _of its body_ \(x\) _is defined recursively:_ \(d(x)=1\) _if the chain is simple, whereas_ \(d(x_{0}a_{1}x_{1}\ldots a_{n}x_{n})=1+\max_{i}d(x_{i})\)_. The depth of a chain is the depth of its body._
For instance, the ST of Figure 2 (right) is biunivocally represented by the composed chain \({}^{\#}[x_{0}+x_{1}]^{\#}\), where, in turn, \(x_{0}\) is the body of the composed chain \({}^{\#}[y_{0}+y_{1}]^{+}\), \(y_{0}\) is the body of the simple chain \({}^{\#}[n]^{+}\), \(y_{1}\) is the body of the composed chain \({}^{+}[z_{0}\times z_{1}]^{+}\), etc. The depth of the main chain is 4.
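The recursive definition of depth translates directly into code. In the following sketch a chain body is modeled as a Python list whose nested lists are the sub-bodies \(x_i\); this encoding is our own illustrative convention, not the paper's.

```python
def depth(body):
    """Depth of a chain body per Definition 7: 1 for a simple chain,
    1 + max over the depths of the nested sub-bodies otherwise."""
    subs = [depth(x) for x in body if isinstance(x, list)]
    return 1 + (max(subs) if subs else 0)

# Body of the main chain of Figure 2 (right): (n + (n x n)) + n
main = [[['n'], '+', [['n'], 'x', ['n']]], '+', ['n']]
print(depth(main))  # 4
```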
As well as an OPG selects a set of STs within the universe defined by its OPM, an _operator precedence automaton (OPA)_ selects a set of chains within the universe defined by an OP-alphabet.
Definition 8 (Operator precedence automaton): A nondeterministic _operator precedence automaton_ (OPA) is given by a tuple: \(\mathcal{A}=\langle\Sigma,M,Q,I,F,\delta\rangle\) where:
* \((\Sigma,M)\) is an operator precedence alphabet,
* \(Q\) is a set of states (disjoint from \(\Sigma\)),
* \(I\subseteq Q\) is a set of initial states,
* \(F\subseteq Q\) is a set of final states,
* \(\delta\), named transition function, is a triple of functions: \[\delta_{\text{shift}}:Q\times\Sigma\rightarrow\wp(Q)\qquad\delta_{\text{push }}:Q\times\Sigma\rightarrow\wp(Q)\qquad\delta_{\text{pop}}:Q\times Q \rightarrow\wp(Q)\]
We represent a nondeterministic OPA by a graph with \(Q\) as the set of vertices and \(\Sigma\cup Q\) as the set of edge labelings. The edges of the graph are denoted by different shapes of arrows to distinguish the three types of transitions: there is an edge from state \(q\) to state \(p\) labeled by \(a\in\Sigma\) denoted by a dashed (respectively, normal) arrow if and only if \(p\in\delta_{\text{shift}}(q,a)\) (respectively, \(p\in\delta_{\text{push}}(q,a)\)) and there is an edge from state \(q\) to state \(p\) labeled by \(r\in Q\) and denoted by a double arrow if and only if \(p\in\delta_{\text{pop}}(q,r)\).
To define the semantics of the automaton, we introduce some notations.
We use letters \(p,q,p_{i},q_{i},\ldots\) to denote states in \(Q\). Let \(\Gamma\) be \(\Sigma\times Q\) and let \(\Gamma^{\prime}\) be \(\Gamma\cup\{\bot\}\); we denote symbols in \(\Gamma^{\prime}\) as \([a,\ q]\) or \(\bot\). We set \(symbol([a,\ q])=a\), \(symbol(\bot)=\#\), and \(state([a,\ q])=q\). Given a string \(\Pi=\bot\pi_{1}\pi_{2}\ldots\pi_{n}\), with \(\pi_{i}\in\Gamma\), \(n\geq 0\), we set \(symbol(\Pi)=symbol(\pi_{n})\), including the particular case \(symbol(\bot)=\#\).
A _configuration_ of an OPA is a triple \(C=\langle\Pi,\ q,\ w\rangle\), where \(\Pi\in\bot\Gamma^{*}\), \(q\in Q\) and \(w\in\Sigma^{*}\#\). The first component represents the contents of the stack, the second component represents the current state of the automaton, while the third component is the part of input still to be read.
A _computation_ or _run_ of the automaton is a finite sequence of _moves_ or _transitions_\(C_{1}\vdash C_{2}\); there are three kinds of moves, depending on the precedence relation between the symbol on top of the stack and the next symbol to read:
**push move:** if \(symbol(\Pi)\lessdot\ a\) then \(\langle\Pi,\ p,\ ax\rangle\vdash\langle\Pi[a,\ p],\ q,\ x\rangle\), with \(q\in\delta_{\text{push}}(p,a)\);
**shift move:** if \(a\doteq b\) then \(\langle\Pi[a,\ p],\ q,\ bx\rangle\vdash\langle\Pi[b,\ p],\ r,\ x\rangle\), with \(r\in\delta_{\text{shift}}(q,b)\);
**pop move:** if \(a\gtrdot b\) then \(\langle\Pi[a,\ p],\ q,\ bx\rangle\vdash\langle\Pi,\ r,\ bx\rangle\), with \(r\in\delta_{\mathrm{pop}}(q,p)\).
Notice that shift and pop moves are never performed when the stack contains only \(\bot\).
Push and shift moves update the current state of the automaton according to the transition function \(\delta_{\mathrm{push}}\) and \(\delta_{\mathrm{shift}}\), respectively: push moves put a new element on the top of the stack consisting of the input symbol together with the current state of the automaton, whereas shift moves update the top element of the stack by changing its input symbol only. The pop move removes the symbol on the top of the stack, and the state of the automaton is updated by \(\delta_{\mathrm{pop}}\) on the basis of the pair of states consisting of the current state of the automaton and the state of the removed stack symbol; notice that in this move the input symbol is used only to establish the \(\gtrdot\) relation and it remains available for the following move.
A configuration \(\langle\bot,\ q_{I},\ x\#\rangle\) is _initial_ if \(q_{I}\in I\); a configuration \(\langle\bot,\ q_{F},\ \#\rangle\) is _accepting_ if \(q_{F}\in F\). The language accepted by the automaton is:
\[L(\mathcal{A})=\left\{x\mid\langle\bot,\ q_{I},\ x\#\rangle\stackrel{{ *}}{{\vdash}}\langle\bot,\ q_{F},\ \#\rangle,q_{I}\in I,q_{F}\in F\right\}.\]
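The three moves translate directly into a driver loop. The following Python sketch assumes deterministic transition dictionaries `push[(q, a)]`, `shift[(q, a)]`, `pop[(q, p)]` (a simplification of the nondeterministic relations of Definition 8) and that all needed entries are defined; it mirrors the configurations \(\langle\Pi,q,w\rangle\) above, with an empty list playing the role of the stack bottom \(\bot\).

```python
def run(opm, push, shift, pop, q, finals, word):
    """A sketch of an OPA computation; returns True iff the word is accepted."""
    stack = []                       # stack symbols are mutable pairs [a, p]
    buf = list(word) + ['#']
    while True:
        a = buf[0]
        top = stack[-1][0] if stack else '#'
        if a == '#' and not stack:
            return q in finals       # accepting configuration <bottom, qF, #>
        rel = opm.get((top, a))
        if rel == '<':               # push move: store the current state
            stack.append([a, q]); q = push[(q, a)]; buf.pop(0)
        elif rel == '=':             # shift move: replace the top symbol only
            stack[-1][0] = a; q = shift[(q, a)]; buf.pop(0)
        elif rel == '>':             # pop move: the input symbol is kept
            _, p = stack.pop(); q = pop[(q, p)]
        else:
            return False             # undefined precedence relation: reject
```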
Example 9: The OPA depicted in Figure 3 (top, left), based on the OPM at the top right, accepts the language of arithmetic expressions enriched w.r.t. \(L(G_{AE})\) in that it introduces the use of explicit parentheses to alter the natural precedence of arithmetic operations. The same figure (bottom) also shows an accepting computation on input \(n+n\times ⦅n+n⦆\).
Definition 10: Let \(\mathcal{A}\) be an OPA. A _support_ for a simple chain \({}^{a_{0}}[a_{1}a_{2}\ldots a_{n}]^{a_{n+1}}\) is any path in \(\mathcal{A}\) of the form
\[q_{0}\xrightarrow{a_{1}}q_{1}\dashrightarrow\ldots\dashrightarrow q_{n-1}\stackrel{{a_{n}}}{{\dashrightarrow}}q_{n}\stackrel{{q_{0}}}{{\Longrightarrow}}q_{n+1} \tag{1}\]
Notice that the label of the last (and only) pop is exactly \(q_{0}\), i.e. the first state of the path; this pop is executed because of relations \(a_{0}\lessdot a_{1}\) and \(a_{n}\gtrdot a_{n+1}\).
A _support for the composed chain \({}^{a_{0}}[x_{0}a_{1}x_{1}a_{2}\ldots a_{n}x_{n}]^{a_{n+1}}\) is any path in \(\mathcal{A}\) of the form
\[q_{0}\stackrel{{ x_{0}}}{{\leadsto}}q_{0}^{\prime}\stackrel{{ a_{1}}}{{\longrightarrow}}q_{1}\stackrel{{ x_{1}}}{{\leadsto}}q_{1}^{\prime}\stackrel{{ a_{2}}}{{\longrightarrow}}\ldots\stackrel{{ a_{n}}}{{\longrightarrow}}q_{n}\stackrel{{ x_{n}}}{{\leadsto}}q_{n}^{\prime}\stackrel{{ q_{0}^{\prime}}}{{\Longrightarrow}}q_{n+1} \tag{2}\]
where for every \(i:0\leq i\leq n\):
* if \(x_{i}\neq\varepsilon\), then \(q_{i}\stackrel{{x_{i}}}{{\leadsto}}q_{i}^{\prime}\) is a support for the (simple or composed) chain \({}^{a_{i}}[x_{i}]^{a_{i+1}}\);
* if \(x_{i}=\varepsilon\), then \(q_{i}^{\prime}=q_{i}\).
Notice that the label of the last pop is exactly \(q_{0}^{\prime}\).
The support of a chain with body \(x\) will be denoted by \(q_{0}\stackrel{{ x}}{{\leadsto}}q_{n+1}\).
Notice that the context \(a,b\) of a chain \({}^{a}[x]^{b}\) is used by the automaton to build its support only because \(a\lessdot x\) and \(x\gtrdot b\); thus, the chain's body contains all information needed by the automaton to build the subtree whose frontier is that string, once it is understood that its first move is a push and its last one is pop. This is a distinguishing feature of OPLs, not shared by other deterministic languages: we call it the _locality principle_ of OPLs, which has been exploited to build parallel and/or incremental OP parsers [3].
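As a toy rendering of the locality principle (and not the parallel or incremental parsers of [2, 3]), the sketch below slices an input into the top-level spans of its \(\lessdot/\gtrdot\) profile; each span could then be handed to an independent worker. It assumes the profile is well nested, which holds for the examples seen so far.

```python
def top_level_fragments(tokens, opm):
    """Return the substrings enclosed in outermost <. ... .> pairs."""
    seq = ['#'] + list(tokens) + ['#']
    rels = [opm.get((seq[k], seq[k + 1])) for k in range(len(seq) - 1)]
    frags, level, start = [], 0, None
    for k, r in enumerate(rels):
        if r == '<':
            if level == 0:
                start = k            # fragment starts right after position k
            level += 1
        elif r == '>':
            level -= 1
            if level == 0:
                frags.append(seq[start + 1:k + 1])
    return frags

# With the OPM dictionary of the earlier parsing sketch,
# top_level_fragments('n+nxn+n', OPM) yields [['n'], ['n'], ['n'], ['n']]:
# each innermost handle is an independently parsable fragment.
```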
## 3 Cyclic Operator Precedence Grammars (C-OPGs) and their equivalence with OPAs
Proposition 6 shows that _some, but not all_, of the algebraic properties of OPLs depend critically on the \(\doteq\)-acyclicity hypothesis. This is due to the fact that without such a hypothesis the rhs of an OPG have an unbounded length but cannot be infinite: e.g., no OPG can generate the language \(\{a,b\}^{*}\) if \(a\doteq b\) and \(b\doteq a\). In most cases cycles of this type can be "broken" as it has been done up
Figure 3: An OPA (top, left), its OPM (top, right) and an example of computation for the language of Example 9 (bottom). Arrows \(\longrightarrow\), \(\dashrightarrow\) and \(\Longrightarrow\) denote push, shift and pop transitions, respectively. To avoid confusion with the overloaded parenthesis symbols, the parentheses used as terminal symbols are denoted as ⦅ and ⦆.
to now, e.g., to avoid the \(+\doteq+\) relation in arithmetic expressions by associating the operator indifferently to the right or to the left. Thus, although it is known that, from a theoretical point of view, the \(\doteq\)-acyclicity hypothesis affects the expressive power of OPGs,7 we have so far accepted the \(\doteq\)-acyclicity hypothesis to keep the notation as simple as possible. Recently, however, it has been observed [12] that such a restriction may hamper the benefits achievable by the parallel compilation techniques that exploit the local parsability property of OPLs [2]. Thus, it is time to introduce the necessary extension of OPGs so that the \(\doteq\)-acyclicity hypothesis can be avoided and they become fully equivalent to other formalisms to define OPLs.
Footnote 7: The language \(\{a^{n}(bc)^{n}\}\cup\{b^{n}(ca)^{n}\}\cup\{c^{n}(ab)^{n}\}\cup(abc)^{+}\) cannot be generated by an OPG because the \(a\doteq b\doteq c\doteq a\) relations are necessary [7].
Definition 11 (Cyclic Operator Precedence Grammar): __
* A \({}^{+}\)-O-expression on \(V^{*}\) is an expression obtained from the elements of \(V\) by iterative application of concatenation and the usual \({}^{+}\) operator 8, provided that any substring thereof has no two adjacent nonterminals; for convenience, and w.l.o.g., we assume that all subexpressions that are argument of the \({}^{+}\) operator are terminated by a terminal character. Footnote 8: For our purposes \({}^{+}\) is more convenient than \({}^{*}\) without affecting the generality.
* A Cyclic O-grammar (C-OG) is an O-grammar whose production rhs are \({}^{+}\)-O-expressions.
* For a grammar rule \(A\to\alpha\) of an C-OG, the \(\underset{G}{\Longrightarrow}\) (immediate derivation) relation is defined as \(\beta A\gamma\Longrightarrow\beta\zeta\gamma\) iff \(\zeta\) is a string belonging to the language defined by the \({}^{+}\)-O-expression \(\alpha\), \(L(\alpha)\).
* The equal in precedence relation is redefined as \(a\doteq b\) iff \(\exists A\to\alpha\wedge\exists\zeta=\eta aBb\theta\mid(B\in V_{N}\cup\{ \varepsilon\}\wedge\zeta\in L(\alpha))\). The other precedence relations remain defined as for non-cyclic O-grammars.
* A C-OG is a cyclic operator precedence grammar (C-OPG) iff its OPM is conflict-free.
As a consequence of the definition of the immediate derivation relation for C-OPGs the syntax-trees derived therefrom can be unranked, i.e., their internal nodes may have an unbounded number of children.
Example 12: The C-OPG depicted in Figure 4 with its OPM generates a fairly complete language of --not necessarily fully-- parenthesized arithmetic expressions involving the four basic operations: as usual the multiplicative operations take precedence over the additive ones; furthermore subtraction takes precedence over sum and division over multiplication. The key novelty w.r.t. the traditional way of formalizing arithmetic expressions by means of OPGs are the \(+\doteq+\) and \(\times\doteq\times\) OP relations; on the contrary we kept the structure that associates subtraction and division to the left, so that the grammar's STs --an example thereof is given in Figure 5-- now fully reflect the semantics of the arithmetic operations.
\(G_{\text{A-AE}}:\)
\(S=\{P,T,M,N,F,D,E\}\)
\(P\rightarrow(T+)^{+}T\)
\(T\rightarrow(F\times)^{+}F\mid M-N\mid D/E\)
\(M\to M-N\mid(F\times)^{+}F\mid D/E\)
\(N\rightarrow(F\times)^{+}F\mid D/E\)
\(F\to D/E\)
\(D\to D/E\)
\(\{P,T,M,N,F,D,E\}\rightarrow ⦅\,\{P,T,M,N,F,D,E\}\,⦆\mid n\)
Notice that \(G_{\text{A-AE}}\) is purposely ambiguous to keep it compact. To support deterministic parsing it should be transformed into a BD --and therefore unambiguous-- form.
By looking at the ST of Figure 5 and comparing it with the original Figure 1, one can get a better grasp of why the introduction of cyclic \(\doteq\) relations can support more effective parallel compilation algorithms for OPLs. Parallel parsing and compilation for OPLs are rooted in the _local parsability property_ of this family: thanks to this property, any fragment of input string enclosed within a pair of corresponding \(\lessdot\) and \(\gtrdot\) OP relations can be processed in parallel with other similar fragments. However, if, say, the \(+\) operator is associated to the left (or right), as in the case of the left ST of Figure 1, the parsing of a sequence of \(+\) must necessarily proceed sequentially from left to right. Conversely, if the ST has a structure like that of the right tree of Figure 1, the sequence of \(+\) --whether intermixed with other subtrees or not-- can be arbitrarily split into several branches which can be parsed and compiled in parallel and, after that, can be joined into a unique subtree as imposed by the \(\doteq\) OP relation between the corresponding extreme terminals of contiguous branches.
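The following sketch makes the split-and-join idea tangible for the semantic processing phase: under \(+\doteq+\) the children of the unranked \(+\) node can be grouped arbitrarily, evaluated independently, and merged, whereas a left-associative tree forces a sequential fold. Worker count and values are illustrative, not taken from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(summands, n_workers=4):
    """Group the children of an unranked + node into arbitrary chunks,
    evaluate each chunk independently, then join the partial results."""
    size = max(1, len(summands) // n_workers)
    chunks = [summands[i:i + size] for i in range(0, len(summands), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        partials = list(ex.map(sum, chunks))  # each branch on its own worker
    return sum(partials)                      # join under the single + node

print(parallel_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
# With a left-associative tree, the same computation is forced to be the
# sequential left-to-right fold sum(summands).
```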
### Equivalence between C-OPGs and OPAs
The equivalence is obtained by a slight modification of the analogous proof given in [13] where the additional hypothesis of \(M\) being \(\doteq\)-acyclic was exploited.
First, we describe a procedure to build an OPA equivalent to a C-OPG. Then, we provide the converse construction.
Theorem 3.1 (From C-OPGs to OPAs): _Let \((\Sigma,M)\) be an OP-alphabet. For any C-OPG defined thereon an equivalent OPA can be effectively built._
Proof: A nondeterministic OPA\({}^{1}\) \(\mathcal{A}=\langle\Sigma,M,Q,I,F,\delta\rangle\), with the same precedence matrix \(M\) as \(G\), is built from a given C-OPG \(G\) in such a way that a successful computation thereof corresponds to building bottom-up a syntax tree of \(G\): the automaton performs a push transition when it reads the first terminal of a new rhs; it performs a shift transition when it reads a terminal symbol inside a rhs, i.e. a leaf with some left sibling leaf. It performs a pop transition when it completes the recognition of a rhs; then it guesses (nondeterministically) the nonterminal at the lhs. Each state contains two pieces of information: the first component represents the prefix of the rhs under construction, whereas the second component is used to recover the rhs _previously under construction_ (see Figure 6) whenever all rhs nested below have been completed. Precisely, the construction of \(\mathcal{A}\) is defined as follows.
Footnote 1: Any nondeterministic OPA can be transformed into a deterministic one at the cost of a quadratically exponential increase in the size of the state space [13].
Let \(\bar{P}\) be the set of rhs \(\gamma\) where all \({}^{+}\) operators and related parentheses have been erased. Let \(\tilde{P}\) be the set of strings \(\tilde{\gamma}\in V^{+}\) belonging to the language of some rhs \(\gamma\) of \(P\), where \(\tilde{\gamma}\) is inductively defined as follows:
if \((\eta)^{+}\) is a subexpression of \(\gamma\) such that \(\eta\) is a single string \(\in V^{+}\) then \(\tilde{\eta}=\{\eta,\eta\eta\}\);
if \(\eta=\alpha_{1}(\beta_{1})^{+}\alpha_{2}(\beta_{2})^{+}\ldots\alpha_{n}\) where \(\alpha_{i}\in V^{*}\), then \(\tilde{\eta}=\{\eta_{1},\eta_{1}\eta_{1}\}\) where \(\eta_{1}=\alpha_{1}\tilde{\beta}_{1}\alpha_{2}\tilde{\beta}_{2}\ldots\alpha_{n}\).
E.g., let \(\eta\) be \((Ba(bc)^{+})^{+}\); then \(\tilde{\eta}=\{Babc,Babcbc,BabcBabc,BabcBabcbc,BabcbcBabc,BabcbcBabcbc\}\) and \(\hat{\eta}=\{Babc\}\), where \(\hat{\eta}\) denotes the variant of \(\tilde{\eta}\) in which every \({}^{+}\) argument occurs exactly once; \(\hat{P}\) is defined from the \(\hat{\eta}\) in the same way as \(\tilde{P}\) is from the \(\tilde{\eta}\).
Let \(\mathbb{P}=\{\alpha\in V^{*}\Sigma\mid\exists A\to\eta\in P\wedge\exists\beta:\alpha\beta\in\tilde{\eta}\}\) be the set of prefixes, ending with a terminal symbol, of strings \(\in\tilde{P}\); define \(\mathbb{Q}=\{\varepsilon\}\cup\mathbb{P}\cup N\), \(Q=\mathbb{Q}\times(\{\varepsilon\}\cup\mathbb{P})\), \(I=\{\langle\varepsilon,\varepsilon\rangle\}\), and \(F=S\times\{\varepsilon\}\cup\{\langle\varepsilon,\varepsilon\rangle\text{ if }\varepsilon\in L(G)\}\). Note that \(|\mathbb{Q}|=1+|\mathbb{P}|+|N|\) is \(O(m^{h})\) where \(m\) is the maximum length of the rhs in \(P\), and \(h\) is the maximum nesting level of \({}^{+}\) operators in rhs; therefore \(|Q|\) is \(O(m^{2h})\).
The transition functions are defined by the following formulas, for \(a\in\Sigma\) and \(\alpha,\alpha_{1},\alpha_{2}\in\mathbb{Q}\), \(\beta,\beta_{1},\beta_{2}\in\{\varepsilon\}\cup\mathbb{P}\), and where for any expression \(\xi\), \(\bar{\xi}\) is obtained from \(\xi\) by erasing parentheses and \({}^{+}\) operators:
* \(\delta_{\text{shift}}(\langle\alpha,\beta\rangle,a)\ni\begin{cases}\text{if }\alpha\not\in N:&\begin{cases}\langle\bar{\eta}\bar{\zeta},\beta\rangle&\text{if }\exists A\to\gamma\mid\gamma=\eta(\zeta)^{+}\theta\wedge\alpha a=\bar{\eta}\bar{\zeta}\bar{\zeta},\ \alpha a\bar{\theta}\in L(\gamma)\cap\tilde{P}\\ \langle\alpha a,\beta\rangle&\text{otherwise}\end{cases}\\ \text{if }\alpha\in N:&\begin{cases}\langle\bar{\eta}\bar{\zeta},\beta\rangle&\text{if }\exists A\to\gamma\mid\gamma=\eta(\zeta)^{+}\theta\wedge\beta\alpha a=\bar{\eta}\bar{\zeta}\bar{\zeta},\ \beta\alpha a\bar{\theta}\in L(\gamma)\cap\tilde{P}\\ \langle\beta\alpha a,\beta\rangle&\text{otherwise}\end{cases}\end{cases}\)
* \(\delta_{\text{push}}(\langle\alpha,\beta\rangle,a)\ni\begin{cases}\langle a, \alpha\rangle&\text{if }\alpha\not\in N\\ \langle\alpha a,\beta\rangle&\text{if }\alpha\in N\end{cases}\)
* \(\delta_{\text{pop}}(\langle\alpha_{1},\beta_{1}\rangle,\langle\alpha_{2}, \beta_{2}\rangle)\ni\langle A,\gamma\rangle\)
for every \(A\) such that \(\begin{cases}\text{if }\alpha_{1}\notin N:A\to\alpha\in P\wedge\alpha_{1}\in L( \alpha)\cap\hat{P}\\ \text{if }\alpha_{1}\in N:A\to\delta\in P\wedge\beta_{1}\alpha_{1}\in L(\delta) \cap\hat{P}\end{cases}\)
and \(\gamma=\begin{cases}\alpha_{2}\text{ if }\alpha_{2}\notin N\\ \beta_{2}\text{ if }\alpha_{2}\in N.\end{cases}\)
The states reached by push and shift transitions have the first component in \(\mathbb{P}\). If state \(\langle\alpha,\beta\rangle\) is reached after a push transition, then \(\alpha\) is the prefix of the rhs (deprived of the \({}^{+}\) operators) that is currently under construction and \(\beta\) is the prefix previously under construction; in this case \(\alpha\) is either a terminal symbol or a nonterminal followed by a terminal one.
Figure 6: When parsing \(\alpha\), the prefix previously under construction is \(\beta\).
If the state is reached after a shift transition, and the \(\alpha\) component of the previous state was not a single nonterminal, then the new \(\alpha\) is the concatenation of the first component of the previous state with the read character. If, instead, the \(\alpha\) component of the previous state was a single nonterminal --which was produced by a pop transition-- then the new \(\alpha\) also includes the previous \(\beta\) (see Figure 6) and \(\beta\) is not changed from the previous state. However, if the new \(\alpha\) becomes such that a suffix thereof is a double occurrence of a string \(\zeta\in L((\zeta)^{+})\) --hence \(\alpha\in\mathbb{P}\)-- then the second occurrence of \(\zeta\) is cut from the new \(\alpha\), which therefore becomes a prefix of an element of \(\hat{P}\).
The states reached by a pop transition have the first component in \(N\): if \(\langle A,\gamma\rangle\) is such a state, then \(A\) is the corresponding lhs, and \(\gamma\) is the prefix previously under construction.
For instance, imagine that a C-OPG contains the rules \(A\to(Ba(bc)^{+})^{+}a\) and \(B\to h\) and that the corresponding OPA \(\mathcal{A}\) parses the string \(habcbchabca\): after scanning the prefix \(habcb\), \(\mathcal{A}\) has reduced \(h\) to \(B\) and has \(Babcb\) as the first component of its state; after reading the new \(c\) it recognizes that the suffix of the first state component would become a second instance of \(bc\) belonging to \((bc)^{+}\); thus, it goes back to \(Babc\). Then, it proceeds with a new reduction of \(h\) to \(B\) and, when reading the second \(a\) with a shift, appends \(Ba\) to its current \(\beta\), which was produced by the previous pop, so that the new \(\alpha\) becomes \(BabcBa\); after shifting \(b\) it reads \(c\) and realizes that its new \(\alpha\) would become \(BabcBabc\), i.e., an element of \((Ba(bc)^{+})^{+}\), and therefore "cuts" it to a single instance thereof, i.e., \(Babc\). Finally, after having shifted the last \(a\) it is ready for the last pop.
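The "cut" described in this example can be sketched as a small helper. The set of candidate strings \(\zeta\) (bodies of \({}^{+}\) subexpressions) is assumed to be precomputed from the grammar; the function illustrates only the truncation step, not the full transition logic.

```python
def cut_repeated_suffix(alpha, zetas):
    """If alpha ends with two adjacent copies of some zeta occurring under
    a + operator, erase the second copy, closing a loop in the state space."""
    for zeta in zetas:
        k = len(zeta)
        if len(alpha) >= 2 * k and alpha[-k:] == zeta == alpha[-2 * k:-k]:
            return alpha[:-k]        # keep a single instance of zeta
    return alpha

print(cut_repeated_suffix(list('Babcbc'), [list('bc')]))
# ['B', 'a', 'b', 'c']  -- the state component goes back to Babc
```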
Notice that the result of \(\delta_{\mathrm{shift}}\) and \(\delta_{\mathrm{push}}\) is a singleton, whereas \(\delta_{\mathrm{pop}}\) may produce several states, in case of repeated rhs. Thus, if \(G\) is BD, the corresponding \(\mathcal{A}\) is deterministic.
The equivalence between \(G\) and \(\mathcal{A}\) derives from the following Lemmas 14 and 15, when \(\beta=\gamma=\varepsilon\), \(\Pi=\bot\) and \(A\) is an axiom. Their statements are identical to Lemmas 3.2 and 3.3 given in [13] for the original construction applied to non-C-OPGs.
Lemma 14: _Let \(x\) be the body of a chain and \(\beta,\gamma\in\mathbb{P}\cup\{\varepsilon\}\). Then \(\langle\beta,\gamma\rangle\stackrel{{ x}}{{\rightsquigarrow}}q\) implies the existence of \(A\in N\) such that \(A\stackrel{{*}}{{\Rightarrow}}x\) in \(G\) and \(q=\langle A,\beta\rangle\)._
Lemma 15: _Let \(x\) be the body of a chain and \(A\in N\). Then, \(A\stackrel{{*}}{{\Rightarrow}}x\) in \(G\) implies \(\langle\beta,\gamma\rangle\stackrel{{ x}}{{\rightsquigarrow}} \langle A,\beta\rangle\) for every \(\beta,\gamma\in\mathbb{P}\cup\{\varepsilon\}\)._
The proof of the above lemmas is based on a natural induction on the depth \(h\) of the chains visited by the OPA and is not repeated here. We simply point out the only key difference due to the fact that simple chains generated by C-OPGs can have unbounded length.
During a series of shift transitions scanning a simple chain the \(\beta\) component of the state remains unchanged, whereas the \(\alpha\) component at each transition appends the read character --always under the constraint that it belongs to \(\mathbb{P}\)--up to the point where the last append would repeat a suffix of \(\alpha\) --say a string \(y\) in case of simple chains-- identically; if \(y\) occurs under the \({}^{+}\) operator of a rhs of
which \(\alpha\) is a prefix then the second occurrence of \(y\) is erased and the state is the same as after reading its first occurrence, thus closing a loop in the same way as it happens with finite state machines. As a consequence the OPA can repeat the loop reading \(y\) any number of times just as the C-OPG can generate, in an immediate derivation, any number of occurrences of \(y\) thanks to the \({}^{+}\) operator.
The converse statement that if \(A\stackrel{*}{\Rightarrow}x\) then \(\langle\beta,\gamma\rangle\stackrel{{x}}{{\rightsquigarrow}}\langle A,\beta\rangle\) for every \(\beta,\gamma\in\mathbb{P}\cup\{\varepsilon\}\) follows directly from the construction of \(\mathcal{A}\).
The generalization of the above reasoning to the case of composed chains perfectly parallels the induction step of the corresponding lemmas of [13].
Example 16: Figure 7 displays a run of the OPA obtained from the C-OPG of Figure 4 accepting the sentence \(n+n+n/n/n+n+n\).
The construction of a C-OPG equivalent to a given OPA is far simpler than the converse one, thanks to the explicit structure associated to words by the precedence matrix. The key difference w.r.t. the analogous construction given in [13] is that even simple chains can have unbounded length due to the fact that the \(\doteq\) relation may be circular.
First, in analogy with the definition of \(\tilde{P}\), we define _essential supports_ for both simple and composed chains, as those supports where possible cyclic behaviors of the OPA along a sequence of terminals --whether with interposing nonterminals or not-- occur exactly twice.
Figure 7: A run of the OPA built from the C-OPG of Figure 4 accepting the sentence \(n+n+n/n/n+n+n\). The states truncated by erasing a repeated suffix \(\zeta\) occurring under the scope of a \({}^{+}\) operator are emphasized in boldface.
**Definition 17**.: _(Essential chain supports) An essential support of a simple chain \({}^{a_{0}}[a_{1}a_{2}\ldots a_{n}]^{a_{n+1}}\) is any path in \(\mathcal{A}\) of the form_
\[q_{0}\xrightarrow{a_{1}}q_{1}\dashrightarrow\ldots\dashrightarrow q_{n-1} \stackrel{{ a_{n}}}{{\dashrightarrow}}q_{n}\stackrel{{ q_{0}}}{{\Longrightarrow}}q_{n+1} \tag{3}\]
_where any cycle \(q_{i}a_{i1}\ldots a_{ik}q_{i}\) is repeated exactly twice._
_Essential supports for composed chains are defined similarly._
For instance, with reference to the OPA built from the C-OPG of Figure 4 an essential support of the chain \({}^{\#}[n+n+n+n+n]^{\#}\) is:
\[\langle\varepsilon,\varepsilon\rangle\stackrel{{ n}}{{ \rightsquigarrow}}\langle T,\varepsilon\rangle\stackrel{{+}}{{ \rightsquigarrow}}\langle T+,\varepsilon\rangle\stackrel{{ n}}{{\rightsquigarrow}}\langle T+,T+\rangle\stackrel{{+}}{{ \rightsquigarrow}}\langle T+,T+\rangle\stackrel{{ n}}{{\rightsquigarrow}} \langle T+,T+\rangle\stackrel{{+}}{{\rightsquigarrow}} \langle T+,T+\rangle\stackrel{{+}}{{\rightsquigarrow}} \stackrel{{+}}{{\rightsquigarrow}}\] \[\langle T+,T+\rangle\stackrel{{ n}}{{\rightsquigarrow}} \langle T,T+\rangle\stackrel{{\langle T,\varepsilon\rangle}}{{ \Longrightarrow}}\langle P,\varepsilon\rangle\]
**Lemma 18**.: _The essential supports of simple chains of any OPA have an effectively computable bounded length._
**Theorem 19** (From OPAs to C-OPGs).: _Let \((\Sigma,M)\) be an OP-alphabet. For any OPA defined thereon an equivalent C-OPG can be effectively built._
Proof.: Given an OPA \(\mathcal{A}=\langle\Sigma,M,Q,I,F,\delta\rangle\), we show how to build an equivalent C-OPG \(G\) having operator precedence matrix \(M\). The equivalence between \(\mathcal{A}\) and \(G\) should then be rather obvious.
\(G\)'s nonterminals are the 4-tuples \((a,q,p,b)\in\Sigma\times Q\times Q\times\Sigma\), written as \(\langle{}^{a}p,q^{b}\rangle\). \(G\)'s rules are built as follows:
* for every essential support of a simple chain, the rule \[\langle{}^{a_{0}}q_{0},q_{n+1}{}^{a_{n+1}}\rangle\longrightarrow a_{1}a_{2} \ldots a_{n}\ ;\] where every double sequence \(a_{i1}...a_{ik}a_{i1}...a_{ik}\) is recursively replaced by \((a_{i1}...a_{ik})^{+}\) by proceeding from the innermost cycles to the outermost ones, is in \(P\); furthermore, if \(a_{0}=a_{n+1}=\#\), \(q_{0}\) is initial, and \(q_{n+1}\) is final, then \(\langle{}^{\#}q_{0},q_{n+1}{}^{\#}\rangle\) is in \(S\);
* for every essential support of a composed chain, add the rule \[\langle{}^{a_{0}}q_{0},q_{n+1}{}^{a_{n+1}}\rangle\longrightarrow\Lambda_{0} a_{1}\Lambda_{1}a_{2}\ldots a_{n}\Lambda_{n}\ ;\] where, for every \(i=0,1,\ldots,n\), \(\Lambda_{i}=\langle{}^{a_{i}}q_{i},q_{i}^{\prime a_{i+1}}\rangle\) if \(x_{i}\neq\varepsilon\) and \(\Lambda_{i}=\varepsilon\) otherwise, by replacing double cyclic sequences \(\alpha_{i}\alpha_{i}\) with \((\alpha_{i})^{+}\) in the same way as for simple chains; furthermore, if \(a_{0}=a_{n+1}=\#\), \(q_{0}\) is initial, and \(q_{n+1}\) is final, then add \(\langle{}^{\#}q_{0},q_{n+1}{}^{\#}\rangle\) to \(S\), and, if \(\varepsilon\) is accepted by \(\mathcal{A}\), add \(A\rightarrow\varepsilon\), \(A\) being a new axiom not otherwise occurring in any other rule.
Notice that the above construction is effective thanks to Lemma 18 and to the fact that subchains of composed chains are replaced by nonterminals \(\Lambda_{i}\).
_Remark_.: The definition of C-OPG and the constructions of Theorems 13 and 19 have been given with the same approach as used in [13], i.e., avoiding possible optimizations to keep technical details as simple and essential as possible. For instance, we allowed only the \({}^{+}\) operator in grammar's rhs as the minimum necessary extension to obtain the expressive power of OPAs: allowing, e.g., to use set union within the scope of a \({}^{+}\) operator would have allowed in some cases more compact grammars but such a choice would have also caused an explosion of the cardinality of the \(\tilde{P}\) set. For the same reason we excluded a priori renaming and \(\varepsilon\)-rules in our grammars.
## 4 Other equivalences among OPL formalisms
Figure 8 displays the five equivalent formalisms introduced in the literature to define OPLs. The reciprocal inclusions between MSO and OPA, between OPA and the finite equivalence classes of OPL syntactic congruence, and the inclusion of MSO in OPE have been proved without the hypothesis of non-circularity of the \(\doteq\) relation; the reciprocal inclusions between C-OPG and OPA have been restated in Section 3; it remains to consider the inclusion of OPE in OPG which was proved in [15] under the restrictive hypothesis. The proof used the claim that deriving an OPG from an OPE may exploit the closure properties of OPLs, in particular, w.r.t. the Kleene \({}^{*}\) operator; such a closure, however, was proved in [5] by using OPGs, again, under the restrictive hypothesis.
Although we will see, in the next section, that all closure properties of OPLs still hold even when there are circularities in the \(\doteq\) relation, here it is convenient to consider the case of the Kleene \({}^{*}\) operator of OPEs separately. The reason is that in OPEs the Kleene \({}^{*}\) operator is now applied to subexpressions independently of the OP relations between their last and first terminal character.
Figure 8: The relations among the various characterizations of OPLs.
Thus, it is first convenient to rewrite the OPE in a normal form using the \({}^{+}\) operator instead of the \({}^{*}\) one, to avoid having to deal explicitly with the case of the \(\varepsilon\) string. Then, subexpressions of type \((\alpha)^{+}\), where the last terminal of \(\alpha\) is not in relation \(\doteq\) with the first one, are replaced by the same procedure defined in [5] to prove effectively the closure w.r.t. the \({}^{*}\) operator. The new rules produce a right- or left-linear subtree of the occurrences of \(\alpha\), depending on the OP relation between the two extreme terminals of \(\alpha\), and avoid the use of the \({}^{*}\) and \({}^{+}\) operators, which are not permitted in the original OPGs.
The remaining substrings including the \({}^{+}\) operator are the new rhs of the C-OPG. The other technicalities of the construction of an OPG equivalent to an OPE are identical to those given in [15] and are not repeated here.
## 5 OPLs closure properties revisited
All major closure properties of OPLs have been originally proved by referring to their generating OPGs, and some of them, in particular the closure w.r.t. the Kleene \({}^{*}\) operator, required the \(\doteq\)-acyclicity hypothesis. Thus, it is necessary to prove them again. However, since some of those proofs are technically rather involved, here we simply observe that it is easier to restate the same properties by exploiting OPAs, which are now fully equivalent to C-OPGs.
Observe, in fact, that, thanks to the determinization of nondeterministic OPAs, closure w.r.t. boolean operations comes "for free". Closure w.r.t. concatenation can be seen as a corollary of the closure, proved in [13], of the concatenation between an OPL whose strings have finite length and an \(\omega\)-OPL, i.e., a language of infinite strings. The construction is based on a nondeterministic guess of the position where a string of the first language could end (and on a nontrivial technique to decide whether it could be accepted even in the absence of the \(\#\) delimiter). Then, the closure w.r.t. Kleene \({}^{*}\) is obtained simply by allowing the OPA to repeat such a guess any number of times until the real \(\#\) is found.
## 6 Conclusion
We have filled a longstanding "hole" in the theory of OPLs under the pressure of recent practical applications in the field of parallel compilation, which showed how such a hole could hamper the benefits of parallelism [12]. The new grammatical formalism of C-OPGs, now fully equivalent to OPAs, MSO logic, and OPEs in the definition of OPLs, can therefore be exploited to revisit the parallel compilation techniques of [12] (a rather unusual case where theory comes _after_ its motivating application) or to improve the efficiency of techniques based on the less powerful OPGs [2].
Other algebraic and logic properties of OPLs, e.g., aperiodicity, star-freeness, first-order definability [17, 15], can be re-investigated in the light of this generalization.
2307.16627 | Can wormholes and black holes be distinguished by magnification? | The magnification effect of wormholes and black holes has been extensively
researched. It is crucial to provide a finite distance analysis to understand
this magnification phenomenon better. In this article, the rotational
Simpson-Visser metric (RSV) is chosen as the focus of research. By calculating
the deflection of light in the RSV metric, we determine the resulting
magnification effect, and then apply the RSV metric to specific examples such as
the Ellis-Bronnikov wormhole, Schwarzschild black hole, and Kerr black hole (or
wormhole) to analyze the magnification. We find that the Ellis-Bronnikov
wormhole has only a single magnification peak, while the Kerr black hole has one to three
magnification peaks. In addition, the article's findings suggest that the
lensing effect of the Central Black Hole of the Milky Way Galaxy exhibits
magnification of multiple peaks. However, it should be noted that these effects
are not observable from Earth. | Ke Gao, Lei-Hua Liu | 2023-07-31T13:05:30Z | http://arxiv.org/abs/2307.16627v2 | # Can wormholes and black holes be distinguished by magnification?
###### Abstract
The magnification effect of wormholes and black holes has been extensively researched. It is crucial to provide a finite-distance analysis to understand this magnification phenomenon better. In our article, the rotational Simpson-Visser metric (RSV) is chosen as the focus of research. By calculating the deflection angle of light in the RSV metric, we determine the resulting magnification effect, and then apply the RSV metric to specific examples such as an Ellis-Bronnikov wormhole, a Schwarzschild black hole, and a Kerr black hole (or wormhole) to analyze the magnification. We find that an Ellis-Bronnikov wormhole has only a single peak of magnification, while a Kerr black hole has one to three magnification peaks. In addition, the article's findings suggest that the lensing effect of the Central Black Hole of the Milky Way Galaxy exhibits multiple peaks of magnification. Our research provides the possibility of distinguishing between wormholes and black holes from a phenomenological perspective.
## I Introduction
The recent development of gravitational wave astronomy has shifted the discussion surrounding black hole mimickers from theory to empirical observations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. In the absence of a definitive quantum gravity theory [11; 12], it is attractive to explore alternative, phenomenologically viable scenarios using simple meta-geometries, which offer a compact and accessible means of investigating such possibilities. Simpson and Visser [1; 9; 13] have put forward a static and spherically symmetric metric known as the SV metric, which provides a comprehensive and nuanced model for black holes and wormholes. This metric smoothly interpolates between the different cases and is described by the line element:
\[\begin{split} ds^{2}&=-\left(1-\frac{2M}{\sqrt{r^{ 2}+l^{2}}}\right)dt^{2}+\left(1-\frac{2M}{\sqrt{r^{2}+l^{2}}}\right)^{-1}dr^{2 }\\ &+\left(r^{2}+l^{2}\right)\left(d\theta^{2}+\sin^{2}\theta d\phi^ {2}\right),\end{split} \tag{1}\]
where \(M\geqslant 0\) represents the ADM mass and \(l\geqslant 0\) is a parameter responsible for regularizing the central singularity (when \(M=0\), \(l\) is the throat of Ellis-Bronnikov wormholes). By employing the Newman-Janis procedure [14], the rotating SV metric (RSV) can be obtained:
\[\begin{split} ds^{2}&=-\left(1-\frac{2M\sqrt{r^{ 2}+l^{2}}}{\Sigma}\right)dt^{2}+\frac{\Sigma}{\Delta}dr^{2}\\ &+\Sigma d\theta^{2}-\frac{4Ma\sin^{2}\theta\sqrt{r^{2}+l^{2}}}{ \Sigma}dtd\phi+\frac{\chi\sin^{2}\theta}{\Sigma}d\phi^{2},\end{split} \tag{2}\]
where \(\Sigma=r^{2}+l^{2}+a^{2}\cos^{2}(\theta)\), \(\Delta=r^{2}+l^{2}+a^{2}-2M\sqrt{r^{2}+l^{2}}\), and \(\chi=(r^{2}+l^{2}+a^{2})^{2}-\Delta a^{2}\sin^{2}\theta\). Here, \(a\) represents the ratio of spin angular momentum \(J\) to mass \(M\), \(a=\frac{J}{M}\). The RSV metric (2) transforms to the SV metric when \(a=0\) and to the Kerr metric when \(l=0\). The values of \(\frac{l}{M}\) and \(\frac{a}{M}\) determine the conversion between black holes and wormholes in the RSV metric, as shown in FIG. 1. These distinctive features of RSV make it a convenient tool for studying the differences between black holes and wormholes.
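The horizon condition \(\Delta=0\) is quadratic in \(x=\sqrt{r^{2}+l^{2}}\), giving \(x=M\pm\sqrt{M^{2}-a^{2}}\) and hence horizons at \(r^{2}=(M\pm\sqrt{M^{2}-a^{2}})^{2}-l^{2}\). The following sketch (ours, not code from the paper) encodes this classification logic in Python, with labels taken from the caption of Fig. 1:

```python
import numpy as np

def rsv_structure(M, a, l):
    """Classify the RSV spacetime of Eq. (2) from the roots of Delta = 0."""
    if a > M:
        return "WoH: traversable wormhole (no horizon)"
    s = np.sqrt(M**2 - a**2)
    r2_out = (M + s)**2 - l**2   # squared radius of the outer horizon
    r2_in = (M - s)**2 - l**2    # squared radius of the inner horizon
    if r2_out < 0:
        return "WoH: traversable wormhole (throat hides both roots)"
    if r2_out == 0:
        return "nWoH: one-way wormhole with a null throat"
    if a == M:
        return "eRBH: extremal regular black hole"
    if r2_in > 0:
        return "RBH-II: outer and inner horizon (per side)"
    if r2_in == 0:
        return "nRBH: one horizon (per side) and a null throat"
    return "RBH-I: one horizon (per side)"

print(rsv_structure(M=1.0, a=0.5, l=0.3))  # -> RBH-I for these sample values
```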
Figure 1: Parameter space and corresponding spacetime structure of RSV. WoH indicates a traversable wormhole; nWoH denotes a null WoH, i.e., a one-way wormhole with a null throat; RBH-I denotes a regular black hole with one horizon (on the \(r>0\) side, plus its mirror image on the \(r<0\) side); RBH-II denotes a regular black hole with an outer and an inner horizon (per side); eRBH stands for an extremal regular black hole (one extremal horizon per side); nRBH denotes a null RBH-I, i.e., a regular black hole with one horizon (per side) and a null throat. (Image referenced from [1].)
Gravitational lensing is a powerful tool for exploring the universe [25; 26; 27; 28]. The presence of a lens alters the spacetime and affects observable physical quantities such as the magnification [28; 29; 30; 31], the event rate [26; 32; 33] and the shadow [34; 35; 36; 37]. By studying these observables, we can probe the lens itself. Before our work, physicists had extensively studied the lensing effect of black holes [30; 31; 38; 39; 40; 41] and wormholes [42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. Their research provides valuable insights that inspire us to explore the possibility of distinguishing wormholes and black holes based on magnification: wormholes and black holes may exhibit different magnification patterns.
In this article, we select the RSV metric as the research object because it can smoothly transform between wormholes and black holes. We calculate the deflection angle of light in the RSV metric and study the resulting magnification effects. We specifically discuss the magnification effects of the Ellis-Bronnikov wormhole, the Schwarzschild black hole, and the Kerr black hole, using dimensionless units to analyze the variation of the magnification with the lens geometry, the wormhole throat radius, the ADM mass, and the spin. At the end of the article, we restore physical units to analyze the magnification of the Central Black Hole of the Milky Way Galaxy.
The structure of this article is as follows: Section I introduces the background spacetime and gravitational lensing. Section II describes some geometric quantities in axisymmetric rotating spacetimes. Section III calculates the deflection angle in the RSV metric. Section IV analyzes the magnification effect of the RSV spacetime. Section V concludes and provides future prospects.
## II Spacetime
In this section, we explore various geometric quantities associated with the metric (2), following the terminology of Refs. [16; 17; 18; 19]. For convenience, let us rewrite the metric (2) as:
\[ds^{2}=-Adt^{2}+Bdr^{2}+Cd\theta^{2}-2Hdtd\phi+Dd\phi^{2}, \tag{3}\]
where \(A\), \(B\), \(C\), \(H\), and \(D\) are read off by matching Eq. (3) to the metric (2). Our primary focus is the motion of photons on the equatorial plane \(\theta=\frac{\pi}{2}\) of the metric (3). Along the path of light, \(ds^{2}=0\), and the spatial length element can be expressed as:
\[d\ell^{2}\equiv\gamma_{ij}dx^{i}dx^{j}=\frac{B}{A}dr^{2}+\frac{C}{A}d\theta^{ 2}+\frac{AD+H^{2}}{A^{2}}d\phi^{2}. \tag{4}\]
Here, \(\gamma_{ij}\) defines a 3-dimensional Riemannian space in which the photon's motion is described as a trajectory along a spatial curve \(\ell\). The closest distance between the photon and the central celestial body in the lens plane is known as the impact parameter \(b\) (this definition comes from the straight-line approximation), which can be expressed as:
\[b\equiv\frac{L}{E}=\frac{-H+D\frac{d\phi}{dt}}{A+H\frac{d\phi}{dt}}. \tag{5}\]
This definition is general for axisymmetric spacetimes. Here \(L\) represents the angular momentum of the photon and \(E\) its energy. In terms of the impact parameter, the orbital equation of the photon can be expressed as:
\[\left(\frac{dr}{d\phi}\right)^{2}=\frac{AD+H^{2}}{B}\frac{D-2Hb-Ab^{2}}{(H+Ab) ^{2}}. \tag{6}\]
This equation can be solved to obtain \(r=F(\phi)\) using perturbative methods (see the appendix of [20]). At zeroth order, we obtain \(r=\frac{b}{\sin\phi}\). In the metric (3), the geodesic curvature is defined as:
\[\kappa=-\frac{1}{\sqrt{\gamma\gamma^{\theta\theta}}}\partial_{r}\left(\frac{H} {A}\right), \tag{7}\]
and the Gaussian curvature as:
\[K=-\sqrt{\frac{A^{3}}{B(AD+H^{2})}}\partial_{r}\left[\frac{1}{2}\sqrt{\frac{A ^{3}}{B(AD+H^{2})}}\partial_{r}\left(\frac{AD+H^{2}}{A^{2}}\right)\right]. \tag{8}\]
With these preparations, we can now investigate the lensing effect of the RSV metric.
## III Deflection angle
Figure 2: The Gauss-Bonnet integration domain. Due to the spin of the central body, the light ray is no longer a geodesic of the optical metric.
Figure 3: The lens plane. \(L\) represents the wormhole or black hole, O is the observer, S is the source, and \(D_{LS}\), \(D_{L}\), and \(D_{S}\) are angular diameter distances. \(b\) is the impact parameter, while the other labeled quantities are angles.
We consider that both the source and the observer are far away from the lens. The deflection angle \(\alpha\) is defined as the difference between the ray directions at the source \(\ell=-\infty\) and the observer \(\ell=\infty\) in the asymptotically flat regions [21],
\[\alpha_{\rm infinite}=e_{S}-e_{O}=-\int_{-\infty}^{\infty}d\ell\frac{de}{d\ell}. \tag{9}\]
We adopt the Gibbons-Werner (GW) method [22; 23] to calculate the deflection angle via the Gauss-Bonnet theorem:
\[\int\int_{D}KdS+\int_{\partial D}\kappa d\ell+\sum_{i}\alpha_{i}=2\pi\chi(D). \tag{10}\]
One can choose the integration domain \(D\) as in Fig. 2, where \(SO\) is the path of the light ray. The Euler characteristic \(\chi\) of the domain \(D\) is one. Then,
\[\int\int_{D}KdS+\int_{\gamma}\kappa d\ell+\int_{OS}\kappa d\ell+\sum_{i}\alpha_ {i}=2\pi. \tag{11}\]
We can let \(\gamma\) intersect the extension line of \(SO\) perpendicularly at infinity, which means
\[\sum_{i}\alpha_{i}=\frac{\pi}{2}(\infty)+\frac{\pi}{2}(\infty)+\rho_{s}+\rho_ {o}=\pi+\rho. \tag{12}\]
Here, \(\rho=\rho_{s}+\rho_{o}\) is the sum of the exterior angles at points O and S. Then, one performs the change of variables
\[\kappa d\ell=\kappa\frac{d\ell}{d\phi}d\phi. \tag{13}\]
Here, \(\phi\) is the angular coordinate centered at \(L\). Since only the endpoints of \(\gamma\) are fixed, we can choose \(\gamma\) such that \(\kappa\frac{d\ell}{d\phi}=1\) along it, which leads to
\[\int\int_{D}KdS+\int_{0}^{\pi-\rho+\alpha_{\rm finite}}d\phi+\int_{OS}\kappa d \ell+\pi+\rho=2\pi. \tag{14}\]
Here, \(\angle(LS,LO)+\rho_{s}+\rho_{o}+(\pi-\alpha_{\rm finite})=2\pi\), and we choose the \(\phi\) coordinate of point O to be zero. Assuming now that both the observer and the light source are far away from the lens center, i.e., \(D_{L}\rightarrow\infty\) and \(D_{LS}\rightarrow\infty\), we have \(\alpha_{\rm finite}\rightarrow\alpha_{\rm infinite}\),
\[\alpha_{\rm infinite}=-\int_{\phi_{O}}^{\phi_{S}}\int_{r}^{\infty}KdS+\int_{ \phi_{O}}^{\phi_{S}}\kappa\psi d\phi, \tag{15}\]
where used \(d\ell=\psi d\phi\). In the case of metric (2), we present the results of each part of the calculation:
\[\frac{1}{r}=\frac{\sin\phi}{b}+\frac{M}{b^{2}}(1+\cos^{2}\phi)+\mathcal{O}(M^ {2},a^{2},l^{2},Ma), \tag{16}\]
\[K=-\frac{2M}{r^{3}}+\frac{3M^{2}}{r^{4}}-\frac{l^{2}}{r^{4}}+\mathcal{O}(l^{3 },a^{3},M^{3},...), \tag{17}\]
\[dS=\left(r+\frac{l^{2}}{2r}+3M+\frac{15M^{2}}{2r}+\mathcal{O}(l^{3},a^{3},M^{ 3},...)\right)drd\phi, \tag{18}\]
\[\kappa=\pm\frac{2aM}{r^{3}}+\mathcal{O}(l^{3},a^{3},M^{3},...), \tag{19}\]
and
\[\psi=b\csc^{2}(\phi)+\mathcal{O}(l,M,a). \tag{20}\]
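As a sanity check on the expansion in Eq. (17), one can evaluate the Gaussian curvature of Eq. (8) symbolically; the sketch below (ours, using SymPy, with \(\epsilon\) a bookkeeping parameter multiplying \(M\), \(a\) and \(l\)) should reproduce the leading behavior \(-2M/r^{3}+(3M^{2}-l^{2})/r^{4}\).

```python
import sympy as sp

r, M, a, l, eps = sp.symbols('r M a l epsilon', positive=True)
Me, ae, le = eps * M, eps * a, eps * l   # weak-field bookkeeping

rho = sp.sqrt(r**2 + le**2)              # sqrt(r^2 + l^2)
Sig = r**2 + le**2                       # Sigma at theta = pi/2
Del = Sig + ae**2 - 2 * Me * rho
chi = (Sig + ae**2)**2 - Del * ae**2

A = 1 - 2 * Me / rho                     # equatorial metric functions of Eq. (3)
B = Sig / Del
H = 2 * Me * ae / rho
D = chi / Sig

# Gaussian curvature of the optical metric, Eq. (8)
pref = sp.sqrt(A**3 / (B * (A * D + H**2)))
K = -pref * sp.diff(sp.Rational(1, 2) * pref * sp.diff((A * D + H**2) / A**2, r), r)

print(sp.simplify(sp.series(K, eps, 0, 3).removeO()))
# expected leading terms: -2*M*eps/r**3 + eps**2*(3*M**2 - l**2)/r**4
```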
Ultimately, the second-order infinite-distance deflection angle in the weak-field approximation is obtained as
\[\alpha_{\rm infinite}=\frac{4M}{b}+\frac{\pi\left(l^{2}+15M^{2}\right)}{4b^{ 2}}\pm\frac{4aM}{b^{2}}, \tag{21}\]
where the sign \(\pm\) depends on the spin direction. When \(M=0\), the deflection angle \(\alpha_{\rm infinite}\) reduces to that of the Ellis-Bronnikov wormhole [24], with \(l\) the wormhole throat. When \(l=0\), it recovers the Kerr black hole result [23]. Referring to Figs. 2 and 3, we obtain the finite-distance deflection angle:
\[\alpha_{\rm finite}=\left[\frac{l^{2}+15M^{2}}{4b^{2}}\phi-\frac{2M}{b}\cos \phi\pm\frac{2aM}{b^{2}}\cos\phi\right]_{\phi=\phi_{O}}^{\phi=\phi_{S}} \tag{22}\]
with
\[\phi\big{|}_{\phi_{O}}^{\phi_{S}}=\pi-\arcsin\frac{b}{D_{LS}}-\arcsin\frac{b} {D_{L}} \tag{23}\]
and
\[\cos\phi\big{|}_{\phi_{O}}^{\phi_{S}}=-\big{(}\sqrt{1-b^{2}/D_{LS}^{2}}+\sqrt{ 1-b^{2}/D_{L}^{2}}\big{)}. \tag{24}\]
These geometric relationships can also be obtained by solving Eq. (6); the angular coordinates \(\phi_{S}\) and \(\phi_{O}\) then correspond to the distances \(D_{LS}\) and \(D_{L}\), respectively. Furthermore, we neglect the terms of \(\mathcal{O}(M)\) in Eqs. (23) and (24). For more details, see [16].
## IV Magnification
Magnification quantifies the degree to which light is distorted and is an observable quantity. We will study the lensing effect from this perspective. Referring to Fig. 3, one obtains the geometric relationship
\[\beta=\theta-\frac{D_{LS}}{D_{S}}\alpha. \tag{25}\]
The lens potential \(\Psi\), which connects the deflection angle and the magnification, is defined as \(\Psi\equiv\frac{2D_{LS}}{D_{L}D_{S}}\int\Phi(\theta D_{L},x)dx\), where \(\Phi\) is the Newtonian potential. The relationship between the lens potential and the deflection angle is as follows:
\[\partial_{\theta}\Psi=D_{L}\partial_{b}\frac{2D_{LS}}{D_{L}D_{S}}\int\Phi(b,x) dx=\alpha. \tag{26}\]
On the other hand, the Jacobian matrix is also related to the lens potential:
\[A_{ij}\equiv\left(\delta_{ij}-\frac{\partial^{2}\Psi}{\partial\theta_{i} \partial\theta_{j}}\right). \tag{27}\]
The magnification is defined as the inverse of the Jacobian determinant and can, in axisymmetric spaces, be expressed as:
\[\mu\equiv\frac{1}{\det A_{ij}}=\left|\frac{\beta}{\theta}\frac{d\beta}{d\theta} \right|^{-1}. \tag{28}\]
Based on Eqs. (28), (22) and (25), one can work out the magnification of light in the metric (2):
\[\mu_{\rm finite}=\left|\frac{D_{L}^{2}}{b}\beta\frac{\partial}{\partial b} \bigg{(}\frac{b}{D_{L}}-\frac{D_{LS}}{D_{S}}\alpha_{\rm finite}\bigg{)}\right| ^{-1}. \tag{29}\]
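To make these curves reproducible, the following sketch (ours) evaluates Eq. (29) numerically from the finite-distance deflection angle of Eqs. (22)-(24); the thin-lens relation \(D_{S}=D_{L}+D_{LS}\) and the positive spin branch are our assumptions.

```python
import numpy as np

def alpha_finite(b, M, l, a, DL, DLS, sign=+1.0):
    """Finite-distance deflection angle, Eqs. (22)-(24)."""
    dphi = np.pi - np.arcsin(b / DLS) - np.arcsin(b / DL)
    dcos = -(np.sqrt(1.0 - (b / DLS)**2) + np.sqrt(1.0 - (b / DL)**2))
    return ((l**2 + 15.0 * M**2) / (4.0 * b**2)) * dphi \
        - (2.0 * M / b) * dcos + sign * (2.0 * a * M / b**2) * dcos

def magnification(b, M, l, a, DL, DLS, h=1e-6):
    """Magnification of Eq. (29), with D_S = D_L + D_LS assumed (thin lens)."""
    DS = DL + DLS
    beta = lambda bb: bb / DL - (DLS / DS) * alpha_finite(bb, M, l, a, DL, DLS)
    dbeta_db = (beta(b + h) - beta(b - h)) / (2.0 * h)   # numerical derivative
    return 1.0 / abs((DL**2 / b) * beta(b) * dbeta_db)

# sample values matching the parameters of Fig. 4: M=0.03, l=0.05, a=0.1, D_L=D_LS=10
for b in (0.5, 1.0, 2.0, 5.0):
    print(b, magnification(b, M=0.03, l=0.05, a=0.1, DL=10.0, DLS=10.0))
```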
Rather than discussing the analytical expression of the magnification, whose characteristic curves are determined by \(\frac{d\mu}{db}\) and \(\frac{d^{2}\mu}{db^{2}}\), we focus on presenting numerical graphs of the magnification effect at finite distances.
We first consider the effect of the lens geometry on the magnification. (Due to the dimensionless nature of the magnification, we adopt a dimensionless specification.) The numerical results presented in Fig. 4 demonstrate that the distance between the source and the lens has minimal influence on the magnification, while the distance between the observer and the lens has a noteworthy impact. As \(D_{L}\) increases, the magnification decreases to one in the interval \(D_{L}\in[55,100]\), and it reduces to zero in the interval \(D_{L}\in[100,\infty)\). In light of these findings, we will continue our discussion using the fixed parameters \(D_{L}=D_{LS}=10\) in subsequent cases.
If the ADM mass is set to zero, the RSV metric becomes an Ellis-Bronnikov wormhole. In this scenario, where \(l\) represents the throat radius of the wormhole, the magnification is depicted in Fig. 5. From the figure, we can observe that the Ellis-Bronnikov wormhole exhibits a single peak of magnification. (To count the peaks, we draw a straight line parallel to the \(b\)-axis across the black bars in the graph; the number of intersection points is the number of magnification peaks.)
When both \(l\) and \(a\) are set to zero, the RSV metric reverts to a Schwarzschild black hole. In Fig. 6, it can be observed that there are three peaks of magnification within the interval \(M\in[0.05,0.10]\). As we decrease the mass to the range \(M\in[0.01,0.05]\), the three peaks coalesce into two. Finally, when the mass is less than \(0.01\), only one peak of magnification remains.
Figure 4: Contour plot of the magnification of RSV. The relevant parameters are fixed as \(M=0.03,\;l=0.05,\;a=0.1,\;b=2\). The black bar in the figure marks the position where a magnification peak appears.
Figure 5: Contour plot of the magnification of the Ellis-Bronnikov wormhole.
Figure 6: Contour plot of the magnification of the Schwarzschild black hole.
In the case of a rotating black hole, or possibly a wormhole, we set \(l\) to zero. It is important to consider the direction of the black hole's spin, determined by the parameter \(a\). We use positive and negative values of \(a\) to represent the two spin directions, as discussed in Fig. 7 and Fig. 8, respectively. Let us first examine the scenario where the spin is positive, as depicted in Fig. 7. In the interval \(a\in[0.00,0.05]\), two and three peaks of magnification can be observed. However, for \(a\in[0.05,0.30]\), only one peak of magnification is present. Next, we consider the case of negative spin, as illustrated in Fig. 8. Within the interval \(a\in[0.00,0.18]\), three peaks of magnification exist. In the interval \(a\in[0.18,0.22]\), there are two peaks of magnification. Finally, for \(a\in[0.22,0.30]\), only one peak of magnification is observed. It can be seen in Figs. 7 and 8 that \(a=0\) corresponds to a Schwarzschild black hole with three peaks, while \(a\) enters at second order, shifting the positions of the peaks and causing two of them to merge and finally disappear.
## V Conclusions and Outlook
In this paper, we studied the gravitational lensing effect of the RSV metric. We obtained the second-order deflection angle of the RSV metric and made finite-distance corrections to it. Our analysis of the magnification shows that the distance between the lens and the observer has a significant impact on the numerical value of the magnification, which also indicates that the finite-distance correction is meaningful. On this basis, the Ellis-Bronnikov wormhole exhibits only a single peak of magnification, while the Schwarzschild black hole exhibits up to three peaks of magnification as the ADM mass increases. Black holes with negative spin return from three peaks to a single peak as the spin magnitude increases, and the same applies to the case of positive spin.
For the galaxy we live in, the mass of the central black hole is about four million solar masses \(\mathrm{M}_{\odot}\). If we assume that its spin is small (the spin only slightly shifts the peak positions) and approximate it as a Schwarzschild black hole, then, scaling our dimensionless results according to \(\frac{4\times 10^{6}\,\mathrm{M}_{\odot}}{0.08}=\frac{D_{L}}{10}\), we can observe the phenomenon of three magnification peaks, see Fig. 9, at a distance \(D_{L}=7.4\times 10^{8}\) km (the Schwarzschild radius is approximately \(7.8\times 10^{6}\) km). The distance from Earth to the center of the Milky Way galaxy is about \(2.4\times 10^{16}\) km, and the magnification effect observed at this distance is equivalent to the small-mass situation in Fig. 6, which converges to a single peak (and is not even observable, according to Fig. 4).
Our research identifies a phenomenological difference in magnification between black holes and wormholes, and it provides a theoretical basis for further research on the magnification of wormholes and black holes.
Figure 8: Kerr black hole with a negative spin, in which \(M=0.05\).
Figure 7: Kerr black hole with a positive spin, in which \(M=0.05\). According to Fig. 1, when \(a>M=0.05\), RSV will transform into a traversable wormhole.
Figure 9: Schematic diagram of the three-peak magnification curve of the Schwarzschild black hole, corresponding to the case of \(M=0.08\) in Fig. 6. Here, the Schwarzschild radius is about \(0.1\) on the \(b\) axis.
2309.17095 | Dynamic Interpretability for Model Comparison via Decision Rules | Explainable AI (XAI) methods have mostly been built to investigate and shed
light on single machine learning models and are not designed to capture and
explain differences between multiple models effectively. This paper addresses
the challenge of understanding and explaining differences between machine
learning models, which is crucial for model selection, monitoring and lifecycle
management in real-world applications. We propose DeltaXplainer, a
model-agnostic method for generating rule-based explanations describing the
differences between two binary classifiers. To assess the effectiveness of
DeltaXplainer, we conduct experiments on synthetic and real-world datasets,
covering various model comparison scenarios involving different types of
concept drift. | Adam Rida, Marie-Jeanne Lesot, Xavier Renard, Christophe Marsala | 2023-09-29T09:42:49Z | http://arxiv.org/abs/2309.17095v1 | # Dynamic Interpretability for Model Comparison via Decision Rules
###### Abstract
Explainable AI (XAI) methods have mostly been built to investigate and shed light on single machine learning models and are not designed to capture and explain differences between multiple models effectively. This paper addresses the challenge of understanding and explaining differences between machine learning models, which is crucial for model selection, monitoring and lifecycle management in real-world applications. We propose DeltaXplainer, a model-agnostic method for generating rule-based explanations describing the differences between two binary classifiers. To assess the effectiveness of DeltaXplainer, we conduct experiments on synthetic and real-world datasets, covering various model comparison scenarios involving different types of concept drift.
Keywords: Machine Learning: Explainability/Interpretable Machine Learning · Dynamic Explainability · AI Ethics, Trust, Fairness: Explainability
## 1 Introduction
Machine learning (ML) models play a crucial role in solving critical real-world problems, such as fraud detection or asset pricing, across various domains. These models are not static objects: the context in which they operate is often subject to change, which can have a direct impact on their performance. Phenomena such as adding new instances gathered over time to the training set, or the presence of concept drift, where the data distribution changes over time [4], can lead to a decrease in prediction quality. To ensure constant or improved performance, a deployed ML model has to be updated in response to the evolving context: as with any product, deployed ML models have a lifecycle, from development to update, through monitoring.
Yet, most of the literature in explainable AI (XAI) focuses on explaining the behavior of individual ML models. Now, it has been shown [12] that ML model updates can disrupt the mental models that users construct to understand the behavior of these ML models, when users experience different outcomes. This disruption can affect users' decision-making and eventually weaken their trust in the ML model. Thus, having _differential explanations_ that describe what has changed between successive versions of an ML model could help users, among
other benefits, to update their mental models. In this paper, we address this problem, which we call _differential explainability_.
To characterize the differences between two ML models, we propose a framework called \(\Delta\)-modelling. The idea consists of a decomposition of the differences between two black-box ML models into a set of interpretable and complementary models, each covering subspaces of the model domain where the two ML models behave differently. We instantiate this framework with a method called DeltaXplainer, which outputs rule-based descriptions of the areas where the prediction changes between the two models. We evaluate DeltaXplainer on several real-world and synthetic datasets to show its ability to capture the models' differences in an interpretable way.
The paper is organized as follows. After a brief review of related works in Section 2, Section 3 specifies the considered problem setting and the general principle of the proposed approach. Section 4 describes its instantiation, DeltaXplainer, in more detail. Section 5 presents the experiments conducted on real-world and synthetic data, to assess the accuracy and interpretability potential of DeltaXplainer's generated explanations.
## 2 Related Works
Training a machine learning model on a given dataset can lead to numerous models with similar performances but differing underlying patterns, a phenomenon called the _Rashomon effect_[17] or model multiplicity [14, 2]. The simultaneous existence of these models presents risks, especially in terms of fairness and robustness, as well as for the transparency and trust users have in the selected model; issues that are not addressed by classical aggregated performance measures [2].
It has been proposed to detect a change in a model statistically [5, 8, 9], but change detection does not give any insight into which subpopulations of the domain have seen a change in model behavior, in particular prediction change. Severity measures for predictive multiplicity have been proposed [14], with the same limitation. Addressing the need for explanations regarding predictive multiplicity or discrepancies among ML models' behaviors has gathered attention in recent research. To overcome the limitations of traditional XAI techniques like LIME [20], Anchor-LIME [21], or SHAP [13], which primarily focus on explaining the behavior of _one_ model, several propositions have emerged. These propositions aim to comprehensively understand and compare differences in ML models' behaviors.
The Discrepancy Interval Generation (DIG) [19] method detects model discrepancies by creating _counterfactual intervals_ to explain where the models of interest, trained on the same dataset, disagree. While this approach provides insights into the differences between models, it does not explain the actual subspace of model discrepancies, nor does it provide a clear identification of the model discrepancies at a global scale. Nair et al. [18] propose a method to compare models by approximating them with interpretable rule sets and grounding to compare explanations. However, this approach relies on user input to generate
explanations that are based on a comparison between the rule-based surrogates built for each of the two models to compare. In contrast, we use rules as our final explanations, built to predict their disagreement.
These methods address the concept of predictive multiplicity or model discrepancies but do not take into account the dynamic nature of the task and the underlying data distribution over time. For instance, they do not consider scenarios where the dataset and data distribution evolve due to concept drift. Recent research proposes to tackle this aspect by observing the evolution of feature importance in models over time [16] or identifying the key features contributing to the observed differences [22].
Recent research has focused on explaining dataset drift [11]. The authors propose training a model to classify instances from the dataset into pre-drift and post-drift periods. By generating explanations for this model, they aim to pinpoint the specific areas where the drift occurred. However, this approach does not specifically address the differences in behaviors between trained models induced by concept drift.
The aim of this paper is to propose a method that generates explanations for the discrepancies in behaviors between two models. The second one is trained on the same task as the first one but using a dataset that has undergone concept drift. Our approach aims to describe and help users understand the specific subspaces or sub-populations where the two models exhibit divergent behaviors. Unlike previous works that primarily focus on feature importance, the DeltaX-plainer method we propose provides insights into the variations of behavior between the models.
## 3 Problem Setting
This section sets the scope of the problem of generating explanations with the aim of describing machine learning model differences in a dynamic setting. It also discusses and motivates the choice of decision rules as explanations.
### \(\boldsymbol{\Delta}\)-modeling and Differential Explainability
We consider two binary black-box classifiers \(f\) and \(g\) with their respective training data sets \((X_{f},y_{f})\) and \((X_{g},y_{g})\). We assume the descriptive features are numerical and both classifiers are defined as maps \(\Omega\to\{0,1\}\), with \(\Omega\subset\mathbb{R}^{d}\), where \(d\) denotes the number of features and \(\Omega\) is known and bounded (i.e., the minimum and maximum possible values are known for each feature).
We describe a dynamic setting as a scenario involving the evolution of either the data (e.g., drift) or the models (e.g., model update). We therefore propose to formalize the differential explanation generation issue as the study of a third model trained to predict where the models disagree, as illustrated in Fig. 1. We define a \(\Delta\)-model as a binary classifier, also defined on \(\Omega\to\{0,1\}\), that predicts, for any \(x\in\Omega\):
\[\begin{cases}1&\text{if }f(x)\neq g(x),\\ 0&\text{otherwise}\end{cases} \tag{1}\]
Such a \(\Delta\)-model can then be used to identify the regions of the input space where the predictions of \(f\) and \(g\) disagree. Note that access to the inner structure of \(f\) and \(g\) is not needed: this makes \(\Delta\)-modelling model-agnostic.
The \(\Delta\)-model then contains information regarding what differs between the two models and, similarly to [11], XAI can be leveraged to extract insights from this model. As such, the main challenges consist of modeling properly the differences and simultaneously being able to generate explanations of these differences, which may raise the question of the interpretability-accuracy trade-off. We propose to train an approximation of the true model differences, as illustrated in the right part of Fig. 1, using a decision rule approach.
### Rules as Differential Explanations
We define a rule-based explanation to describe differences between models as a set of differential rules, \(\left\{r_{i}\right\},i=1,...,p\). This explanation is designed to describe exhaustively the areas of differences between models. Each differential rule \(r\) is a set of conjunctions delimiting a subspace of \(\Omega\):
\[r=\bigwedge_{j=1}^{d}x_{j}\in[a_{j},b_{j}] \tag{2}\]
with \(d\) the total number of features, \(x_{j}\) the \(j\)-th feature, and \([a_{j},b_{j}]\) its associated range of values. Obviously, not all features appear in all rules: for irrelevant features, the interval equals the total feature range, which is assumed to be known. The rule length is defined as the number of relevant features.
Figure 1: \(\Delta\)-modelling principle: training a model to predict the differences. (Left) Perfect model, which is only reachable in theory; (right) proposed approximation through decision rules.
We choose rule-based explanations for characterizing the differences between models due to their intuitive and interpretable nature. Each rule represents a specific region of the input space where the predictions of \(f\) and \(g\) disagree, providing insights into the underlying factors causing the disagreement. In comparison, global XAI methods or feature importance-based explanations are not designed to capture specific regions of a feature space and usually do not provide information on feature interactions.
Still, most rule extraction methods, such as deriving rules from a large decision tree or using RuleFit [7], do not guarantee the interpretability of the rule set. Indeed, they may generate a high number of long rules, contradicting the interpretability criteria of sparsity and simplicity established, e.g., in [15]. They may also generate overlapping rules with partial redundancies, as is for instance the case for Sequential Covering [6] or rules derived from random forests, again harming interpretability and requiring potentially expensive post-processing. On the other hand, oversimplifying rules may degrade their predictive performance and hence provide misleading explanations.
To quantify the appropriateness of a set of rules \(\mathcal{R}=\left\{r_{i}\right\},i=1,...,p\), we propose to use the following criteria as proxies for interpretability:
* Minimize the number of rules, denoted \(\#\mathbf{r}\)
* Minimize the length of each rule, denoted \(\#\mathbf{l}\)
* Maximize the coverage of each rule, denoted \(\mathbf{cov}\)
* Minimize the overlap between rules
The number of rules corresponds to the parameter \(p\) of \(\mathcal{R}\). The coverage of a rule is defined as the proportion of data points that it covers, i.e., that trigger it, relative to the total number of data points; it corresponds to the rule's support. The length and coverage criteria are defined for individual rules and are averaged across the rule set. Other aggregation operators may be considered, depending on the user's preferences: a user may be interested in rules with similar coverage or, on the contrary, in rules with a high variance of coverage. Note that these four criteria can be correlated: in cases without overlap between rules, increasing coverage mechanically reduces the number of rules.
Additionally, we use the following criteria as proxies for the fidelity of the rule set w.r.t. the models whose difference must be explained, i.e., w.r.t. the classification task defined in Eq. (1):
* Maximize the accuracy
* Maximize the precision of the disagreement class
* Maximize the recall of the disagreement class
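The sketch below (ours, not the authors' code) shows how these proxies can be computed, assuming rules are represented as feature-to-interval dictionaries and `covers` is a boolean sample-by-rule incidence matrix:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def rule_set_report(rules, covers, y_true):
    """Interpretability and fidelity proxies for a rule set.

    `rules` is a list of {feature: (lo, hi)} dicts; `covers[i, k]` is True when
    sample i triggers rule k; the set predicts 1 whenever any rule fires.
    """
    y_pred = covers.any(axis=1).astype(int)
    return {
        "#r": len(rules),
        "mean #l": float(np.mean([len(r) for r in rules])) if rules else 0.0,
        "mean cov": float(covers.mean(axis=0).mean()) if rules else 0.0,
        "acc": accuracy_score(y_true, y_pred),
        "prec": precision_score(y_true, y_pred, zero_division=0),  # class 1 = disagreement
        "rec": recall_score(y_true, y_pred, zero_division=0),
    }
```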
## 4 Proposed Method: DeltaXplainer
This section describes DeltaXplainer, the method we propose to generate interpretable rule-based explanations to describe differences between models. As illustrated in Fig. 2, DeltaXplainer is a 4-step procedure taking as input \(f\), \(g\) and their training data \((X_{f},y_{f})\) and \((X_{g},y_{g})\).
**Steps 1 and 2: Constructing the training data (\(X_{\Delta},y_{\Delta}\))** DeltaXplainer builds a surrogate model trained on samples labelled according to the agreement of \(f\) and \(g\). As we consider a dynamic setting, we assume that the areas of interest lie within the distribution of the data. Therefore, in Step 1, the training set is defined as \(X_{\Delta}=X_{f}\cup X_{g}\). In Step 2, the labels \(y_{\Delta}\) for \(X_{\Delta}\) are set using \(f\) and \(g\) according to Eq. (1).
#### Step 3: Fitting a Decision Tree
We propose to consider a decision tree as the surrogate model, due to the possibility of having some control over its interpretability: its hyperparameters have a significant impact on its structure, which steers the interpretability of the explanations that can then be extracted from it. For example, increasing the minimum number of samples per leaf leads to areas with higher coverage, and restraining the maximum depth generates fewer branches and hence fewer rules. Another advantage of using a binary decision tree is that it prevents rule overlap, as a sample cannot be covered by multiple rules.
In order to ease the choice of a good set of hyperparameters to reach an interpretability-accuracy trade-off acceptable for the user, and to avoid over-constraining the model in a way that would limit its performance, we propose to restrict the model parametrization to the minimum number of samples per leaf, without constraining the maximum depth of the tree. As there is no overlap in a tree, both parameters are correlated, preventing uncontrolled growth of the tree.
#### Step 4: Rule Extraction
The final step consists of translating the tree into a set of rules, associating one rule to each branch, and keeping only the ones predicting class 1 (the disagreement class): each of them characterizes an area where \(f\) and \(g\) differ in prediction. In addition, the successive tests performed in each tree node are rewritten to improve interpretability: they are grouped depending on the feature they apply to, to make explicit the value interval they define. For instance, this post-processing step transforms "if (\(A>10\)) and (\(A\geq 30\)) and (\(A\leq 50\))" to "if \(A\in[30,50]\)". This refinement chooses to omit the hierarchical structure of the rule set and to favor a feature-based view, decreasing the length of the rule expression by decreasing the number of conjunctions it contains.
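A minimal sketch of the four steps follows (our own Python/scikit-learn implementation, not the authors' released code); `bounds` maps each feature index to its known range over \(\Omega\):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def delta_labels(f, g, X):
    """Steps 1-2: label 1 where the two black boxes disagree (Eq. 1)."""
    return (f.predict(X) != g.predict(X)).astype(int)

def fit_delta_tree(X_delta, y_delta, min_samples_leaf=0.01):
    """Step 3: surrogate tree; the leaf support is the single tuned knob."""
    return DecisionTreeClassifier(min_samples_leaf=min_samples_leaf).fit(X_delta, y_delta)

def extract_rules(tree, feature_names, bounds):
    """Step 4: one rule per leaf of class 1, node tests merged per feature."""
    t, rules = tree.tree_, []

    def recurse(node, itv):
        if t.children_left[node] == -1:             # leaf node
            if np.argmax(t.value[node][0]) == 1:    # disagreement class
                rules.append(itv)
            return
        j, thr = t.feature[node], float(t.threshold[node])
        lo, hi = itv.get(j, bounds[j])
        recurse(t.children_left[node], {**itv, j: (lo, min(hi, thr))})
        recurse(t.children_right[node], {**itv, j: (max(lo, thr), hi)})

    recurse(0, {})
    return [" and ".join(f"({feature_names[j]} in [{lo:.2f}, {hi:.2f}])"
                         for j, (lo, hi) in sorted(r.items())) for r in rules]

# usage sketch: X_delta = np.vstack([X_f, X_g]); y_delta = delta_labels(f, g, X_delta)
# bounds = {j: (X_delta[:, j].min(), X_delta[:, j].max()) for j in range(X_delta.shape[1])}
```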
Figure 2: Four steps of the DeltaXplainer algorithm
## 5 Experiments
This section describes the experiments conducted to assess DeltaXplainer performance, in particular its ability to (1) capture the differences in prediction between models and (2) provide explanations of the areas with such differences. The experiments are performed in dynamic settings, considering different forms of data drifts, on synthetic and real datasets. After a description of the experimental protocol, the results are discussed in turn qualitatively and quantitatively.
### Experimental Protocol
#### Datasets & Model Type
The model retraining scenarios we consider are built upon datasets from the concept drift literature: AGRAWAL [1] as synthetic data, COVER TYPE [3] and ELEC2 [10] as real-world data.
We consider these datasets as a stream of data following their time indexation or, for synthetic data, the order in which they were generated. We then sub-sample a sub-stream of 10000 samples. The data, restricted to its numerical features, is normalized before applying the following modifications, and the inverse standardization is applied at the end of the overall process to generate explanations in the original domain. Then, we apply the protocol described in [11], which consists in splitting the sub-stream at a random point between 1/3 and 2/3 of the data set. A perturbation is applied to the second split to simulate a drift, as described in the next paragraph.
The first split is denoted \(X_{f}\) and the union of the first and second splits \(X_{g}\). Model \(g\) can then be seen as a retrained version of model \(f\). Models \(f\) and \(g\) are instantiated with random forests of 100 estimators. We split \(X_{f}\cup X_{g}\) into a train and test set (70%/30%).
#### Perturbations and Model Comparison Scenarios
We assess DeltaXplainer in three dynamic contexts elaborated from [11] by changing the type of perturbation applied to the second split of the data: Gaussian noise, permutation and shift respectively denoted S1, S2 and S3.
**S1** introduces small and subtle changes between models by adding Gaussian noise to a random number of features. It simulates a context where models are exposed to minor perturbations such as noise in the measurements or sampling. **S2** permutes feature values, simulating a situation where models differ sparsely, with differences challenging to capture. **S3** adds a fixed shift (adding 2 to the feature values in our case) to a random number of features, ranging from 2 to the maximum number of features. It simulates a context where models are exposed to a significant distribution change, with an abrupt drift.
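A compact sketch of how such scenarios can be generated follows (our own code; the split point, the number of perturbed features, and the S3 shift of \(+2\) follow the text, while the Gaussian noise scale for S1 is an illustrative choice):

```python
import numpy as np

def make_scenario(X, scenario, rng):
    """Split a normalized stream and perturb the second part (S1/S2/S3).

    The split point is uniform between 1/3 and 2/3 of the stream; the number of
    perturbed features ranges from 2 to all of them.
    """
    n, d = X.shape
    cut = int(rng.integers(n // 3, 2 * n // 3))
    X_f, X2 = X[:cut].copy(), X[cut:].copy()
    cols = rng.choice(d, size=int(rng.integers(2, d + 1)), replace=False)
    if scenario == "S1":                      # small, subtle changes
        X2[:, cols] += rng.normal(0.0, 0.1, size=(len(X2), len(cols)))
    elif scenario == "S2":                    # sparse, hard-to-capture changes
        for j in cols:
            X2[:, j] = rng.permutation(X2[:, j])
    elif scenario == "S3":                    # abrupt distribution shift
        X2[:, cols] += 2.0
    return X_f, np.vstack([X_f, X2])          # X_f, and X_g = first split U perturbed split

rng = np.random.default_rng(0)
X_f, X_g = make_scenario(rng.normal(size=(10000, 6)), "S3", rng)
```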
#### DeltaXplainer Configuration
The \(\Delta\)-model is trained on the training set built from \(X_{f}\cup X_{g}\). As a baseline to assess its fidelity to the differences in prediction between \(f\) and \(g\), the decision tree is allowed to grow without constraints. We vary the considered tree hyperparameter, the minimum number of samples
per leaf, from its smallest possible value (a single sample) to various proportions of the total number of samples, considering the ratios 0.1%, 1%, 2.5%, 10%, and 25%. This ensures that no rule has a coverage lower than this hyperparameter, which allows us to directly control one of the interpretability measures.
### Qualitative Results
We illustrate the output of DeltaXplainer with explanations generated on the AGRAWAL dataset under scenario S3 with a 1% minimum sample per leaf. This synthetic dataset describes a loan approval use-case where the features are the applicant's characteristics. The shift perturbation is here applied to the age, salary and education level.
The explanation of the differences in prediction between models \(f\) and \(g\), i.e., before and after the drift, consists of the following 4 rules:
* **R1:** (\(\mathrm{salary}>170824.88\)) and (\(\mathrm{education\ level}>9.17\)) and (\(\mathrm{age}\leq 129.62\)) (199 samples, coverage 2%)
* **R2:** (\(\mathrm{salary}>170824.88\)) and (\(\mathrm{education\ level}\leq 6.17\)) and (\(\mathrm{age}>109.62\)) (182 samples, coverage 1.8%)
* **R3:** (\(\mathrm{salary}>170824.88\)) and (\(\mathrm{education\ level}\in[7.17,9.17]\)) and (\(\mathrm{age}\leq 109.62\)) (158 samples, coverage 1.6%)
* **R4:** (\(\mathrm{salary}>170824.88\)) and (\(\mathrm{education\ level}\in[6.17,7.17]\)) and (\(\mathrm{age}>129.62\)) (101 samples, coverage 1%)
These 4 rules describe where the model predictions differ and the support of each rule (how many samples are affected). Each of them can also be used as a local explanation, for instance, to explain why an instance receives different predictions across models. In practice, this explanation can be used to investigate the sources of the differences in prediction. We observe that the differences occur for extreme values (as the shift on the age, education level and salary is abrupt), showing that the differences between the two models indeed lie in the areas where the drift happened. The explanation can also be used to investigate whether the difference in performance between the two models is caused by the drift.
### Quantitative Results
For the quantitative assessment, we consider the two categories of metrics introduced in Section 3.2: regarding the fidelity of the \(\Delta\)-model in capturing the differences between \(f\) and \(g\), we consider the recall, accuracy and precision of the global \(\Delta\)-model on the test set. Note that, depending on the scenario, the two classes (agreement/disagreement) may be unbalanced: for S1 and S2, as the changes are small, only a few samples are actually different. For this reason, accuracy is mainly relevant for S3 (balanced data set).
Regarding the interpretability of the generated set of rules, we consider the number of rules, their average length and their average coverage. Because we use these metrics as a proxy for interpretability, we are interested in measuring them across all of \(\Omega\). This is particularly true for the coverage: we want to
approximate the area covered by each rule. Therefore, for these metrics, we use the whole data set, training and testing.
All measures are computed only in the case where at least one rule characterizing the disagreement area can be extracted, depending on the value set for the hyperparameter. Table 1 provides the average and standard deviation over 10 runs for the ELEC2 data set; the appendix shows the results for the other data sets, which globally lead to the same analyses and conclusions.
It shows that coverage is low. We study in this experiment how it evolves and correlates with the minimum-sample hyperparameter. The idea is to observe whether increasing the minimum number of samples per leaf indeed increases interpretability (through our proxy metrics). We also want to observe how big the drop in performance is.
#### Fidelity
With the baseline instantiation (minimum samples per leaf = 1), we observe that the performance of DeltaXplainer in capturing differences varies significantly across the 3 considered scenarios. Changes induced by S1 and S2 are the most difficult to capture, which is expected, as these two scenarios trigger sparse and subtle distortions of the decision boundary: the areas of differences are small and difficult to capture. Even if accuracies on S1 are better than on S2, models on S1 struggle significantly more to capture all the correct differences (lower recall). On the other hand, the changes due to an abrupt shift in the data (S3) are almost perfectly captured. This behavior is also expected, as an abrupt shift triggers a much easier-to-apprehend change: clear modifications of the decision boundary and affected samples that are grouped together.
We also observe that increased interpretability constraints on DeltaXplainer (higher support per leaf) decrease almost all performance metrics in the three scenarios. For values higher than 10%, we observe a significant drop of performance across the scenarios.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Scenario & Min samples & Acc & Prec & Rec & \#r & Mean \#1 & Mean **cov** \\ \hline \multirow{8}{*}{S1} & 1 sample & 0.93 \(\pm\) 0.03 & 0.39 \(\pm\) 0.14 & 0.39 \(\pm\) 0.07 & 61.5 \(\pm\) 10.32 & 7.64 \(\pm\) 0.36 & 0.1\% \(\pm\) 0.05 \\ & 0.1 \% & 0.94 \(\pm\) 0.01 & 0.44 \(\pm\) 0.1 & **0.4**\(\pm\) 0.08 & 31.9 \(\pm\) 6.1 & 6.91 \(\pm\) 0.4 & 0.15\% \(\pm\) 0.02 \\ & 1 \% & **0.95**\(\pm\) 0.01 & **0.52**\(\pm\) 0.12 & 0.25 \(\pm\) 0.15 & 2.0 \(\pm\) 0.82 & 4.55 \(\pm\) 0.73 & 1.2\% \(\pm\) 0.26 \\ & 2.5 \% & 0.38 \(\pm\) 0.49 & 0.21 \(\pm\) 0.27 & 0.16 \(\pm\) 0.22 & 0.5 \(\pm\) 0.71 & 1.2 \(\pm\) 1.62 & **1.4\%**\(\pm\) 1.87 \\ & 10 \% & - & - & - & 0.0 & - & - \\ & 25 \% & - & - & - & 0 & - & - \\ \hline \multirow{8}{*}{S2} & 1 sample & 0.59 \(\pm\) 0.02 & 0.55 \(\pm\) 0.03 & 0.51 \(\pm\) 0.03 & 549.6 \(\pm\) 57.91 & 8.69 \(\pm\) 0.85 & 0.08\% \(\pm\) 7e-03 \\ & 0.1 \% & 0.6 \(\pm\) 0.02 & 0.55 \(\pm\) 0.02 & **0.54**\(\pm\) 0.02 & 302.6 \(\pm\) 28.3 & 7.95 \(\pm\) 0.71 & 0.15\% \(\pm\) 0.01 \\ & 1 \% & 0.6 \(\pm\) 0.03 & 0.57 \(\pm\) 0.04 & 0.51 \(\pm\) 0.05 & 30.3 \(\pm\) 5.46 & 5.46 \(\pm\) 0.32 & 1.36\% \(\pm\) 0.18 \\ & 2.5 \% & 0.62 \(\pm\) 0.03 & 0.59 \(\pm\) 0.03 & 0.51 \(\pm\) 0.09 & 12.1 \(\pm\) 2.88 & 4.19 \(\pm\) 0.27 & 3.29\% \(\pm\) 0.27 \\ & 10 \% & **0.62**\(\pm\) 0.03 & **0.6**\(\pm\) 0.03 & 0.47 \(\pm\) 0.08 & 2.6 \(\pm\) 0.52 & 2.23 \(\pm\) 0.33 & 13.82\% \(\pm\) 1.6 \\ & 25 \% & 0.6 \(\pm\) 0.03 & 0.59 \(\pm\) 0.05 & 0.45 \(\pm\) 0.11 & 1.2 \(\pm\) 0.42 & **1.1**\(\pm\) 0.21 & **29.83\%**\(\pm\) 4.97 \\ \hline \multirow{8}{*}{S3} & 1 sample & 0.97 \(\pm\) 0.01 & **0.86**\(\pm\) 0.05 & 0.84 \(\pm\) 0.05 & 45.0 \(\pm\) 7.15 & 6.88 \(\pm\) 0.55 & 0.25\% \(\pm\) 0.039 \\ & 0.1 \% & **0.97**\(\pm\) 0.01 & 0.86 \(\pm\) 0.04 & 0.85 \(\pm\) 0.05 & 34.2 \(\pm\) 6.97 & 6.17 \(\pm\) 0.73 & 0.33\% \(\pm\) 0.06 \\ \cline{1-1} & 1 \% & 0.96 \(\pm\) 0.01 & 0.86 \(\pm\) 0.05 & 0.82 \(\pm\) 0.06 & 7.5 \(\pm\) 1.96 & 4.41 \(\pm\) 0.7 & 1.5\% \(\pm\) 0.303 \\ \cline{1-1} & 2.5 \% & 0.96 \(\pm\) 0.01 & 0.84 \(\pm\) 0.07 & 0.81 \(\pm\) 0.09 & 3.4 \(\pm\) 0.97 & 3.51 \(\pm\) 0.64 & 3.33\% \(\pm\) 0.424 \\ \cline{1-1} & 10 \% & 0.83 \(\pm\) 0.29 & 0.56 \(\pm\) 0.21 & **0.88**\(\pm\) 0.31 & 0.9 \(\pm\) 0.32 & 1.0 \(\pm\) 0.47 & **16.68%**\(\pm\) 6.541 \\ \cline{1-1} & 25 \% & - & - & - & 0.0 & - & - \\ \hline \end{tabular}
\end{table}
Table 1: Experimental results for ELEC2, avg and std over 10 runs
drop of performances across the scenarios. The only exception is the recall for S3 with 10% minimum samples. In cases where no rule has been found, we don't compute evaluation metrics. This can be explained by the fact that precision decreases. This means that the rules cover more samples, increasing the likelihood of capturing points with disagreeing predictions, hence the good recall.
#### 5.5.2 Interpretability
As desired, controlling the minimum sample per leaf allows for a controlled interpretability of the rules. S3 (abrupt shift) obtains overall the best fidelity but is also the most interpretable according to the considered metrics: it has on average the shortest rules and the lowest number of rules generated, It also has a higher coverage. As already mentioned, the differences in prediction between \(f\) and \(g\) induced by the abrupt shift of S3 are grouped in the feature space, which is easier to localize and capture for the DeltaXplainer.
S2 (permutation) leads to the worst interpretability performance. The cause probably lies in the random permutations that generate a lot of local changes that are hard to model: many specific rules are required to capture them.
Interestingly, the number of rules generated with the noise scenario S1 reaches 0 when the minimum sample per leaf is above 10%, which happens faster than for the two other scenarios. This can probably be explained by small differences harder to capture for DeltaXplainer. In contrast, permutation (S2) is the only scenario that still generates a rule when this parameter is set to 25%.
#### 5.5.3 Conclusion
We observe that DeltaXplainer performs differently across the various scenarios S1, S2 and S3. In situations involving simple drifts, such as abrupt data shifts, DeltaXplainer generates accurate and interpretable explanations. In more complex scenarios, such as small and sparse drifts, additional efforts are necessary to achieve satisfactory explanations, with hyperparameters favoring fidelity at the expense of lower interpretability.
## 6 Conclusion and Future Works
In this paper, we introduced and tested a model-agnostic method for generating human-interpretable explanations of model differences in binary classification problems with tabular data in a dynamic setting involving the evolution of the data (e.g., drift). We proposed a new framework for studying differences between models, called \(\Delta\)-modelling. We instantiated this framework with DeltaXplainer, an algorithm to generate refined rule-based explanations for finding disagreement areas. The experiments showed the effectiveness of DeltaXplainer to capture these areas and the influence of the single user-defined parameter on the interpretability-fidelity trade-off. We illustrated with examples the explanations generated with DeltaXplainer.
Future works include conducting human-in-the-loop evaluations of generated explanation interpretability, experimenting with other dynamic settings, and using the proposed model at a local scale to explain specific instance prediction changes. |
2309.06826 | Circuit QED with a Giant Atom Coupling to Left-handed Superlattice
Metamaterials | Giant atoms, where the dipole approximation ceases to be valid, allow us to
observe unconventional quantum optical phenomena arising from interference and
time-delay effects. Most previous studies consider giant atoms coupling to
conventional materials with right-handed dispersion. In this study, we first
investigate the quantum dynamics of a giant atom interacting with left-handed
superlattice metamaterials. Different from those right-handed counterparts, the
left-handed superlattices exhibit an asymmetric band gap generated by anomalous
dispersive bands and Bragg scattering bands. First, by assuming that the giant
atom is in resonance with the continuous dispersive energy band, spontaneous
emission will undergo periodic enhancement or suppression due to the
interference effect. At the resonant position, there is a significant
discrepancy in the spontaneous decay rates between the upper and lower bands,
which arises from the differences in group velocity. Second, we explore the
non-Markovian dynamics of the giant atom by considering the frequency of the
emitter outside the energy band, where bound states will be induced by the
interference between two coupling points. By employing both analytical and
numerical methods, we demonstrate that the steady atomic population will be
periodically modulated, driven by variations in the size of the giant atom. The
presence of asymmetric band edges leads to diverse interference dynamics.
Finally, we consider the case of two identical emitters coupling to the
waveguide and find that the energy within the two emitters undergoes exchange
through the mechanism Rabi oscillations. | Zhao-Min Gao, Jia-Qi Li, Zi-Wen Li, Wen-Xiao Liu, Xin Wang | 2023-09-13T09:16:40Z | http://arxiv.org/abs/2309.06826v2 | # Circuit QED with a Giant Atom Coupling to Left-handed Superlattice Metamaterials
###### Abstract
Giant atoms, where the dipole approximation ceases to be valid, allow us to observe unconventional quantum optical phenomena arising from interference and time-delay effects. Most previous studies consider giant atoms coupling to conventional materials with right-handed dispersion. In this study, we first investigate the quantum dynamics of a giant atom interacting with left-handed superlattice metamaterials. Different from their right-handed counterparts, the left-handed superlattices exhibit an asymmetric band gap generated by anomalous dispersive bands and Bragg scattering bands. First, by assuming that the giant atom is in resonance with the continuous dispersive energy band, spontaneous emission will undergo periodic enhancement or suppression due to the interference effect. At the resonant position, there is a significant discrepancy in the spontaneous decay rates between the upper and lower bands, which arises from the differences in group velocity. Second, we explore the non-Markovian dynamics of the giant atom by considering the frequency of the emitter outside the energy band, where bound states will be induced by the interference between two coupling points. By employing both analytical and numerical methods, we demonstrate that the steady atomic population will be periodically modulated, driven by variations in the size of the giant atom. The presence of asymmetric band edges leads to diverse interference dynamics. Finally, we consider the case of two identical emitters coupling to the waveguide and find that the energy within the two emitters undergoes exchange through the mechanism of Rabi oscillations.
## I Introduction
In recent years, there has been considerable research interest in the study of giant atoms due to their ability to produce peculiar phenomena in quantum optics. Unlike small atoms, which are typically treated as point-like particles, giant atoms have sizes much larger than or comparable to the wavelength of the propagating field, indicating that the dipole approximation is not valid [1, 2, 3, 4, 5, 6, 7, 8]. Under these conditions, it becomes essential to consider the phase accumulation between different coupling points [9, 10, 11], which leads to a variety of intriguing phenomena, such as frequency-dependent couplings [12, 13, 14, 15], decoherence-free interactions [16, 17, 18, 19], unconventional bound states [20, 21, 22, 23, 24, 25, 26] and chiral quantum optics [27, 28, 29, 30, 31]. In experimental setups, giant atoms are typically realized in circuit quantum electrodynamics (circuit-QED) platforms [32, 33, 34, 35, 36, 37, 38].
The interaction between giant atoms and conventional waveguides has been extensively explored in previous studies (e.g., see [39, 40, 41, 42, 43, 44, 45, 46, 47]). In addition to conventional waveguides and cavities, microwave photons can also exist in artificial environments. An emblematic example is circuit-QED metamaterials, where the dispersion properties and vacuum eigenmodes can be freely tailored in experiments. The structured spectra and asymmetric band gaps can be realized in such metamaterials, providing an intriguing platform for exploring QED phenomena with no analog in traditional circuit-QED setups [48, 49, 50, 51, 52]. For instance, by spatiotemporally modulating the effective impedance, a superconducting quantum interference device metamaterial can be designed as a chiral quantum waveguide [53]. When combined with transmission lines, it can achieve multimode strong coupling in circuit QED [54]. The left-handed superlattice metamaterial (LHSM) in circuit QED possesses a negative index of refraction [54, 55, 56, 57, 58]. In LHSM, the capacitance and inductance are interchanged when compared to right-handed materials [59, 60, 61]. When the impedance of the LHSM is modulated periodically, there will be an asymmetric band gap generated by an anomalous dispersion band and a Bragg scattering band. These unique spectral features may allow us to observe unusual dynamics of giant emitters [62, 63, 64].
In this paper, we find several intriguing phenomena in the circuit QED system composed of giant atoms and the LHSM. Firstly, we derive the dispersion relation of the LHSM and explain the mechanism behind the band gap generated by the left-handed dispersion band and a band caused by Bragg scattering. By considering a transmon coupled to the proposed LHSM waveguide, we derive the Hamiltonian of the system. Assuming that the emitter is resonant with the upper (lower) band, spontaneous emission is enhanced and suppressed periodically due to the interference effect. Given that the emitter's frequency is outside the continuous dispersion band, an atom-photon bound state forms at each coupling point. Due to the asymmetric band edges, the interference dynamics inside the two continuous dispersion energy bands exhibit significant differences, with the atomic steady population in the upper band
being much larger than that in the lower band. Lastly, we consider the case of two giant atoms and explore how the dipole-dipole interaction can be modulated by the interference effect.
## II Left-handed superlattice metamaterial
The model is depicted in Fig. 1, where a giant atom couples to the LHSM, which can be regarded as a one-dimensional waveguide. The LHSM consists of two alternating left-handed inductor-capacitor (LC) cells, each formed by series capacitors and grounded inductors [56]. The ratio of capacitance (inductance) between neighboring cells is denoted as \(\epsilon\). The length of one LC cell is denoted as \(\Delta x\). We consider two adjacent LC cells as a superlattice unit with a length of \(\Delta X=2\Delta x\). The Lagrangian of the LHSM is [54; 65]
\[\mathcal{L} = \frac{1}{2}\sum_{n}\Big{[}C(\dot{\Phi}_{n}-\dot{\Phi}_{n-1})^{2}+ \epsilon C(\dot{\Phi}_{n}-\dot{\Phi}_{n+1})^{2}\Big{]} \tag{1}\] \[- \frac{1}{2}\sum_{n}\bigg{[}\frac{1}{\epsilon L}\Phi_{n}^{2}+ \frac{1}{L}\Phi_{n-1}^{2}\bigg{]},\]
where \(C\) (\(L\)) represents the capacitance (inductance) of the LHSM. Assuming that the field takes the form of a plane wave, denoted as \(\Phi_{n}=e^{i(kn\Delta x-\omega t)}\), we obtain the dispersion relation of the LHSM by deriving the Euler-Lagrange equation (see details in Appendix A)
\[\omega_{\pm}=\frac{\omega_{r}}{\sqrt{\frac{\left(1+\epsilon\right)^{2}}{2} \pm\sqrt{\frac{\left(1+\epsilon\right)^{4}}{4}+\epsilon^{2}\left[2\cos\left(k \Delta X\right)-2\right]}}}, \tag{2}\]
where the resonance frequency of an individual LC cell is denoted as \(\omega_{r}=1/\sqrt{CL}\), with \(k\) being the wave vector. In our study, we set \(C=2.5\times 10^{-11}\)F and \(L=2\times 10^{-10}\)H. Under these conditions, we plot the dispersion relation \(\omega_{\pm}(k)\) as a function of \(k\), as shown in Fig. 2(a), while taking \(\epsilon=1.4\). We find that for \(k=0\), the upper band exhibits divergence, while the lower band converges toward the infrared cutoff frequency. As \(k\) increases, the frequency \(\omega_{+}(k)\) gradually decreases to a finite value, corresponding to the left-handed characteristic inherent in this model. Simultaneously, due to the Bragg scattering, \(\omega_{-}(k)\) increases to a finite value. The resulting band gap, \([\omega_{-}(\pm\pi),\omega_{+}(\pm\pi)]\), displays asymmetry arising from distinct underlying mechanisms.

Figure 1: The sketch of a superconducting giant atom coupled to the left-handed superlattice metamaterial. The superlattice cell is composed of two substructures with differing capacitance \(C\) (\(\epsilon C\)) and inductance \(L\) (\(\epsilon L\)), represented by cell a (b).

Figure 2: (a) Dispersion relations for the two energy bands of the left-handed superlattice metamaterial with \(\epsilon=1.4\). (b) The width of the lower band, \(W_{-}\), and the band gap \(\Delta_{G}\), as functions of the superlattice parameter \(\epsilon\). Parameters of the system are \(C=2.5\times 10^{-11}\)F and \(L=2\times 10^{-10}\)H.
In Fig. 2(b), we plot the relationship between the superlattice parameter \(\epsilon\) and two important quantities: the band gap width \(\Delta_{G}\) and the width of the lower band \(W_{-}\), i.e.,
\[\Delta_{G}=\omega_{+}(\pm\pi)-\omega_{-}(\pm\pi),\quad W_{-}=\omega_{-}(\pm\pi) -\omega_{-}(0). \tag{3}\]
The LHSM is constructed from two periodic substructures with distinct refractive indices. Within the system, the band gap arises as a result of destructive interference in Bragg scattering occurring at the interface of cells a and b. Specifically, when \(\epsilon=1\), all cells have the same index of refraction, rendering the LHSM isotropic. Therefore, the band gap disappears due to the lack of Bragg scattering, resulting in a band gap width of zero. This phenomenon has been previously investigated in Ref. [46]. When \(\epsilon\) deviates from \(\epsilon=1\), the difference in the refractive indices between neighboring cells increases. This amplifies the strength of Bragg scattering at cell boundaries. In this work, for the sake of generality, we take the superlattice parameter \(\epsilon=1.4\).
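To make the band structure concrete, the following minimal numerical sketch (not the authors' code) evaluates Eq. (2) and the band quantities of Eq. (3) for the stated parameters; the assignment of the upper band to the lower sign under the square root follows from its divergence at \(k=0\) described above:

```python
import numpy as np

# Dispersion relation of Eq. (2) and band quantities of Eq. (3).
C, L, eps = 2.5e-11, 2.0e-10, 1.4          # F, H, superlattice parameter
w_r = 1.0 / np.sqrt(C * L)                 # single-cell resonance frequency

def omega(k, band):
    """omega_pm(k); k is measured in units of 1/Delta_X."""
    inner = np.sqrt((1 + eps)**4 / 4 + eps**2 * (2*np.cos(k) - 2))
    sign = -1.0 if band == 'upper' else +1.0   # upper band diverges at k = 0
    return w_r / np.sqrt((1 + eps)**2 / 2 + sign * inner)

gap  = omega(np.pi, 'upper') - omega(np.pi, 'lower')   # Delta_G of Eq. (3)
W_lo = omega(np.pi, 'lower') - omega(0.0, 'lower')     # W_-     of Eq. (3)
print(f"Delta_G = {gap:.3e} rad/s, W_- = {W_lo:.3e} rad/s")
```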
## III Giant atom interacting with LHSM
As shown in Fig. 1, the giant atom interacts with the LHSM at two distinct points through capacitances [15; 57; 22; 25]. The giant atom takes the form of, for example, a transmon qubit consisting of two identical Josephson junctions. The Hamiltonian of the transmon qubit is expressed in terms of the charge operator \(\hat{n}\) and the phase operator \(\hat{\Phi}\) [66; 67; 68; 69; 32; 70]
\[\hat{H}_{T}=4E_{C}\left(\hat{n}-n_{g}\right)^{2}-E_{J}\cos\frac{2\pi}{\Phi_{0} }\hat{\Phi}, \tag{4}\]
where \(E_{J}\) (\(E_{C}=e^{2}/(2C_{\Sigma})\)) represents the Josephson (charging) energy of the superconducting qubit. The total capacitance includes the junction's capacitance \(C_{J}^{q}\) and the shunt capacitance \(C_{q}\), i.e., \(C_{\Sigma}=C_{J}^{q}+2C_{q}\), and \(n_{g}=\frac{Q_{g}}{2e}\) accounts for the bias from the external electric field. The charge operator and phase operator can be expressed through the creation and annihilation operators \(\hat{b}^{\dagger}\) and \(\hat{b}\) as
\[\hat{\Phi}=\left(\frac{2E_{C}}{E_{J}}\right)^{\frac{1}{4}}(\hat{b}^{\dagger}+ \hat{b}),\quad\hat{n}=\frac{i}{2}\left(\frac{E_{J}}{2E_{C}}\right)^{\frac{1}{4 }}(\hat{b}^{\dagger}-\hat{b}). \tag{5}\]
By considering the two lowest energy levels of the emitter, the Hamiltonian of the giant atom can be approximately written as [71; 65] (see details in Appendix B)
\[H_{q}=\frac{1}{2}\omega_{q}\sigma_{z},\quad\omega_{q}=\sqrt{8E_{C}E_{J}^{q}}- E_{C}. \tag{6}\]
As derived in Refs. [72; 22], the Hamiltonian of the LHSM can be quantized as (see details in Appendix B)
\[\hat{H}_{0}=\sum_{k=1}^{N}\hbar\omega_{k}\left(a_{k}^{\dagger}a_{k}+\frac{1}{ 2}\right), \tag{7}\]
where \(a_{k}(a_{k}^{\dagger})\) is the annihilation (creation) operator of the photonic modes with wave vector \(k\).
In the rotating-wave approximation, the interaction Hamiltonian between the transmon qubit and the LHSM is expressed as
\[H_{int}=\sum_{k}g_{k}(\hat{a}_{k}^{\dagger}\hat{\sigma}_{-}+\hat{a}_{k}\hat{ \sigma}_{+}), \tag{8}\]
where \(\sigma_{+}=(\sigma_{-})^{\dagger}=|e\rangle\langle g|\), with \(|e\rangle\) (\(|g\rangle\)) being the excited (ground) state of the emitter. The coupling strength is given by [22]
\[g_{k}=\frac{e}{\hbar}\frac{C_{J}^{q}}{C_{\Sigma}}\sqrt{\frac{\hbar\omega_{k}} {C_{W}}}, \tag{9}\]
with \(C_{W}\) denoting the total capacitance of the LHSM waveguide. Finally, by setting \(\hbar=1\), the Hamiltonian of the system can be described by
\[\hat{H}=H_{0}+H_{q}+H_{int}=\frac{1}{2}\omega_{q}\sigma_{z}\] \[+\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k}+\sum_{k}g_{k}(\hat{a}_{k} ^{\dagger}\hat{\sigma}_{-}+\hat{a}_{k}\hat{\sigma}_{+}). \tag{10}\]
## IV The dynamics of the system
### Quantum dynamics in the dispersive band
When the emitter resonates with the upper (lower) band, there will be a significant number of modes with non-zero group velocity coupled to the emitter. This coupling phenomenon leads to an exponential emission of photons by the emitter. However, as the emitter's frequency approaches the band edge, the Wigner-Weisskopf approximation breaks down, leading to non-Markovian dynamics [73; 74; 37]. We first explore the spontaneous decay of the giant atom when its frequency is significantly removed from the band edges.
In the rotating frame of atomic frequency \(\omega_{q}\), the total Hamiltonian, as given in Eq. (10), is derived as [75]
\[H=\sum_{k\in\text{BZ}}\Delta_{k}a_{k}^{\dagger}a_{k}+\sum_{k\in\text{BZ}}(g_{ k}a_{k}^{\dagger}\sigma_{-}+g_{k}^{*}a_{k}\sigma_{+}), \tag{11}\]
where the frequency detuning is \(\Delta_{k}=\omega_{k}-\omega_{q}\), and the sums run over the first Brillouin zone (BZ). The system's state can be expanded in the single-excitation subspace as
\[\left|\psi\left(t\right)\right\rangle=\sum_{k}c_{g,k}\left(t\right)\left|g,1_{k}\right\rangle+c_{e}\left(t\right)\left|e,0\right\rangle, \tag{12}\]
where \(\left|g,1_{k}\right\rangle\) corresponds to the state where the giant atom is in the ground state, and a single photon is excited at mode \(k\). We assume that the giant atom (waveguide) is initially in the excited (vacuum) state, i.e., \(\left|\psi\left(t=0\right)\right\rangle=\left|e,0\right\rangle\). According to the Schrödinger equation, we obtain the following differential equations
\[\dot{c}_{g,k}\left(t\right)=-i\left[\Delta_{k}c_{g,k}\left(t \right)+g_{k}c_{e}\left(t\right)\right], \tag{13}\] \[\dot{c}_{e}\left(t\right)=-i\sum_{k}g_{k}^{*}c_{g,k}\left(t \right). \tag{14}\]
By defining \(\tilde{c}_{g,k}\left(t\right)=c_{g,k}\left(t\right)e^{i\Delta_{k}t}\) and substituting its integral form into Eq. (14), we obtain
\[\dot{c}_{e}\left(t\right)=-\sum_{k}\left|g_{k}\right|^{2}\int_{0}^{t}c_{e}\left(t^{\prime}\right)e^{i\Delta_{k}\left(t-t^{\prime}\right)}\mathrm{d}t^{\prime}. \tag{15}\]
Note that \(g_{k}\) is the coupling strength in \(k\) space [22]. We consider the giant atom coupling to the waveguide at two points \(x_{1}=0\) and \(x_{2}=d_{s}\). The separation distance \(d_{s}\) corresponds to the giant atom's size. Unlike the setup with a small atom, where \(g_{k}\) is a constant, the coupling strength \(g_{k}\) for giant atoms exhibits dependence on the parameter \(d_{s}\), i.e.,
\[g_{k}=g\left(1+e^{ikd_{s}}\right). \tag{16}\]
The summation over \(k\) can be replaced with an integral, i.e., \(\sum_{k}\rightarrow\frac{N}{2\pi}\int_{-\pi}^{\pi}\mathrm{d}k\). In the Born-Markovian regime, the coupling strength is much smaller than the bandwidth around \(k_{r}\), which allows the integration bounds \(\pm\pi\) to be extended to infinity. Consequently, Eq. (15) can be rewritten as
\[\dot{c}_{e}\left(t\right)=-\frac{Ng^{2}}{\pi}\int_{-\pi}^{\pi} \left(1+\cos\left(kd_{s}\right)\right)\mathrm{d}k\int_{0}^{t}c_{e}\left(t^{ \prime}\right)e^{i\Delta_{k}\left(t-t^{\prime}\right)}\mathrm{d}t^{\prime}. \tag{17}\]
We consider that the emitter is resonant with the upper (lower) band at \(k_{r}\) (\(k_{r}>0\)), i.e., \(\omega_{q}=\omega_{k_{r}}\). As depicted in Fig. 2, since the resonant frequency is significantly separated from the band edges, the dispersion relation around \(k_{r}\) can be approximated as linear, i.e., \(\omega_{k}\simeq v_{g}k\). By calculating \(v_{k_{r}}^{\pm}=\frac{\mathrm{d}\omega_{\pm}\left(k\right)}{\mathrm{d}k}\mid_{k=k_{r}}\), we obtain the group velocity \(v_{g}\) at \(k_{r}\)
\[v_{k_{r}}^{\pm}=\frac{-\epsilon^{2}\sin\left(k_{r}\right)}{2\sqrt{\frac{\left(\epsilon+1\right)^{4}}{4}+\epsilon^{2}\left[2\cos\left(k_{r}\right)-2\right]}\left[\frac{\left(\epsilon+1\right)^{2}}{2}\mp\sqrt{\frac{\left(\epsilon+1\right)^{4}}{4}+\epsilon^{2}\left[2\cos\left(k_{r}\right)-2\right]}\right]^{\frac{3}{2}}}, \tag{18}\]
where the group velocity \(v_{k_{r}}^{+}\) of the upper band is negative, reflecting the left-handed characteristic.
In the emission spectrum, atomic spontaneous radiation is centered around the transition frequency \(\omega_{q}\). Therefore, we can substitute \(\cos\left(kd_{s}\right)\) with \(\cos\left(k_{r}d_{s}\right)\). Consequently, the equation for the probability amplitude \(c_{e}(t)\) becomes
\[\dot{c}_{e}\left(t\right)=-\frac{2g^{2}N}{\left|v_{k_{r}}\right|}\left(1+\cos\left(k_{r}d_{s}\right)\right)c_{e}\left(t\right). \tag{19}\]
We solve the equation for \(c_{e}\left(t\right)\) under the Weisskopf-Wigner approximation and obtain
\[c_{e}\left(t\right)=e^{-\frac{\Gamma}{2}t},\quad\Gamma=\frac{4g^{2}N}{\left|v_{k_{r}}\right|}\left(1+\cos\left(k_{r}d_{s}\right)\right), \tag{20}\]
where \(\Gamma\) is the spontaneous decay rate of the giant atom. Note that \(\Gamma\) is contingent upon the size of the giant atom \(d_{s}\).
By setting \(\omega_{q}=\omega(k_{r}=\pi/2)\), we depict the spontaneous decay rate as a function of the giant atom's size in Fig. 3(a). The color-coding of the curves corresponds to the varying coupling strengths. It can be verified from Eq. (20) that the spontaneous decay rate exhibits periodic behavior in response to changes in the emitter's size. When \(d_{s}=2M\) with \(M\) an odd integer, we have \(\Gamma=0\), and the emitter is trapped in its excited state without decaying. We demonstrate the decay rates, calculated through the dynamical evolution for various values of \(k_{r}\), in Fig. 3(c), which match well with the analytical results in Eq. (20). Furthermore, the presence of asymmetric energy bands gives rise to distinct spontaneous decay dynamics when the atom couples to its respective continuum. At the coupling position \(k_{r}\), the spontaneous decay rate \(\Gamma\) within the lower energy band greatly exceeds that within the upper band due to the substantial disparity in group velocities.
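The single-excitation dynamics above is straightforward to reproduce numerically. The following is a minimal sketch (not the authors' code; the units \(\omega_{r}=1\), the mode number \(N=400\) and the parameter values are our own choices) that integrates Eqs. (13)-(14) on a discrete \(k\)-grid with the two-point coupling of Eq. (16):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate Eqs. (13)-(14) for a giant atom resonant with the lower band.
eps, g, N = 1.4, 1e-3, 400

def omega_lower(k):
    """Lower band of Eq. (2) in units omega_r = 1."""
    inner = np.sqrt((1 + eps)**4 / 4 + eps**2 * (2*np.cos(k) - 2))
    return 1.0 / np.sqrt((1 + eps)**2 / 2 + inner)

ks  = -np.pi + 2*np.pi*np.arange(N)/N          # first Brillouin zone
w_q = omega_lower(np.pi/2)                     # resonance at k_r = pi/2
dlt = omega_lower(ks) - w_q                    # detunings Delta_k

def excited_population(d_s, T=1500.0):
    g_k = g*(1 + np.exp(1j*ks*d_s))            # Eq. (16)
    def rhs(t, y):                             # Eqs. (13)-(14)
        c_e, c_k = y[0], y[1:]
        return np.concatenate(([-1j*np.sum(np.conj(g_k)*c_k)],
                               -1j*(dlt*c_k + g_k*c_e)))
    y0 = np.zeros(N + 1, dtype=complex); y0[0] = 1.0
    sol = solve_ivp(rhs, (0.0, T), y0, t_eval=[T], rtol=1e-8, atol=1e-10)
    return abs(sol.y[0, -1])**2

# cos(k_r d_s) = -1 at d_s = 2 suppresses the decay; d_s = 4 restores it.
for d_s in (2, 4):
    print(d_s, round(excited_population(d_s), 3))
```

For \(k_{r}=\pi/2\), the size \(d_{s}=2\) sits at the decoherence-free point of Eq. (20), while \(d_{s}=4\) gives maximal decay.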
### Quantum dynamics in the asymmetric band gap
In this work, we explore the behavior of the bound state of a single giant atom by considering \(\omega_{q}\) inside the asymmetric band gap [76]. There is no continuum mode resonant with the giant emitter. As a result, spontaneous emission is suppressed, leading to the confinement of energy in the form of a bound state [77; 24; 74].
To derive the evolution analytically, we utilize the Laplace transform
\[\tilde{c}_{k\left(e\right)}\left(s\right)=\int_{0}^{\infty}c_{k\left(e\right)} \left(t\right)e^{-st}\mathrm{d}t, \tag{21}\]
so that Eqs. (13) and (14) are respectively transformed into [74]
\[s\tilde{c}_{e}\left(s\right)-c_{e}\left(0\right)=-i\sum_{k}g_{k}\tilde{c}_{k}\left(s\right), \tag{22}\]

\[s\tilde{c}_{k}\left(s\right)-c_{k}\left(0\right)=-i\Delta_{k}\tilde{c}_{k}\left(s\right)-ig_{k}\tilde{c}_{e}\left(s\right). \tag{23}\]
Under the initial condition \(c_{e}\left(0\right)=1\) and \(c_{k}\left(0\right)=0\), Eq. (23) can be simplified as
\[\tilde{c}_{k}\left(s\right)=\frac{-ig_{k}\tilde{c}_{e}\left(s\right)}{\left(s +i\Delta_{k}\right)}. \tag{24}\]
By substituting Eq. (24) into Eq. (22), we obtain [78; 79]
\[\tilde{c}_{e}\left(s\right)=\frac{1}{s+\Sigma_{e}\left(s\right)}, \tag{25}\]

\[\Sigma_{e}\left(s\right)=\sum_{k}\frac{\left|g_{k}\right|^{2}}{s+i\Delta_{k}}, \tag{26}\]

where \(\Sigma_{e}\left(s\right)\) is the so-called self-energy. Then we can take the inverse Laplace transform of Eq. (25) in the complex plane to get the time-dependent evolution \(c_{e}\left(t\right)\) and obtain
\[c_{e}\left(t\right)=\frac{1}{2\pi i}\underset{E\rightarrow\infty}{\lim} \int_{\gamma-iE}^{\gamma+iE}\tilde{c}_{e}\left(s\right)e^{st}\mathrm{d}s, \tag{27}\]
where \(\gamma\) (\(\gamma>0\)) is a real number chosen such that the integration path lies in the domain of convergence of \(\tilde{c}_{k\left(e\right)}\left(s\right)\). As depicted in Fig. 4(a), when we assume the emitter's frequency to be \(\omega_{q}=\omega_{3}\) [refer to Fig. 2], only the modes with \(k=0\) contribute significantly to the system's dynamics. When the frequency resides within the asymmetric band gap, denoted as \(\omega_{q}=\omega_{1}\) (\(\omega_{2}\)), we confine our analysis to modes around \(k=\pi\). Consequently, around \(k=0\) or \(k=\pi\), the dispersion relation can be effectively approximated by a quadratic function, i.e.,
\[\begin{cases}E_{+}\left(k\right)=E_{+\min}+\alpha_{+}\left(k\pm\pi\right)^{2},&\omega_{q}=\omega_{1},\\ E_{-}\left(k\right)=E_{-\max}-\alpha_{-}\left(k\pm\pi\right)^{2},&\omega_{q}=\omega_{2},\\ E_{-}\left(k\right)=E_{-\min}+\alpha_{0}\,k^{2},&\omega_{q}=\omega_{3}.\end{cases} \tag{28}\]
At the band edges, we denote the curvatures \(\alpha_{\pm}\) and \(\alpha_{0}\) as the second-order derivatives, which are expressed as
\[\alpha=\frac{\mathrm{d}^{2}E_{\pm}\left(k\right)}{\mathrm{d}k^{2}}\Big{|}_{k =k_{0}}. \tag{29}\]
In this case, by setting \(\delta k=k-k_{0}\), the interaction strength is written as
\[g_{k}=g\left[1+e^{id_{s}\left(k_{0}+\delta k\right)}\right]. \tag{30}\]
By replacing \(\sum_{k}\) with the integral form \(\frac{N}{2\pi}\int\mathrm{d}k\), we rewrite Eq. (26) as

\[\Sigma_{e}\left(s\right)\simeq\frac{N}{2\pi}\int_{-\pi}^{\pi}\frac{\left|g_{k}\right|^{2}}{s+i\Delta_{k}}\mathrm{d}k. \tag{31}\]

Finally, by inserting Eqs. (28)-(30) into Eq. (31) and writing \(\Delta_{k}=\Delta_{0}+\alpha_{\pm\left(0\right)}\,\delta k^{2}\), with \(\Delta_{0}\) the detuning of \(\omega_{q}\) from the corresponding band edge, we obtain

\[\Sigma_{e}\left(s\right)=\frac{Ng^{2}}{\pi}\Bigg{\{}\int_{-\pi}^{0}\frac{1+\cos\left[d_{s}\left(\delta k+k_{0}\right)\right]}{s+i\left[\Delta_{0}+\alpha_{\pm\left(0\right)}\,\delta k^{2}\right]}\mathrm{d}\left(\delta k\right)+\int_{0}^{\pi}\frac{1+\cos\left[d_{s}\left(\delta k-k_{0}\right)\right]}{s+i\left[\Delta_{0}+\alpha_{\pm\left(0\right)}\,\delta k^{2}\right]}\mathrm{d}\left(\delta k\right)\Bigg{\}}. \tag{32}\]
Since the emitter's frequency is close to the edge of the upper (lower) band, we limit our consideration to modes around the corresponding band-edge momentum \(k_{0}\) when calculating the self-energy. As a result, writing \(s=-ix\) with \(x\) real, the self-energy is derived as

\[\Sigma_{e}\left(-ix\right)=\frac{-iNg^{2}}{\sqrt{\alpha\left(\Delta_{0}-x\right)}}\left[1+\cos\left(d_{s}k_{0}\right)e^{-d_{s}\sqrt{\frac{\Delta_{0}-x}{\alpha}}}\right]. \tag{33}\]
We can use the residue theorem to obtain the steady-state probability
\[|c_{e}(t=\infty)|^{2}=|\mathrm{Res}(s_{0})|^{2}, \tag{34}\] \[\mathrm{Res}(s_{0})=\frac{1}{1+\partial_{s}\Sigma_{e}(s)}\Big{|}_ {s=s_{0}}, \tag{35}\]
where \(\mathrm{Res}(s_{0})\) is the steady population of the giant atom, and \(s_{0}\) is the purely imaginary pole of the transcendental equation

\[s_{0}+\Sigma_{e}\left(s_{0}\right)=0. \tag{36}\]

Figure 3: (a) The spontaneous decay rate of the giant atom as a function of the giant atom's size \(d_{s}\). We fix \(\omega_{q}=\omega_{-}(k=\pi/2)\). (b) Dynamical evolution obtained via numerical simulation for various \(d_{s}\). (c) The spontaneous decay rate of a giant atom resonating with mode \(k_{r}\) of the lower (upper) band. The coupling strength is set as \(g=0.001\). Other parameters remain consistent with those in Fig. 2.
Given that the giant atom is coupled to the LHSM waveguide at two distinct points, static bound states are formed at each of these coupling locations. As the separation between these coupling points diminishes, the two bound states interfere, giving rise to a periodic interference pattern in the dynamical evolution of the giant atom. As depicted in Fig. 4(a,b), we observe that the dynamical evolution of the emitter's population \(\left|c_{e}\left(t\right)\right|^{2}\) varies with \(d_{s}\). Due to the asymmetric nature of the band gap, the curvatures \(\alpha_{\pm}\), which correspond to different mode densities, exhibit dissimilarity. Therefore, the interference patterns at the upper (lower) band edges exhibit disparities. When \(k_{0}d_{s}\) is an even multiple of \(\pi\) (even \(d_{s}\) for \(k_{0}=\pi\)), destructive interference dominates, causing the coupling strength to nearly vanish. Consequently, the majority of the energy remains confined within the emitter, with minimal escape into the waveguide.

Conversely, for odd values of \(d_{s}\), constructive interference prevails, resulting in a significantly reduced trapped atomic population, as depicted in Fig. 4(a,b). In cases where \(d_{s}\) is comparable to or exceeds the size of the bound state, the interference effect diminishes, and the steady-state atomic population asymptotically reaches its stable value.
In Fig. 5, we depict the steady-state population as a function of the detuning \(\Delta\) from the upper (lower) band edge. Note that due to the distinct mode densities in these two bands, the amplitude of the steady state in the upper band consistently exceeds that of the lower band. Moreover, as \(\omega_{q}\) is tuned towards the lower bound of the lower band [see \(\omega_{3}\) in Fig. 2(a)], the oscillating interference effect no longer exists, since only the modes around \(k=0\) are excited (satisfying the condition \(kd_{s}=0\)). In cases where the two fields do not significantly overlap for large values of \(d_{s}\), the steady-state population converges to a constant value.
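As a complement to the residue calculation, the trapped population can also be estimated directly from the time-domain dynamics. The sketch below (again not the authors' code; the units \(\omega_{r}=1\), a lower-band-only mode set, \(N=400\) and the detuning \(\Delta_{0}=0.002\) are assumptions made here) places \(\omega_{q}\) just inside the gap above the lower band edge and reads off the late-time plateau of \(|c_{e}(t)|^{2}\), whose variation with the parity of \(d_{s}\) mirrors Fig. 4:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Emitter inside the gap, just above the lower band edge (Delta_0 = 0.002).
eps, g, N = 1.4, 1e-3, 400
ks = -np.pi + 2*np.pi*np.arange(N)/N

def omega_lower(k):
    inner = np.sqrt((1 + eps)**4 / 4 + eps**2 * (2*np.cos(k) - 2))
    return 1.0 / np.sqrt((1 + eps)**2 / 2 + inner)

w_q = omega_lower(np.pi) + 0.002
dlt = omega_lower(ks) - w_q                 # all detunings are negative

def trapped_population(d_s, T=4000.0):
    g_k = g*(1 + np.exp(1j*ks*d_s))         # Eq. (16)
    def rhs(t, y):                          # Eqs. (13)-(14)
        c_e, c_k = y[0], y[1:]
        return np.concatenate(([-1j*np.sum(np.conj(g_k)*c_k)],
                               -1j*(dlt*c_k + g_k*c_e)))
    y0 = np.zeros(N + 1, dtype=complex); y0[0] = 1.0
    ts = np.linspace(0.8*T, T, 50)          # average over a late-time window
    sol = solve_ivp(rhs, (0.0, T), y0, t_eval=ts, rtol=1e-8, atol=1e-10)
    return np.mean(np.abs(sol.y[0])**2)

for d_s in (1, 2, 3, 4):
    print(d_s, round(trapped_population(d_s), 3))
```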
## V Two emitters
As shown in Fig. 6, we now consider two identical giant atoms with frequency \(\omega_{q}\) interacting with the LHSM, separated by a distance \(D_{q}\). When the separation distance \(D_{q}\) between the atoms is relatively small, the bound states of the two atoms will overlap, leading to a strong interaction between them [80; 81]. As \(D_{q}\) increases, the overlap area of the two fields diminishes, and the dipole-dipole interaction becomes weak. Similar to the case when a single emitter couples to the waveguide, in the rotating frame, the interaction Hamiltonian is written as [18; 19; 82; 83]
\[H_{I}=\sum_{i=1,2}\sum_{k}g_{ki}a_{k}^{\dagger}e^{i\Delta_{k}t}\sigma_{i}^{-}+ H.c. \tag{37}\]
At the initial state, one atom is excited and the other is in the ground state, with the single-excitation basis \(\left|e,g,0\right\rangle\) and \(\left|g,e,0\right\rangle\). Employing the framework of effective Hamiltonian theory [84], the effective Hamiltonian can be expressed in the form
\[H_{\mathrm{eff}}\left(t\right) = \sum_{m,n}\frac{1}{\bar{\omega}_{mn}}\left[\hat{A}_{m}^{\dagger},\hat{A}_{n}\right]e^{i\left(\omega_{m}-\omega_{n}\right)t}, \tag{38}\] \[\frac{1}{\bar{\omega}_{mn}} = \frac{1}{2}\left(\frac{1}{\omega_{m}}+\frac{1}{\omega_{n}}\right), \tag{39}\]
where \(\bar{\omega}_{mn}\) is the average of \(\omega_{m}\) and \(\omega_{n}\), with \(\Delta_{k}=\omega_{k}-\omega_{q}\). We make the identification \(A_{1}^{\dagger}=g_{k1}a_{k}^{\dagger}\sigma_{1}^{-}\), \(A_{2}^{\dagger}=g_{k2}a_{k}^{\dagger}\sigma_{2}^{-}\). Substituting these conditions into Eq. (38), we obtain the system's effective Hamiltonian as
\[H_{\mathrm{eff}}=\sum_{i=1,2}\sum_{k}\frac{\left|g_{ki}\right|^{2}}{\Delta_{k}}\left(\sigma_{i}^{-}a_{k}^{\dagger}\sigma_{i}^{+}a_{k}-\sigma_{i}^{+}a_{k}\sigma_{i}^{-}a_{k}^{\dagger}\right)\] \[+\sum_{k}\frac{g_{k1}g_{k2}^{*}}{\Delta_{k}}\left(\sigma_{1}^{-}a_{k}^{\dagger}\sigma_{2}^{+}a_{k}-\sigma_{2}^{+}a_{k}\sigma_{1}^{-}a_{k}^{\dagger}\right)+\mathrm{H.c.} \tag{40}\]
The terms in the first sum account for the atomic frequency shifts, while the cross terms in the second sum describe the exchange interaction between the two atoms. Since the two emitters are alternately excited, the waveguide can be approximated to be in the vacuum state. Therefore, we can use the approximation
\[\langle a_{k}^{\dagger}a_{k}\rangle\simeq 0,\quad\langle a_{k}a_{k}^{\dagger} \rangle\simeq 1. \tag{41}\]
The dipole-dipole interaction Hamiltonian can be simplified as
\[H_{\mathrm{eff,d}}=-\sum_{k}\frac{g_{k1}g_{k2}^{*}}{\Delta_{k}}\sigma_{2}^{+} a_{k}\sigma_{1}^{-}a_{k}^{\dagger}+\mathrm{H.c.} \tag{42}\]
We can obtain the interaction strength
\[J_{12}=\sum_{k}\frac{g_{k1}g_{k2}^{*}}{\Delta_{k}}, \tag{43}\]
where the coupling strengths of two giant atoms are respectively given as
\[g_{k1}=g\left(1+e^{ikd_{s}}\right),\quad g_{k2}=g_{k1}e^{ikD_{q}}. \tag{44}\]
Substituting Eq. (44) into Eq. (43) and replacing the sum with integral form, we obtain
\[J_{12}=\frac{N}{2\pi}\int_{-\pi}^{\pi}\frac{2g^{2}\left(1+\cos\left(kd_{s} \right)\right)e^{ikD_{q}}}{\Delta_{k}}\mathrm{d}k, \tag{45}\]
which can be expressed as
\[J_{12} = \frac{Ng^{2}}{\pi}\Bigg{\{}\int_{-\pi}^{0}\frac{\cos\left(kD_{q} \right)+\cos\left(kD_{q}\right)\cos\left(kd_{s}\right)}{\Delta_{0}+\alpha\left( k+k_{r}\right)^{2}}\mathrm{d}k \tag{46}\] \[+ \int_{0}^{\pi}\frac{\cos\left(kD_{q}\right)+\cos\left(kD_{q} \right)\cos\left(kd_{s}\right)}{\Delta_{0}+\alpha\left(k-k_{r}\right)^{2}} \mathrm{d}k\Bigg{\}},\]
where the dispersion relation is approximated as a quadratic form. Finally, we derive the dipole-dipole interaction strength as
\[J_{12}=\frac{Ng^{2}}{\alpha\beta}e^{-D_{q}\beta}\left[\cos\left(D _{q}\pi\right)+\cos\left(\left(D_{q}+d_{s}\right)\pi\right)\cos\left(d_{s} \beta\right)\right],\] \[\beta=\sqrt{\frac{\Delta_{0}}{\alpha}}. \tag{47}\]
In Fig. 7(a), we depict the dynamics of the two emitters through numerical simulations and observe that the two atoms can coherently exchange excitation without decaying. Subsequently, Fig. 7(b) shows a numerical depiction of the variation of \(J_{12}\) with the separation distance \(D_{q}\), which is approximately described by the exponential form in Eq. (47). Finally, Fig. 7(c) demonstrates the size-dependent characteristic of the dipole-dipole interaction, which arises from the periodic modulation of the bound state by the giant atom's size \(d_{s}\).
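A direct numerical check of this behavior is straightforward. The following sketch (not the authors' code) evaluates \(J_{12}\) of Eq. (45) by quadrature over the Brillouin zone, keeping only the lower-band modes and working in units with \(\omega_{r}=1\); the detuning and coupling values are our own choices:

```python
import numpy as np
from scipy.integrate import quad

# Exchange coupling J_12 of Eq. (45), lower-band modes only, omega_r = 1.
eps, g, N, d_s = 1.4, 1e-3, 400, 3

def omega_lower(k):
    inner = np.sqrt((1 + eps)**4 / 4 + eps**2 * (2*np.cos(k) - 2))
    return 1.0 / np.sqrt((1 + eps)**2 / 2 + inner)

w_q = omega_lower(np.pi) + 0.002            # emitter frequency inside the gap

def J12(D_q):
    # The odd (imaginary) part of exp(ikD_q) integrates to zero over the
    # symmetric BZ, so only cos(kD_q) survives; cf. Eq. (46).
    integrand = lambda k: (N/(2*np.pi)) * 2*g**2 * (1 + np.cos(k*d_s)) \
                          * np.cos(k*D_q) / (omega_lower(k) - w_q)
    val, _ = quad(integrand, -np.pi, np.pi, limit=200)
    return val

for D_q in (2, 4, 6, 8, 10):
    print(D_q, f"{J12(D_q):+.3e}")   # magnitude decays roughly as exp(-beta*D_q)
```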
## VI Conclusion
In this paper, we explore the quantum dynamics of giant atoms interacting with the LHSM. The emergence of an asymmetric band gap formed by the left-handed dispersion band and the Bragg scattering band leads to several unconventional phenomena in quantum optics. We consider the giant atom in resonance with either the upper or the lower band. Through the analysis of the interference phenomena induced by these giant atoms, we find that the spontaneous decay rate changes periodically with the giant atom's size. The asymmetric band structure results in distinct quantum dynamics for giant atoms resonating with different bands. As a consequence, spontaneous emission can be modulated by adjusting either the size or the resonant frequency of the giant atom, allowing us to enhance or suppress the process as needed.
Figure 5: The trapped atomic population as a function of the detuning \(\Delta\) from the upper (lower) band edge.
Figure 6: Two giant emitters, each of size \(d_{s}\), coupling to the LHSM. The separation between the two emitters is denoted as \(D_{q}\).
Most remarkably, when confining the emitter's frequency within the asymmetric band gap, we find that the dynamics depends dramatically on the properties of the band edge. By calculating the steady population, we observe a periodic modulation in the dynamical evolution, a consequence of the interference effects caused by variations in the giant atom's size. Moreover, the asymmetric band edges lead to different interference amplitudes at the upper (lower) band edges. Similarly, the dipole-dipole interaction between two giant atoms depends on their respective sizes and the distance that separates them. Our work thus provides a method to engineer the interaction between giant atoms and metamaterial environments in future studies.
## VII Acknowledgments
The quantum dynamical simulations are based on open source code QuTiP [85; 86]. X.W. is supported by the National Natural Science Foundation of China (NSFC) (Grant No. 12174303 and No. 11804270), and China Postdoctoral Science Foundation (No. 2018M631136).
## Appendix A Deriving the dispersion relation of the Left-handed superlattice metamaterial
In this Appendix, we derive the dispersion relation of the LHSM. The Lagrangian is given in Eq. (1) of the main text. The structure of the LHSM is shown in Fig. 1. According to the Euler-Lagrange equation \(\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\Phi}_{n}}-\frac{\partial\mathcal{L}}{\partial\Phi_{n}}=0\), we obtain the following equations of motion
\[\epsilon C\left[\ddot{\Phi}_{n+1}-\ddot{\Phi}_{n}\right]-C\left[\ddot{\Phi}_{n}-\ddot{\Phi}_{n-1}\right]=\frac{1}{\epsilon L}\Phi_{n}, \tag{A1}\] \[C\left[\ddot{\Phi}_{n+2}-\ddot{\Phi}_{n+1}\right]-\epsilon C\left[\ddot{\Phi}_{n+1}-\ddot{\Phi}_{n}\right]=\frac{1}{L}\Phi_{n+1}. \tag{A2}\]
By adopting the Helmholtz equation \(\ddot{\Phi}=-\omega^{2}\Phi\), we rewrite Eqs. (A1) and (A2) as
\[\omega^{2}C\left[\left(\Phi_{n}-\Phi_{n-1}\right)+\epsilon\left(\Phi_{n}-\Phi_{n+1}\right)\right]=\frac{1}{\epsilon L}\Phi_{n}, \tag{A3}\] \[\omega^{2}C\left[\epsilon\left(\Phi_{n+1}-\Phi_{n}\right)+\left(\Phi_{n+1}-\Phi_{n+2}\right)\right]=\frac{1}{L}\Phi_{n+1}, \tag{A4}\]
which result in
\[\Phi_{n+1}=\frac{\epsilon\Phi_{n}+\Phi_{n+2}}{\left(\epsilon+1-\frac{1}{\omega^{2}LC}\right)},\quad\Phi_{n-1}=\frac{\epsilon\Phi_{n-2}+\Phi_{n}}{\left(\epsilon+1-\frac{1}{\omega^{2}LC}\right)}. \tag{A5}\]
After substituting Eq. (A5) into Eq. (A4), we obtain
\[C\left(1+\epsilon-\frac{1}{\omega^{2}\epsilon LC}\right)\Phi_{n}-C^{2}\frac{\epsilon\Phi_{n-2}+\Phi_{n}}{\left[\left(\epsilon+1\right)C-\frac{1}{\omega^{2}L}\right]}-\epsilon C^{2}\frac{\epsilon\Phi_{n}+\Phi_{n+2}}{\left[\left(\epsilon+1\right)C-\frac{1}{\omega^{2}L}\right]}=0. \tag{A6}\]
By adopting the plane-wave form \(\Phi_{n}=e^{i(kn\Delta x-\omega t)}\) (\(\Delta X=2\Delta x\)), the dispersion relation can be derived as
\[\omega_{\pm}=\frac{\omega_{r}}{\sqrt{\frac{(1+\epsilon)^{2}}{2}\pm\sqrt{\frac{(1+\epsilon)^{4}}{4}+\epsilon^{2}\left[2\cos\left(k\Delta X\right)-2\right]}}}. \tag{A7}\]
## Appendix B Deriving the Hamiltonian of the waveguide
We now calculate the Hamiltonian of the superlattice metamaterial. The Lagrangian of the LHSM can be written as [72; 22]
\[\mathcal{L}=\frac{1}{2}\dot{\vec{\Phi}}^{T}\hat{C}\dot{\vec{\Phi}}-\frac{1}{2}\vec{\Phi}^{T}\hat{L}^{-1}\vec{\Phi}, \tag{B1}\]

where the flux vector \(\vec{\Phi}\) is

\[\vec{\Phi}^{T}=\left(\Phi_{0},\Phi_{1},\cdots,\Phi_{N}\right). \tag{B2}\]
Figure 7: (a) Rabi oscillations between two giant emitters. The parameters are set as \(d_{s}=3\), \(D_{q}=6\), and \(g=0.0001\). (b) The Rabi frequency of the two giant emitters as a function of \(D_{q}\). (c) Rabi oscillations as a function of \(d_{s}\). Other parameters remain consistent with those in Fig. 4.
According to Eq. (1) in the main text, the capacitance and inductance matrices are
\[\hat{C}=C\begin{pmatrix}1&-1&0&0&\cdots\\ -1&(\varepsilon+2)&-\left(\varepsilon+1\right)&0&\cdots\\ 0&-\left(\varepsilon+1\right)&\left(2\varepsilon+2\right)&-\left(\varepsilon+1\right)&\cdots\\ 0&0&-\left(\varepsilon+1\right)&\left(2\varepsilon+2\right)&\cdots\\ \vdots&\ddots&\ddots&\ddots&\ddots\end{pmatrix} \tag{B3}\]
and
\[\hat{L}^{-1}=\frac{1}{L}\begin{pmatrix}1&0&0&0&\cdots\\ 0&\frac{1+\varepsilon}{\varepsilon}&0&0&\cdots\\ 0&0&\frac{1+\varepsilon}{\varepsilon}&0&\cdots\\ 0&0&0&\frac{1+\varepsilon}{\varepsilon}&\cdots\\ \vdots&0&\ddots&\ddots&\ddots\end{pmatrix}. \tag{B4}\]
Based on the Legendre transformation, the Hamiltonian of the LHSM is given as
\[H_{0}=\vec{Q}^{T}\dot{\vec{\Phi}}-\mathcal{L}=\frac{1}{2}\vec{Q}^{T}\hat{C}^{-1}\vec{Q}+\frac{1}{2}\vec{\Phi}^{T}\hat{L}^{-1}\vec{\Phi}, \tag{B5}\]
where the charge vector is defined as \(\vec{Q}=\hat{C}\dot{\vec{\Phi}}\). As derived in Refs. [72, 22], the Hamiltonian of the LHSM is quantized as
\[\hat{H}_{0}=\sum_{k=1}^{N}\hbar\omega_{k}\left(a_{k}^{\dagger}a_{k}+\frac{1}{2}\right), \tag{B6}\]
where \(a_{k}\) (\(a_{k}^{\dagger}\)) is the annihilation (creation) operator of the photonic mode with wave vector \(k\). Note that the eigenfrequency \(\omega_{k}\) and the eigenvectors \(\vec{\psi}_{k}=\hat{C}^{\frac{1}{2}}\vec{\Phi}\) satisfy the equation \(\hat{C}^{-\frac{1}{2}}\hat{L}^{-1}\hat{C}^{-\frac{1}{2}}\vec{\psi}_{k}=\omega_{k}^{2}\vec{\psi}_{k}\).
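The band structure of Eq. (2) can be checked against this finite-chain eigenproblem directly. Below is a minimal sketch (not the authors' code): it assembles a periodic chain with alternating bond capacitances and grounded inductances consistent with Eqs. (A1)-(A2), in units \(C=L=1\), and solves the generalized eigenproblem; the numerical band edges reproduce \(\omega_{\pm}(\pm\pi)\). The chain length and the periodic boundary conditions are our own choices:

```python
import numpy as np
from scipy.linalg import eigh

# Periodic chain with alternating bond capacitances (eps, 1) and grounded
# inverse inductances (1/eps, 1), consistent with Eqs. (A1)-(A2); units
# C = L = 1, so omega_r = 1. We solve  Cmat v = mu * Linv v  with
# mu = 1/omega^2; the zero eigenvalue is the divergent k = 0 upper-band mode.
eps, Ncells = 1.4, 200
N = 2 * Ncells
bond = np.tile([eps, 1.0], Ncells)        # capacitance of bond (j, j+1)
site = np.tile([1.0/eps, 1.0], Ncells)    # inverse inductance at site j

Cmat = np.zeros((N, N))
for j in range(N):
    jp = (j + 1) % N
    Cmat[j, j] += bond[j]; Cmat[jp, jp] += bond[j]
    Cmat[j, jp] -= bond[j]; Cmat[jp, j] -= bond[j]

mu = eigh(Cmat, np.diag(site), eigvals_only=True)   # mu = 1/omega^2 >= 0
w = np.sort(1.0 / np.sqrt(mu[mu > 1e-12]))

def edge(sign):                           # band edges omega_pm(pi), Eq. (2)
    inner = np.sqrt((1 + eps)**4 / 4 - 4 * eps**2)
    return 1.0 / np.sqrt((1 + eps)**2 / 2 + sign * inner)

# 0.6 lies inside the gap for eps = 1.4, separating the two bands.
print("lower-band top:   ", w[w < 0.6].max(), "vs", edge(+1))
print("upper-band bottom:", w[w > 0.6].min(), "vs", edge(-1))
```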
|
2303.17970 | A note on weak existence for SDEs driven by fractional Brownian motion | We are interested in existence of solutions to the $d$-dimensional equation
\begin{equation*} X_t=x_0+\int_0^t b(X_s)ds + B_t, \end{equation*} where $B$ is
a (fractional) Brownian motion with Hurst parameter $H\leqslant 1/2$ and $b$ is
an $\mathbb{R}^d$-valued measure in some Besov space. We exhibit a class of
drifts $b$ such that weak existence holds. In particular existence of a weak
solution is shown for $b$ being a finite $\mathbb{R}^d$-valued measure for any
$H<1/(2d)$. | Lukas Anzeletti | 2023-03-31T11:14:19Z | http://arxiv.org/abs/2303.17970v2 | # A note on weak existence for SDEs driven by fractional Brownian motion
###### Abstract
We are interested in existence of solutions to the \(d\)-dimensional equation
\[X_{t}=x_{0}+\int_{0}^{t}b(X_{s})ds+B_{t},\]
where \(B\) is a (fractional) Brownian motion with Hurst parameter \(H\leqslant 1/2\) and \(b\) is an \(\mathbb{R}^{d}\)-valued nonnegative distribution in some Besov space. We exhibit a class of drifts \(b\) such that weak existence holds. In particular existence of a weak solution is shown for \(b\) being a finite \(\mathbb{R}^{d}\)-valued measure for any \(H<1/(2d)\).
_Keywords and phrases: Regularization by noise, fractional Brownian motion._
**MSC2020 subject classification:** 60H50, 60H10, 60G22, 34A06.
## 1 Introduction
Throughout the paper we consider the \(d\)-dimensional stochastic differential equation (SDE)
\[X_{t}=x_{0}+\int_{0}^{t}b(X_{s})ds+B_{t}, \tag{1.1}\]
where \(x_{0}\in\mathbb{R}^{d}\), \(B\) is a fractional Brownian motion with Hurst parameter \(H\leqslant 1/2\) and \(b\) is a nonnegative \(\mathbb{R}^{d}\)-valued distribution (i.e. each component is a nonnegative distribution). For the moment (1.1) is to be interpreted formally.
In the Brownian case (i.e. \(H=1/2\)), there is an extensive literature for SDEs with irregular drift which we will not describe thoroughly. Let us mention the early work of Veretennikov [27] for bounded measurable drifts and the more general \(L^{p}-L^{q}\) criterion of Krylov and Röckner [20] for which the authors proved strong existence and pathwise uniqueness (both works allowing for time-dependent drift). For the case of possibly distributional drifts see [6, 10, 11, 12]. We also point out the work of Le Gall [22] on a \(1\)-dimensional SDE involving the local time at \(0\) of the solution, which formally corresponds to a drift \(b=a\delta_{0}\), for some \(a\in[-1,1]\) and \(\delta_{0}\) being the Dirac distribution. The solution to such an equation is the so-called skew Brownian motion, see [23] for more details and various constructions. Note also that the case \(a=1\) corresponds to reflection above \(0\).
This leads to a second class of interesting problems, namely solving Equation (1.1) when \(b\) is a distribution and \(B\) is a fractional Brownian motion with sufficiently small Hurst parameter \(H\). A first attempt in this direction is due to Nualart and Ouknine [24], who proved existence and uniqueness for some non-Lipschitz drifts. For \(d=1\) and \(b=a\delta_{0}\) with \(a\in\mathbb{R}\), the well-posedness of this equation was established for \(H<1/4\) by Catellier and Gubinelli [8] (who also consider multidimensional drifts in negative Hölder spaces) using nonlinear Young integrals. Hence, for
\(d=1\), we observe a gap between the Brownian case (\(H=1/2\)), with well-posedness for \(|a|\leqslant 1\) proven in [22], and the aforementioned result for fractional Brownian motion with \(H<1/4\). This gap was partially closed in [2]. The aim of this paper is to fully close this gap in terms of weak existence. For a more detailed summary of the literature for (1.1) in the fractional Brownian motion case as well as references to related works we refer the reader to Appendix A.
During the preparation of this manuscript, the authors in [7] closed the gap mentioned above as a special case of a result for any dimension \(d\geqslant 1\) (see [7, Theorem 2.11]). Therein the authors prove existence of a weak solution to (1.1) for \(b\) being a finite positive measure and \(H<\frac{1}{d+1}\). In particular this implies the following statement that can be deduced from Theorem 3.2.
**Theorem 1.1**.: _Let \(b\) be a finite \(\mathbb{R}^{d}\)-valued measure and \(H<1/(2d)\). Then there exists a weak solution to (1.1)._
Nevertheless, due to the simplicity of the proof, we state and prove Theorem 3.2. The general idea therein is to make use of the nonnegativity of the drift, which yields certain _a priori_ estimates of the solution under loose assumptions on the regularity of the drift, respectively the Hurst parameter \(H\), eventually leading to existence of a weak solution. The idea of the proof of Theorem 3.2 resembles the ones in [3] and [2] and relies heavily on the stochastic sewing lemma with random controls.
## 2 Notations and definitions
Throughout the paper, we use the following notations and conventions:
* Constants \(C\) might vary from line to line.
* For topological spaces \(X,Y\) we denote the set of continuous function from \(X\) to \(Y\) by \(\mathcal{C}(X,Y)\).
* For \(x=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\), let \(|x|=\sum_{i=1}^{d}|x_{i}|\).
* Let \(s<t\) be two real numbers and \(\Pi=(s=t_{0}<t_{1}<\cdots<t_{n}=t)\) be a partition of \([s,t]\), we denote \(|\Pi|=\sup_{i=1,\cdots,n}(t_{i}-t_{i-1})\) the mesh of \(\Pi\).
* For \(s,t\in\mathbb{R}\) with \(s\leqslant t\), we denote \(\Delta_{[s,t]}:=\{(u,v):s\leqslant u\leqslant v\leqslant t\}\).
* For any function \(f\) defined on \([s,t]\), we denote \(f_{u,v}:=f_{v}-f_{u}\) for \((u,v)\in\Delta_{[s,t]}\).
* For any function \(g\) defined on \(\Delta_{[s,t]}\) and \(s\leqslant r\leqslant u\leqslant v\leqslant t\), we denote \(\delta g_{r,u,v}:=g_{r,v}-g_{r,u}-g_{u,v}\).
* For a probability space \(\Omega\) and \(p\in[1,\infty]\), the norm on \(L^{p}(\Omega)\) is denoted by \(\|\cdot\|_{L^{p}}\).
* We denote by \((B_{t})_{t\geqslant 0}\) a fractional Brownian motion with Hurst parameter \(H\leqslant 1/2\).
* The filtration \((\mathcal{F}_{t})_{t\geqslant 0}\) is denoted by \(\mathbb{F}\).
* All filtrations are assumed to fulfill the usual conditions.
* The conditional expectation \(\mathbb{E}[\cdot\mid\mathcal{F}_{s}]\) is denoted by \(\mathbb{E}^{s}[\cdot]\).
* Let \(\mathbb{G}\) be a filtration. We call \((W_{t})_{t\geqslant 0}\) a \(\mathbb{G}\)-Brownian motion if \((W_{t})_{t\geqslant 0}\) is a Brownian motion adapted to \(\mathbb{G}\) and for \(0\leqslant s\leqslant t\), \(W_{t}-W_{s}\) is independent of \(\mathcal{G}_{s}\).
For any \(t>0\) and \(x\in\mathbb{R}^{d}\), let \(g_{t}(x):=\frac{1}{(2\pi t)^{d/2}}e^{-\frac{|x|^{2}}{2t}}\). For \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\), let
\[G_{t}\phi(x):=(g_{t}*\phi)(x). \tag{2.1}\]
A continuous function \(w:\Delta_{[s,t]}\to[0,\infty)\) is a control function if, for \(s\leqslant r\leqslant u\leqslant v\leqslant t\),
\[w(r,u)+w(u,v)\leqslant w(r,v),\]
and \(w(r,r)=0\) for all \(r\in[s,t]\).
We call a measurable function \(\lambda\colon\Delta_{[s,t]}\times\Omega\to\mathbb{R}_{+}\) a random control if there exists a set \(\Omega^{\prime}\) of full measure such that for \(\omega\in\Omega^{\prime}\), \(\lambda(\cdot,\omega)\) is a control.
We denote the nonhomogeneous Besov spaces by \(\mathcal{B}^{s}_{p}\). For a precise definition of these spaces that contain \(\mathbb{R}^{d}\)-valued tempered distributions see Section B. For \(s\in\mathbb{R}_{+}\setminus\mathbb{N}\) and \(p=\infty\), Besov spaces coincide with Hölder spaces (see [5, p.99]).
**Definition 2.1**.: Let \(\beta\in\mathbb{R}\). We say that \((f_{n})_{n\in\mathbb{N}}\) converges to \(f\) in \(\mathcal{B}_{\infty}^{\beta-}\) as \(n\to\infty\) if \(\sup_{n\in\mathbb{N}}\|f_{n}\|_{\mathcal{B}_{\infty}^{\beta}}<\infty\) and
\[\forall\beta^{\prime}<\beta,\quad\lim_{n\to\infty}\|f_{n}-f\|_{ \mathcal{B}_{\infty}^{\beta^{\prime}}}=0.\]
Stochastic sewing.Stochastic sewing was originally introduced in [21]. In Lemma 2.2 we recall a recent extension (see [3, Theorem 4.7]) involving random controls. This version of stochastic sewing allows us to make use of the nonnegativity of the drift as the corresponding integral will have nonnegative increments in each component. In particular it will give rise to a random control.
**Lemma 2.2**.: _Let \(m\in[2,\infty)\) and \(0\leqslant S<T\). Let \(A:\Delta_{[S,T]}\to L^{m}\) be \(\mathbb{R}^{d}\)-valued such that \(A_{s,t}\) is \(\mathcal{F}_{t}\)-measurable for any \((s,t)\in\Delta_{[S,T]}\). Let \(\lambda\) be a random control. Assume that there exist constants \(\Gamma_{1},\Gamma_{2},\alpha_{1},\beta_{1}\geqslant 0\) and \(\varepsilon>0\) such that \(\alpha_{1}+\beta_{1}>1\) and_
\[|\mathbb{E}^{u}\delta A_{s,u,t}| \leqslant\Gamma_{1}|t-s|^{\alpha_{1}}\lambda(s,t)^{\beta_{1}}\text { a.s.,} \tag{2.2}\] \[\|\delta A_{s,u,t}\|_{L^{m}} \leqslant\Gamma_{2}(t-s)^{1/2+\varepsilon}, \tag{2.3}\]
_for every \((s,t)\in\Delta_{[S,T]}\) and \(u\coloneqq(s+t)/2\). Assume there exists a process \((\mathcal{A}_{t})_{t\in[S,T]}\) such that, for any \(t\in[S,T]\) and any sequence of partitions \(\Pi_{k}=\{t_{i}^{k}\}_{i=0}^{N_{k}}\) of \([S,t]\) with mesh size going to zero, we have_
\[\mathcal{A}_{t}=\lim_{k\to\infty}\sum_{i=0}^{N_{k}-1}A_{t_{i}^{k},t_{i+1}^{k}}\text{ in probability.} \tag{2.4}\]
_Then there exists a map \(D\colon\Delta_{[S,T]}\to L^{m}\) and a constant \(C>0\), such that for all \((s,t)\in\Delta_{[S,T]}\),_
\[|\mathcal{A}_{t}-\mathcal{A}_{s}-A_{s,t}|\leqslant C\Gamma_{1}|t-s|^{\alpha_ {1}}\lambda(s,t)^{\beta_{1}}+D_{s,t}\text{ a.s.}\]
_and_
\[\|D_{s,t}\|_{L^{m}}\leqslant C\Gamma_{2}|t-s|^{1/2+\varepsilon}.\]
Link between Bm and fBm.For \(H\in(0,\frac{1}{2})\), there exist operators \(\mathcal{A}\) and \(\bar{\mathcal{A}}\), which can both be given in terms of fractional integrals and derivatives (see [25, Th. 11]), such that
\[\text{if $B$ is a fractional Brownian motion, $W=\mathcal{A}(B)$ is a Brownian motion,} \tag{2.5}\] \[\text{if $W$ is a Brownian motion, $B=\bar{\mathcal{A}}(W)$ is a fractional Brownian motion.} \tag{2.6}\]
Furthermore \(B\) and \(W\) generate the same filtration.
**Lemma 2.3**.: _[_2_, Lemma 7.2]_ _The operator \(\mathcal{A}\) continuously maps \((\mathcal{C}_{[0,T]},\|\cdot\|_{\infty})\) to itself._
**Definition 2.4**.: Let \(\mathbb{G}\) be a filtration. We say that \(B\) is a \(\mathbb{G}\)-fractional Brownian motion if \(W=\mathcal{A}(B)\) is a \(\mathbb{G}\)-Brownian motion.
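For intuition about the paths involved, fBm samples are easy to generate exactly. The following minimal sketch (an illustration only, with our own choice of \(H\), grid and seed) uses a Cholesky factorization of the fBm covariance \(\mathbb{E}[B_{s}B_{t}]=\frac{1}{2}(s^{2H}+t^{2H}-|t-s|^{2H})\):

```python
import numpy as np

# Exact (O(n^3)) fBm sampling via Cholesky factorization of the covariance.
rng = np.random.default_rng(1)
H, n, T = 0.2, 500, 1.0                      # Hurst index, grid size, horizon
t = np.linspace(T/n, T, n)                   # strictly positive times
cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
             - np.abs(t[:, None] - t[None, :])**(2*H))
B = np.linalg.cholesky(cov) @ rng.standard_normal(n)   # one path (B_t)
print(B[:3])                                 # first few values of the path
```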
Definition of a solution.As the drift in (1.1) is distributional, it is _a priori_ not clear how to define a solution. In the literature this is either done as below in Definition 2.5 (see [2, 3, 14]) or via nonlinear Young integrals (see Appendix A). The two definitions coincide as long as the solution fulfills a certain regularity (see [14, Remark 8.5] and [2, Theorem 2.14]).
**Definition 2.5**.: Let \(\beta\in\mathbb{R}\), \(b\in\mathcal{B}_{\infty}^{\beta}\), \(T>0\) and \(x_{0}\in\mathbb{R}^{d}\). We call a couple \(((X_{t})_{t\in[0,T]},(B_{t})_{t\in[0,T]})\) defined on some filtered probability space \((\Omega,\mathcal{G},\mathbb{G},\mathbb{P})\) a _weak solution_ to (1.1) on \([0,T]\), with initial condition \(x_{0}\), if
* \(B\) is a \(\mathbb{G}\)-fBm;
* \(X\) is adapted to \(\mathbb{G}\);
* there exists a process \((K_{t})_{t\in[0,T]}\) such that, a.s., \[X_{t}=x_{0}+K_{t}+B_{t}\text{ for all }t\in[0,T];\] (2.7)
* for every sequence \((b^{n})_{n\in\mathbb{N}}\) of smooth bounded functions converging to \(b\) in \(\mathcal{B}_{\infty}^{\beta-}\), we have that \[\sup_{t\in[0,T]}\left|\int_{0}^{t}b^{n}(X_{r})dr-K_{t}\right|_{n\to\infty}^{ \mathbb{P}}0.\] (2.8)
If the couple is clear from the context, we simply say that \((X_{t})_{t\in[0,T]}\) is a weak solution. If \(X\) is adapted to the filtration generated by \(B\), we call it a _strong solution_.
## 3 Main result and proof
Throughout this section we work under the following assumption:
**Assumption 3.1**.: _We say that \((b,\beta)\) satisfies Assumption 3.1 if \(b\) is a nonnegative distribution with_
\[b\in\mathcal{B}_{\infty}^{\beta}\text{ for }\beta\in\mathbb{R}\text{ with } \beta>-1/(2H). \tag{3.1}\]
Our main result reads as follows:
**Theorem 3.2**.: _Assume \((b,\beta)\) satisfies Assumption 3.1. Then there exists a weak solution to Equation (1.1)._
Combining Remark B.4 and Theorem 3.2, we get Theorem 1.1.
The remainder of the section is dedicated to proving Theorem 3.2. This will be done by regularizing the drift, considering the sequence of strong solutions to the corresponding approximated equations and proceeding via a tightness-stability argument. In order to do so we state two _a priori_ estimates in Lemma 3.3 below. This will be crucial in proving tightness and stability for the approximation scheme. For a proof of Lemma 3.3(a) for \(d=1\), see [2, Proposition 5.3]. Therein we use nonnegativity of the drift in order to obtain an increasing process. The same can be done here for dimension \(d\geqslant 1\), obtaining a process that is increasing in each component; we therefore omit the proof (for a similar argument see the proof of Lemma 3.3(b)).
**Lemma 3.3**.:
* _Assume that_ \((b,\beta)\) _satisfies Assumption_ 3.1_. Let_ \(m\geqslant 2\)_. Then there exists a constant_ \(C>0\) _such that for any_ \((s,t)\in\Delta_{[0,T]}\) _any weak solution_ \(X\) _to (_1.1_) fulfills_ \[\|X_{s,t}-B_{s,t}\|_{L^{m}}\leqslant C\,(1+\|b\|_{\mathcal{B}_{\infty}^{\beta} }^{2})(t-s)^{1+\beta H}.\]
* _Let_ \(\phi,h\in\mathcal{C}_{b}^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\cap\mathcal{ B}_{\infty}^{\beta}\) _such that_ \((\phi,\beta)\) _and_ \((h,\beta)\) _satisfy Assumption_ 3.1_. Let_ \(X\) _be the unique strong solution to (_1.1_) with drift_ \(\phi\)_. Let_ \(\delta\in(0,1+H\beta)\)_. Then there exists a constant_ \(C>0\) _which does not depend on_ \(x_{0}\)_,_ \(\phi\) _and_ \(h\)_, and there exists a nonnegative random variable_ \(Z\) _which satisfies_ \(\mathbb{E}[Z]\leqslant C\|h\|_{\mathcal{B}_{\infty}^{\beta}}(1+\|\phi\|_{ \mathcal{B}_{\infty}^{\beta}}^{2})\) _such that_ \[\left|\int_{s}^{t}h(X_{r})dr\right|\leqslant Z\,|t-s|^{\delta}.\] (3.2)
The following lemma is crucial in the proof of the above _a priori_ estimates, since it captures the regularization effect of the fBm. For a slightly more general statement and the proof see [17, Lemma 3.4]. The intuition behind stating the lemma is that the regularization effect of any solution \(X\) will be similar to the one of a fBm, since it maintains the oscillations of the fBm. This is because, by the _a priori_ estimates above, \(X-B\) is more regular than \(B\). For another perspective on this, note that any solution \(X\) has a jointly continuous local time by [7, Theorem 2.16].
**Lemma 3.4**.: _Let \(\gamma\in(-1/(2H),0)\), \(m\in[2,\infty)\) and \(d,e\in\mathbb{N}\). Then there exists a constant \(C>0\) such that for any \(0\leqslant S\leqslant T\), any \(\mathcal{F}_{S}\)-measurable random variable \(\Xi\) in \(\mathbb{R}^{e}\) and any bounded measurable function \(f:\mathbb{R}^{d}\times\mathbb{R}^{e}\to\mathbb{R}^{d}\) fulfilling_
1. \(\mathbb{E}\left[\left\|f(\cdot,\Xi)\right\|_{\mathcal{C}^{1}}^{m}\right]<\infty\)_,_
2. \(\mathbb{E}\left[\left\|f(\cdot,\Xi)\right\|_{\mathcal{B}^{\gamma}_{\infty}}^{m}\right]<\infty\)_,_
_we have for any \(t\in[S,T]\) that_
\[\left\|\int_{S}^{t}f(B_{r},\Xi)\,dr\right\|_{L^{m}}\leqslant C\left\|\left\|f(\cdot,\Xi)\right\|_{\mathcal{B}^{\gamma}_{\infty}}\right\|_{L^{m}}(t-S)^{1+H\gamma}. \tag{3.3}\]
Proof of Lemma 3.3(b).: By nonnegativity of \(\phi\), \(K=X-B\) is monotone in each component. In particular \((v,w)\mapsto|K_{w}-K_{v}|\) defines a random control. For \((s,t)\in\Delta_{[0,T]}\) let
\[A_{s,t}:=\int_{s}^{t}h(B_{r}+K_{s})\,dr.\]
We apply Lemma 2.2 to \(\mathcal{A}=K^{h}\) defined by \(K_{\cdot}^{h}=\int_{0}^{\cdot}h(X_{r})dr\). In order to see that all conditions are fulfilled, we will show the following for \(u\in[s,t]\) and some constant \(C>0\) independent of \(s,t\) and \(u\):
1. \(\|A_{s,t}\|_{L^{m}}\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}(t-s)^{1+H\beta}\);
2. \(|\mathbb{E}^{u}[\delta A_{s,u,t}]|\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}|K_{u}-K_{s}|(t-u)^{H(\beta-1)+1}\);
3. \(\sum_{i=0}^{N_{n}-1}A_{t_{i}^{n},t_{i+1}^{n}}\xrightarrow{a.s}K_{t}^{h}\) along any sequence of partitions \(\Pi_{n}=\{t_{i}^{n}\}_{i=0}^{N_{n}}\) of \([0,t]\) with mesh converging to \(0\).
Notice that \(1+H\beta>1/2\) and hence (i) gives (2.3). Furthermore, (ii) gives (2.2) for \(\alpha_{1}=H(\beta-1)+1>1/2-H\geqslant 0\), \(\beta_{1}=1\) and \(\lambda(s,t):=|K_{t}-K_{s}|\).
Assume, for the moment, that (i)-(ii)-(iii) hold true. Then by Lemma 2.2, there exists a process \(D\) such that
\[|K_{t}^{h}-K_{s}^{h}-A_{s,t}|\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}| K_{t}-K_{s}|(t-s)^{H(\beta-1)+1}+D_{s,t}, \tag{3.4}\]
with \(\|D_{s,t}\|_{L^{m}}\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}(t-s)^{1+H\beta}\). Hence, by (i) and Lemma 3.3(a) and as \(H(\beta-1)+1>0\)
\[\|K_{t}^{h}-K_{s}^{h}\|_{L^{m}} \leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}\|K_{t}-K_{s}\|_{L ^{m}}(t-s)^{H(\beta-1)+1}+C\|h\|_{\mathcal{B}^{\beta}_{\infty}}(t-s)^{1+H\beta}\] \[\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}(1+\|\phi\|_{ \mathcal{B}^{\beta}_{\infty}}^{2})(t-s)^{1+H\beta}.\]
The result now follows from Kolmogorov's continuity theorem.
Let us now verify (i)-(ii)-(iii).
Proof of (i): By Lemma 3.4 applied to \(\Xi=K_{s}\) and \(f(z,x)=h(z+x)\), we have
\[\|A_{s,t}\|_{L^{m}}\leqslant C\big{\|}\|h(\cdot+K_{s})\|_{\mathcal{B}^{\beta} _{\infty}}\big{\|}_{L^{m}}(t-s)^{1+H\beta}.\]
Using that \(\|h(\cdot+K_{s})\|_{\mathcal{B}^{\beta}_{\infty}}=\|h\|_{\mathcal{B}^{\beta} _{\infty}}\) (see [3, Lemma A.2] for \(d=1\), which easily generalizes to \(d>1\)), we thus get
\[\|A_{s,t}\|_{L^{m}}\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}(t-s)^{1+H \beta}. \tag{3.5}\]
Proof of (ii): From the local nondeterminism property of fractional Brownian motion (see Lemma 7.1 in [26]), we have that
\[\sigma_{s,t}^{2}\geqslant C(t-s)^{2H}, \tag{3.6}\]
where
\[\sigma_{s,t}^{2}I_{d}=\text{Var}(B_{t}-\mathbb{E}^{s}[B_{t}]).\]
Then by [17, Lemma 3.3] applied to \(\Xi=(K_{s},K_{u})\) and \(f(z,(x_{1},x_{2}))=h(z+x_{1})-h(z+x_{2})\), we obtain
\[|\mathbb{E}^{u}[\delta A_{s,u,t}]| =\Big{|}\int_{u}^{t}\mathbb{E}^{u}[h(B_{r}+K_{s})-h(B_{r}+K_{u})] dr\Big{|}\] \[=\Big{|}\int_{u}^{t}G_{\sigma^{2}_{u,r}}h(\mathbb{E}^{u}[B_{r}]+K _{s})-G_{\sigma^{2}_{u,r}}h(\mathbb{E}^{u}[B_{r}]+K_{u})dr\Big{|}\] \[\leqslant\int_{u}^{t}\|G_{\sigma^{2}_{u,r}}h\|_{\mathcal{C}^{1}} |K_{u}-K_{s}|dr.\]
We now use [3, Lemma A.3 (iv)] which again easily generalizes to \(d>1\). Note that it can be used as we can assume w.l.o.g. that \(\beta<0\). Hence
\[\|G_{\sigma^{2}_{u,r}}h\|_{\mathcal{C}^{1}}\leqslant C\|h\|_{\mathcal{B}^{ \beta}_{\infty}}(\sigma^{2}_{u,r})^{(\beta-1)/2}.\]
Combining the above with (3.6), and using that \(H(\beta-1)>-1\) to ensure integrability, gives
\[|\mathbb{E}^{u}[\delta A_{s,u,t}]| \leqslant C\int_{u}^{t}|r-u|^{H(\beta-1)}\|h\|_{\mathcal{B}^{ \beta}_{\infty}}|K_{u}-K_{s}|dr\] \[\leqslant C\|h\|_{\mathcal{B}^{\beta}_{\infty}}|K_{u}-K_{s}|(t-u )^{H(\beta-1)+1}.\]
Proof of (iii): For a sequence \(\Pi_{n}=\{t^{n}_{i}\}_{i=0}^{N_{u}}\) of partitions of \([0,t]\) with mesh size going to \(0\), we have
\[|K^{h}_{t}-\sum_{i=0}^{N_{n}-1}A_{t^{n}_{i},t^{n}_{i+1}}| \leqslant\sum_{i}\int_{t^{n}_{i}}^{t^{n}_{i+1}}|h(B_{r}+K_{r})-h(B_{r}+K_{t^{n}_{i}})|dr\] \[\leqslant\sum_{i}\int_{t^{n}_{i}}^{t^{n}_{i+1}}\|h\|_{\mathcal{C}^{1}}|K_{r}-K_{t^{n}_{i}}|dr\] \[\leqslant\sum_{i}\|h\|_{\mathcal{C}^{1}}(t^{n}_{i+1}-t^{n}_{i})|K_{t^{n}_{i+1}}-K_{t^{n}_{i}}|\] \[\leqslant\|h\|_{\mathcal{C}^{1}}|\Pi_{n}|\,|K_{t}-K_{0}|\stackrel{{n\to\infty}}{{\longrightarrow}}0\text{ a.s.}\]
Below we state the two propositions that ensure tightness and stability of the approximation scheme. The proofs are similar to the ones of [2, Proposition 7.5 and Proposition 7.7]. The only differences are allowing for \(d\geqslant 1\) instead of \(d=1\) and that Assumption 3.1 is weaker than the corresponding assumption in there. The latter is due to the crucial Lemma 3.3(b) being stated under Assumption 3.1. For the reader's convenience we prove both statements.
**Proposition 3.5** (Tightness).: _Assume \((b,\beta)\) fulfills Assumption 3.1. Let \((b^{n})_{n\in\mathbb{N}}\) be a sequence of smooth bounded functions converging to \(b\) in \(\mathcal{B}^{\beta-}_{\infty}\). For \(n\in\mathbb{N}\), let \(X^{n}\) be the strong solution to (1.1) with initial condition \(x_{0}\) and drift \(b^{n}\). Then there exists a subsequence \((n_{k})_{k\in\mathbb{N}}\) such that \((X^{n_{k}},B)_{k\in\mathbb{N}}\) converges weakly in the space \([\mathcal{C}_{[0,T]}]^{2}\)._
**Proposition 3.6** (Stability).: _Assume \((b,\beta)\) fulfills Assumption 3.1. Let \((\tilde{b}^{k})_{k\in\mathbb{N}}\) be a sequence of smooth bounded functions converging to \(b\) in \(\mathcal{B}^{\beta-}_{\infty}\). Let \(\tilde{B}^{k}\) have the same law as \(B\). We consider \(\tilde{X}^{k}\) to be the unique strong solution to (1.1) for \(B=\tilde{B}^{k}\), initial condition \(x_{0}\) and drift \(\tilde{b}^{k}\). We assume that there exist stochastic processes \(\tilde{X},\tilde{B}:[0,T]\to\mathbb{R}^{d}\) such that \((\tilde{X}^{k},\tilde{B}^{k})_{k\in\mathbb{N}}\) converges to \((\tilde{X},\tilde{B})\) on \([\mathcal{C}_{[0,T]}]^{2}\) in probability. Then \(\tilde{X}\) fulfills (2.7) and (2.8) from Definition 2.5._
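Before turning to the proofs, the approximation scheme itself is easy to visualize numerically. The sketch below (an illustration only; \(d=1\), \(b=\delta_{0}\), with our own choices of \(H\), mollification level and step size) mollifies the drift through the heat semigroup of (2.1), \(b^{n}=G_{1/n}\delta_{0}=g_{1/n}\), and runs an Euler scheme driven by an exact fBm sample, mimicking the strong solutions \(X^{n}\) of Proposition 3.5:

```python
import numpy as np

# Euler scheme for dX = b^n(X)dt + dB with b^n = g_{1/n} (mollified delta_0).
rng = np.random.default_rng(0)
H, T, m = 0.2, 1.0, 2000                       # Hurst index, horizon, steps
t = np.linspace(T/m, T, m)
cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
             - np.abs(t[:, None] - t[None, :])**(2*H))
B = np.concatenate(([0.0], np.linalg.cholesky(cov) @ rng.standard_normal(m)))

def b_n(x, n):                                  # g_{1/n}(x), cf. (2.1)
    return np.sqrt(n / (2*np.pi)) * np.exp(-0.5 * n * x**2)

dt = T / m
for n in (10, 100, 1000):                       # sharper and sharper drifts
    X = np.zeros(m + 1)                         # initial condition x_0 = 0
    for i in range(m):
        X[i+1] = X[i] + b_n(X[i], n)*dt + (B[i+1] - B[i])
    print(n, round(X[-1], 4))                   # samples of X_T^n
```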
Proof of Proposition 3.5.: Let \(K^{n}_{t}:=\int_{0}^{t}b^{n}(X^{n}_{r})dr\). For \(M>0\), let
\[A_{M}:=\{f\in\mathcal{C}_{[0,T]}:f(0)=0,\ |f(t)-f(s)|\leqslant M(t-s)^{1+H \beta},\ \forall(s,t)\in\Delta_{[0,T]}\}.\]
By the Arzelà-Ascoli theorem, \(A_{M}\) is compact in \(\mathcal{C}_{[0,T]}\). Applying Lemma 3.3(a) and Markov's inequality we get
\[\mathbb{P}(K^{n}\notin A_{M}) \leqslant\mathbb{P}(\exists(s,t)\in\Delta_{[0,T]}:|K^{n}_{s,t}|>M (t-s)^{1+H\beta})\] \[\leqslant C\sup_{n\in\mathbb{N}}\|b^{n}\|_{\mathcal{B}^{\beta}_{ \infty}}\left(1+\sup_{n\in\mathbb{N}}\|b^{n}\|_{\mathcal{B}^{\beta}_{\infty}}^ {2}\right)M^{-1}.\]
Hence, the sequence \((K^{n})_{n\in\mathbb{N}}\) is tight in \(\mathcal{C}_{[0,T]}\). So \((K^{n},B)_{n\in\mathbb{N}}\) is tight in \([\mathcal{C}_{[0,T]}]^{2}\). Thus by Prokhorov's Theorem, there exists a subsequence \((n_{k})_{k\in\mathbb{N}}\) such that \((K^{n_{k}},B)_{k\in\mathbb{N}}\) converges weakly in the space \([\mathcal{C}_{[0,T]}]^{2}\), and so does \((X^{n_{k}},B)_{k\in\mathbb{N}}\).
Proof of Proposition 3.6.: Assume w.l.o.g. that \(x_{0}=0\) and let \(\tilde{K}:=\tilde{X}-\tilde{B}\), so that (2.7) is automatically verified. Let now \((b^{n})_{n\in\mathbb{N}}\) be any sequence of smooth bounded functions converging to \(b\) in \(\mathcal{B}^{\beta-}_{\infty}\). In order to verify that \(\tilde{K}\) and \(\tilde{X}\) fulfill (2.8) from Definition 2.5, we have to show that
\[\lim_{n\to\infty}\sup_{t\in[0,T]}\left|\int_{0}^{t}b^{n}(\tilde{X}_{r})dr- \tilde{K}_{t}\right|=0\text{ in probability}. \tag{3.7}\]
By the triangle inequality we have that for \(k,n\in\mathbb{N}\) and \(t\in[0,T]\),
\[\left|\int_{0}^{t}b^{n}(\tilde{X}_{r})dr-\tilde{K}_{t}\right| \leqslant\left|\int_{0}^{t}b^{n}(\tilde{X}_{r})dr-\int_{0}^{t}b^{ n}(\tilde{X}_{r}^{k})dr\right|+\left|\int_{0}^{t}b^{n}(\tilde{X}_{r}^{k})dr- \int_{0}^{t}\tilde{b}^{k}(\tilde{X}_{r}^{k})dr\right|\] \[\quad+\left|\int_{0}^{t}\tilde{b}^{k}(\tilde{X}_{r}^{k})dr- \tilde{K}_{t}\right|=:A_{1}+A_{2}+A_{3}. \tag{3.8}\]
We show that all summands on the right hand side of (3.8) converge to \(0\) uniformly on \([0,T]\) in probability as \(n\to\infty\), choosing \(k=k(n)\) accordingly.
\(\mathbf{A_{1}}\): Notice that
\[\left|\int_{0}^{t}b^{n}(\tilde{X}_{r})dr-\int_{0}^{t}b^{n}(\tilde {X}_{r}^{k})dr\right| \leqslant\|b^{n}\|_{\mathcal{C}^{1}}\int_{0}^{t}|\tilde{X}_{r}- \tilde{X}_{r}^{k}|dr\] \[\leqslant\|b^{n}\|_{\mathcal{C}^{1}}\,T\sup_{t\in[0,T]}|\tilde{X} _{t}-\tilde{X}_{t}^{k}|.\]
The result follows as for any \(\varepsilon>0\), by assumption, we can choose a sequence \((k(n))_{n\in\mathbb{N}}\) such that
\[\mathbb{P}\Big{(}\|b^{n}\|_{\mathcal{C}^{1}}\,T\sup_{t\in[0,T]}|\tilde{X}_{t}- \tilde{X}_{t}^{k(n)}|>\varepsilon\Big{)}<\frac{1}{n},\ \forall n\in\mathbb{N}.\]
\(\mathbf{A_{2}}\): Let \(\beta^{\prime}\in(-1/(2H),\beta)\). By Lemma 3.3(b) applied to \(\tilde{X}^{k}\), \(h=b^{n}-\tilde{b}^{k}\) and \(\beta^{\prime}\) instead of \(\beta\), we know that there exists a random variable \(Z_{n,k}\) such that
\[\mathbb{E}[Z_{n,k}] \leqslant C\,\|b^{n}-\tilde{b}^{k}\|_{\mathcal{B}^{\beta^{\prime} }_{\infty}}(1+\|\tilde{b}^{k}\|_{\mathcal{B}^{\beta^{\prime}}_{\infty}}^{2})\] \[\leqslant C\,(\|b^{n}-b\|_{\mathcal{B}^{\beta^{\prime}}_{\infty}} +\|\tilde{b}^{k}-b\|_{\mathcal{B}^{\beta^{\prime}}_{\infty}})\,(1+\sup_{m\in \mathbb{N}}\|\tilde{b}^{m}\|_{\mathcal{B}^{\beta^{\prime}}_{\infty}}^{2}), \tag{3.9}\]
where \(C\) is independent of \(k\) and \(n\), and that we have
\[\sup_{t\in[0,T]}\left|\int_{0}^{t}b^{n}(\tilde{X}_{r}^{k})dr-\int_{0}^{t} \tilde{b}^{k}(\tilde{X}_{r}^{k})dr\right|\leqslant Z_{n,k}(1+T).\]
Using Markov's inequality and (3.9) we obtain that
\[\mathbb{P}\left(\sup_{t\in[0,T]}\left|\int_{0}^{t}b^{n}(\tilde{X}_ {r}^{k})dr-\int_{0}^{t}\tilde{b}^{k}(\tilde{X}_{r}^{k})dr\right|>\varepsilon \right)\leqslant\varepsilon^{-1}\,\mathbb{E}[Z_{n,k}]\,(1+T)\] \[\leqslant C\,\varepsilon^{-1}\,(1+T)\,(\|b^{n}-b\|_{\mathcal{B}^{ \beta^{\prime}}_{\infty}}+\|\tilde{b}^{k}-b\|_{\mathcal{B}^{\beta^{\prime}}_{ \infty}})\,(1+\sup_{m\in\mathbb{N}}\|\tilde{b}^{m}\|_{\mathcal{B}^{\beta^{ \prime}}_{\infty}}^{2}).\]
Choosing \(k=k(n)\) as before gives the convergence.
\(\mathbf{A_{3}}\): Recall that \(\tilde{X}_{t}^{k}=\int_{0}^{t}\tilde{b}^{k}(\tilde{X}_{r}^{k})dr+\tilde{B}_{t}^{k}\). Hence, we get that
\[\sup_{t\in[0,T]}\left|\int_{0}^{t}\tilde{b}^{k}(\tilde{X}_{r}^{k})dr-\tilde{K}_ {t}\right|\leqslant\sup_{t\in[0,T]}(|\tilde{X}_{t}^{k}-\tilde{X}_{t}|+|\tilde {B}_{t}^{k}-\tilde{B}_{t}|).\]
Since by assumption \((\tilde{X}^{k},\tilde{B}^{k})_{k\in\mathbb{N}}\) converges to \((\tilde{X},\tilde{B})\) on \([\mathcal{C}_{[0,T]}]^{2}\) in probability, we get the result.
Proof of Theorem 3.2.: Let \((b^{n})_{n\in\mathbb{N}}\) be a sequence of smooth bounded functions converging to \(b\) in \(\mathcal{B}_{\infty}^{\beta-}\). By Proposition 3.5, there exists a subsequence \((n_{k})_{k\in\mathbb{N}}\) such that \((X^{n_{k}},B)_{k\in\mathbb{N}}\) converges weakly in \([\mathcal{C}_{[0,T]}]^{2}\). W.l.o.g., we assume that \((X^{n},B)_{n\in\mathbb{N}}\) converges weakly. By the Skorokhod representation Theorem, there exists a sequence of random variables \((Y^{n},\hat{B}^{n})_{n\in\mathbb{N}}\) defined on a common probability space \((\hat{\Omega},\hat{\mathcal{F}},\hat{P})\), such that
\[\text{Law}(Y^{n},\hat{B}^{n})=\text{Law}(X^{n},B),\ \forall n\in \mathbb{N}, \tag{3.10}\]
and \((Y^{n},\hat{B}^{n})\) converges a.s. to some \((Y,\hat{B})\) in \([\mathcal{C}_{[0,T]}]^{2}\). As \(X^{n}\) solves the SDE (1.1) with drift \(b^{n}\), we know by (3.10) that \(Y^{n}\) also solves (1.1) with drift \(b^{n}\) and \(\hat{B}^{n}\) instead of \(B\). Since \(X^{n}\) is a strong solution, we have that \(X^{n}\) is adapted to \(\mathbb{F}^{B}\). Hence by (3.10), we know that \(Y^{n}\) is adapted to \(\mathbb{F}^{\hat{B}^{n}}\), as the conditional laws of \(Y^{n}\) and \(X^{n}\) agree, and therefore \(Y^{n}\) is a strong solution to (1.1) with \(\hat{B}^{n}\) instead of \(B\).
By Proposition 3.6, we know that \(Y\) fulfills (2.7) and (2.8) from Definition 2.5 with \(\hat{B}\) instead of \(B\). Clearly \(Y\) is adapted to the filtration \(\hat{\mathbb{F}}\) defined by \(\hat{\mathcal{F}}_{t}:=\sigma(Y_{s},\hat{B}_{s},s\in[0,t])\). By (2.6) and (2.5), we have
\[\hat{B}^{n} =\bar{\mathcal{A}}(W^{n})\ \text{a.s. and} \tag{3.11}\] \[W^{n} =\mathcal{A}(\hat{B}^{n})\ \text{a.s.,} \tag{3.12}\]
where \((W^{n})\) is a sequence of Brownian motions with \(\mathbb{F}^{\hat{B}^{n}}=\mathbb{F}^{W^{n}}\). By definition of a Brownian motion with respect to a filtration, for \((s,t)\in\Delta_{[0,T]}\), \(W^{n}_{t}-W^{n}_{s}\) is independent of \(\mathcal{F}^{W^{n}}_{s}=\mathcal{F}^{\hat{B}^{n}}_{s}=\sigma(Y^{n}_{r},\hat{B}^{n}_{r},r\in[0,s])\). By (3.11), (3.12), Lemma 2.3 and the a.s.-convergence of \(\hat{B}^{n}\), we know that \(W^{n}\) converges a.s. uniformly on \([0,T]\) to a Bm \(W\) such that \(\hat{B}=\bar{\mathcal{A}}(W)\). Moreover, \(\hat{B}\) and \(W\) generate the same filtration. Hence, we deduce that \(W_{t}-W_{s}\) is independent of \(\hat{\mathcal{F}}_{s}\), and so \(W\) is an \(\hat{\mathbb{F}}\)-Bm. Therefore, \(\hat{B}\) is an \(\hat{\mathbb{F}}\)-fBm and \(Y\) is adapted to \(\hat{\mathbb{F}}\). Hence \(Y\) is a weak solution.
## Appendix A Previous research on (1.1)
In this section we provide an overview of the results in recent years for Equation (1.1) in case of \(B\) being a fractional Brownian motion with \(H\neq 1/2\) (see Theorem A.1). A by now classical approach is to rewrite (1.1) as a nonlinear Young integral. Hence, we briefly describe this approach first.
Consider the rewritten SDE
\[X_{t}=x_{0}+\int_{0}^{t}b(X_{s})ds+B_{t}\iff\tilde{X}_{t}=x_{0}+\int_{0}^{t}b( \tilde{X}_{s}+B_{s})ds,\ \ \ t\in[0,T],x_{0}\in\mathbb{R}^{d}.\] (A.1)
For a continuous bounded vector valued function \(b:\mathbb{R}^{d}\to\mathbb{R}^{d}\), define the averaging operator \(T^{B}\) by
\[T^{B}_{t}b(x):=\int_{0}^{t}b(x+B_{r})\,dr,\ \ \text{for}\ (t,x)\in[0,T]\times \mathbb{R}^{d}.\] (A.2)
Then one can rewrite the integral on the right hand side of (A.1) as
\[\int_{0}^{t}b(\tilde{X}_{r}+B_{r})\,dr =\lim_{n\to\infty}\sum_{i=1}^{N_{n}-1}\int_{t_{i}^{n}}^{t_{i+1}^{n}}b(\tilde{X}_{t_{i}^{n}}+B_{r})\,dr =\lim_{n\to\infty}\sum_{i=1}^{N_{n}-1}T_{t_{i}^{n},t_{i+1}^{n}}^{B}b(\tilde{X}_{t_{i}^{n}}) =\int_{0}^{t}T_{dr}^{B}b(\tilde{X}_{r}),\] (A.3)
where the final equality is only formal at this point. One can give a rigorous definition of this so-called nonlinear Young integral. For a detailed review see [13].
The averaging operator \(T^{B}b\) can also be written as a convolution against the occupation measure of the noise. This viewpoint was taken in [2, 18]. Either way, for \(H\) sufficiently small (i.e. \(B\) oscillating sufficiently fast) one can extend the definition of the averaging operator \(T^{B}b\) to singular or even distributional \(b\). Then, given that the expression in (A.3) is well-defined, a solution to (1.1) can be defined to be a solution to the corresponding nonlinear Young integral equation. Note that the above has a deterministic flavor, as properties of each fractional Brownian path are needed. Stochastic sewing might still be useful to obtain regularity of the averaging operator (see [14, 18]). If the solution fulfills a regularity condition, the definition of a solution via nonlinear Young integrals and Definition 2.5 coincide (see [14, Remark 8.5] and [2, Theorem 2.14]).
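In symbols, writing \(\mu_{t}\) for the occupation measure of \(B\) on \([0,t]\), i.e. \(\mu_{t}(A)=\mathrm{Leb}\{r\in[0,t]:B_{r}\in A\}\), this reads (a standard identity, spelled out here for concreteness):

\[T^{B}_{t}b(x)=\int_{0}^{t}b(x+B_{r})\,dr=\int_{\mathbb{R}^{d}}b(x+y)\,\mu_{t}(dy).\]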
The following theorem gives an overview of the developments for Equation (1.1) and related equations in recent years. In (1), as well as partly in (2) and (3), the nonlinear Young integral approach described above was followed. To ensure readability, some results do not represent the full scope of the actual results proven. Below we partly also allow for time-dependent drift \(b\).
**Theorem A.1**.:
1. _[_8_, Theorem 1.9]_ _combined with_ _[_15_, Theorem 3.13]__: Let_ \(b\in\mathcal{B}_{\infty}^{\beta}\) _for_ \(\beta>1-1/(2H)\)_. Then there exists a strong solution to (_1.1_) and path-by-path uniqueness holds (i.e. uniqueness to the integral equation for almost every realization of the noise, giving a stronger notion of uniqueness than the classical notion of pathwise uniqueness)_
2. _[_2_, Theorem 2.8, Theorem 2.9 and Corollary 2.5]__: For_ \(d=1\) _and_ \(H<\sqrt{2}-1\)_, there exists a weak solution for nonnegative finite measures. Additionally there exists a weak solution for_ \(b\in\mathcal{B}_{\infty}^{\beta}\) _for_ \(\beta>1/2-1/(2H)\)_. Pathwise uniqueness and strong existence is shown for_ \(\beta\geqslant 1-1/(2H)\)_._
3. _[_14_, Theorem 1.4]__: Strong existence, path-by-path uniqueness, Malliavin differentiability and existence of a flow for time-dependent drift_ \(b\in L^{q}([0,T],\mathcal{B}_{\infty}^{\beta})\) _for_ \(q\in(1,2]\) _and_ \(\beta>1-(q-1)/(Hq)\)_. Additionally weak existence for_ \(b\in L^{q}([0,T],\mathcal{B}_{\infty}^{\beta})\) _for_ \(q\in(2,\infty]\) _and_ \(\beta>(1-(q-1)/(Hq))\vee(1/2-1/(2H))\) _is shown._
4. _[_7_, Theorem 2.11 and Theorem 2.14]__: For a finite measure_ \(b\)_: Existence of a weak solution for_ \(H<1/(1+d)\) _for any_ \(d\in\mathbb{N}\)_; pathwise uniqueness and existence of a strong solution for_ \(H<(\sqrt{13}-3)/2\) _and_ \(d=1\)_._
_Remark A.2_.: In particular, Theorem 1.1 can be seen as an extension of the existence result in (2) and as a slightly weaker result than (4).
Finally, let us mention that similar equations were also investigated, such as the case of local time drift [1, 4], distribution-dependent drift [15], multiplicative fractional noise [9], Lévy noise [19], "infinitely regularizing" noises [18] and regular noise [16].
## Appendix B Besov spaces
In this section we briefly recall the definition of Besov spaces.
**Definition B.1** (Partition of unity).: Let \(\chi,\rho\in C^{\infty}(\mathbb{R}^{d},\mathbb{R})\) be radial functions and for \(j\geqslant 0\), \(\rho_{j}(x)=\rho(2^{-j}x)\). We assume that \(\chi\) is supported on a ball around \(0\) and \(\rho\) is supported on an annulus. Moreover, we have
\[\chi+\sum_{j\geqslant 0}\rho_{j} \equiv 1,\] (B.1) \[\operatorname{supp}(\chi)\cap\operatorname{supp}(\rho_{j}) =\emptyset,\ \forall j\geqslant 1,\] (B.2) \[\operatorname{supp}(\rho_{j})\cap\operatorname{supp}(\rho_{i}) =\emptyset,\ \text{if}\ |i-j|\geqslant 2.\] (B.3)
Then we call the pair \((\chi,\rho)\) a partition of unity.
Existence of a partition of unity is proven in [5, Prop. 2.10]. Throughout the paper we fix such a partition.
**Definition B.2** (Littlewood-Paley blocks).: Let \(f\) be an \(\mathbb{R}^{d}\)-valued tempered distribution. We define its \(j\)-th Littlewood-Paley block by
\[\boldsymbol{\Delta}_{j}f=\begin{cases}\mathcal{F}^{-1}(\rho_{j}\mathcal{F}(f) )\ \text{ for }j\geqslant 0\,,\\ \mathcal{F}^{-1}(\chi\mathcal{F}(f))\ \text{ for }j=-1\,,\\ 0\ \text{ for }j\leqslant-2,\end{cases}\]
where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Fourier transform and its inverse.
**Definition B.3**.: For \(s\in\mathbb{R}\) and \(p\in[1,\infty]\), let the nonhomogeneous Besov space \(\mathcal{B}^{s}_{p}\) be the space of \(\mathbb{R}^{d}\)-valued tempered distributions \(u\) such that
\[\|u\|_{\mathcal{B}^{s}_{p}}:=\sup_{j\in\mathbb{Z}}2^{js}\|\boldsymbol{\Delta}_ {j}u\|_{L^{p}(\mathbb{R}^{d})}<\infty.\]
_Remark B.4_.: Note that any finite nonnegative measure lies in \(\mathcal{B}^{0}_{1}\) by similar computations as in [5, Proposition 2.39]. Therefore, after an embedding of Besov spaces, it lies in \(\mathcal{B}^{-d}_{\infty}\) as well.
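For the embedding step, here is a sketch of the standard Bernstein-inequality argument (not spelled out in the source): since each \(\boldsymbol{\Delta}_{j}u\) has Fourier support in a ball of radius of order \(2^{j}\),

\[2^{-jd}\,\|\boldsymbol{\Delta}_{j}u\|_{L^{\infty}(\mathbb{R}^{d})}\lesssim 2^{-jd}\,2^{jd}\,\|\boldsymbol{\Delta}_{j}u\|_{L^{1}(\mathbb{R}^{d})}\leqslant\|u\|_{\mathcal{B}^{0}_{1}},\]

and taking the supremum over \(j\) gives \(\|u\|_{\mathcal{B}^{-d}_{\infty}}\lesssim\|u\|_{\mathcal{B}^{0}_{1}}\).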
|
2309.12853 | Classifying Tractable Instances of the Generalized Cable-Trench Problem | Given a graph $G$ rooted at a vertex $r$ and weight functions, $\gamma ,
\tau: E(G) \to \mathbb{R}$, the generalized cable-trench problem (CTP) is to
find a single spanning tree that simultaneously minimizes the sum of the total
edge cost with respect to $\tau$ and the single-source shortest paths cost with
respect to $\gamma$. Although this problem is provably $NP$-complete in the
general case, we examine certain tractable instances involving various graph
constructions of trees and cycles, along with quantities associated to edges
and vertices that arise out of these constructions. We show that given a path
decomposition oracle, for graphs in which all cycles are edge disjoint, there
exists a fast method to determine the cable-trench tree. Further, we examine
properties of graphs which contribute to the general intractability of the CTP
and present some open questions in this direction. | Mya Davis, Carl Hammarsten, Siddarth Menon, Maria Pasaylo, Dane Sheridan | 2023-09-22T13:26:58Z | http://arxiv.org/abs/2309.12853v2 | # Classifying Tractable Instances of the Generalized
### Abstract
Given a graph \(G\) rooted at a vertex \(r\) and weight functions, \(\gamma,\tau:E(G)\rightarrow\mathbb{R}\), the generalized cable-trench problem (CTP) is to find a single spanning tree that simultaneously minimizes the sum of the total edge cost with respect to \(\tau\) and the single-source shortest paths cost with respect to \(\gamma\) [6, 7]. Although this problem is provably \(NP\)-complete in the general case, we examine certain tractable instances involving various graph constructions of trees and cycles, along with quantities associated to edges and vertices that arise out of these constructions. We show that given a path decomposition oracle, for graphs in which all cycles are edge disjoint, there exists a fast method to determine the cable-trench tree. Further, we examine properties of graphs which contribute to the general intractability of the CTP and present some open questions in this direction.
## 1 Introduction
We begin with a connected graph \(G=(V,E)\), where \(V\) is the set of vertices and \(E\) is the set of edges. Each edge is represented by the pair of vertices it adjoins, together with a weight function \(\gamma:E\rightarrow\mathbb{R}\). The _minimum spanning tree_ problem and the _single-source shortest paths_ problem are two problems in the study of combinatorial algorithms with efficient and well-studied solutions.
The _minimum spanning tree_ problem is the problem of finding a connected acyclic graph \(T=(V^{\prime},E^{\prime})\) with \(V^{\prime}=V\) and \(E^{\prime}\subseteq E\) (referred to as a spanning tree) such that the sum of the weights of the edges in \(E^{\prime}\) is minimized over all possible spanning trees. The problem is efficiently solved by Prim's algorithm, which builds the tree by placing vertices in a priority queue and repeatedly adding the cheapest edge between a vertex already inside the tree and a vertex adjacent to the tree.
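As an illustration of this priority-queue construction, here is a minimal Prim sketch in Python; the adjacency-list representation and the function name `prim_mst` are our own choices, not taken from the paper.

```
import heapq

def prim_mst(adj, start=0):
    """Minimum spanning tree of a connected weighted graph.

    adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns the list of tree edges (u, v, weight).
    """
    in_tree = {start}
    tree_edges = []
    # Candidate edges (weight, u, v) with u already inside the tree.
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # this edge would close a cycle
        in_tree.add(v)
        tree_edges.append((u, v, w))
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return tree_edges
```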
The _single-source shortest paths_ problem specifies a distinguished vertex, called the root \(r\), and asks to find a spanning tree \(T\) for which the distance of a root path from \(r\) to every vertex \(w\in V\) with respect to weight function \(\gamma\) is minimized for each \(w\) (by convention, the length of the path from \(r\) to \(r\) is \(0\)). The single-source shortest paths problem is efficiently solved by Dijkstra's algorithm. Similarly, vertices are placed in a priority queue and vertices are added to the shortest paths tree as the distance of the corresponding root path is updated.
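The corresponding Dijkstra sketch differs only in the priority: the queue is keyed by the tentative root-path length rather than by a single edge weight (again our own illustrative code, using the same adjacency-list format).

```
import heapq

def dijkstra_spt(adj, root=0):
    """Single-source shortest paths tree rooted at `root`.

    Returns (dist, parent): the shortest root-path length of each vertex
    and the parent pointers describing the shortest paths tree.
    """
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return dist, parent
```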
The _single constraint cable-trench problem_ (SCTP) asks for a spanning tree \(T\) that minimizes some linear combination of the cost of a minimum spanning tree \(W_{MST}(T)\) and the cost of the single-source shortest paths tree \(W_{SPT}(T)\) with respect to a root \(r\). The cable-trench cost \(cost_{SCTP}(T)\) is the linear combination with real coefficients \(\alpha\) and \(\beta\) defined by the following equation:
\[cost_{SCTP}(T)=\alpha W_{MST}(T)+\beta W_{SPT}(T)\]
This problem was shown to be \(NP\)-complete by noting that given any vertices \(s,t\), finding a minimum spanning tree for which the \(s-t\) path length is minimal is \(NP\)-hard [6]. Consequently, the cable-trench problem which minimizes the length of all \(r-w\) paths is certainly \(NP\)-complete. In this case however, there are reasonably efficient and effective algorithms that approximate optimal solutions depending on the ratio \(\frac{\alpha}{\beta}\).
In this article, we focus on the _generalized cable-trench problem_, which is a variation of the cable-trench problem in which there are two independent weights assigned to each edge. Essentially, there are functions \(\gamma:E\rightarrow\mathbb{R}\), \(\tau:E\rightarrow\mathbb{R}\), and the problem is to find a spanning tree which minimizes the sum of the weights of the single-source shortest paths tree with respect to \(\gamma\) and the minimum spanning tree with respect to \(\tau\). Colloquially, \(\gamma\) is called the cable cost function and \(\tau\) is called the trench cost function. Because edge weights can always be normalized with respect to \(\alpha\) and \(\beta\), we may omit the coefficients entirely. Subsequently, we aim to minimize the generalized cable-trench cost function \(cost(T)\):
\[cost(T)=W_{MST}(T,\tau)+W_{SPT}(T,\gamma)\]
It immediately follows that the generalized cable-trench problem is also \(NP\)-complete because of the complexity of the single constraint variation.
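For concreteness, evaluating \(cost(T)\) for a candidate spanning tree is straightforward; the sketch below (our own helper, with edges keyed by unordered vertex pairs) memoizes root-path cable lengths along parent pointers.

```
def ctp_cost(parent, gamma, tau, root=0):
    """Generalized cable-trench cost of a spanning tree.

    parent: dict mapping each vertex to its tree parent (root maps to None).
    gamma, tau: dicts mapping frozenset({u, v}) to cable / trench weights.
    """
    edge = lambda u, v: frozenset((u, v))
    trench = sum(tau[edge(v, p)] for v, p in parent.items() if p is not None)

    cable_to = {root: 0}  # root-path cable length of each vertex

    def cable(v):
        if v not in cable_to:
            p = parent[v]
            cable_to[v] = cable(p) + gamma[edge(v, p)]
        return cable_to[v]

    return trench + sum(cable(v) for v in parent)
```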
Vasko et al. [7] formulated the mixed integer linear program for the cable-trench problem as follows:
Minimize:
\[\sum_{i}\sum_{j}\gamma_{ij}x_{ij}+\sum_{i}\sum_{j>i}\tau_{ij}y_{ij}\]
subject to
\[\begin{aligned} &\sum_{j}x_{1j}=n-1\\ &\sum_{j}x_{ij}-\sum_{k}x_{ki}=-1, && i=2,3,\ldots,n\\ &\sum_{i<j}y_{ij}=n-1\\ &(n-1)y_{ij}-x_{ij}-x_{ji}\geq 0, && \text{for all edges }i<j\\ &x_{ij}\geq 0, && \forall i,j\\ &y_{ij}\in\{0,1\}, && \forall i,j.\end{aligned}\]
Here, \(x_{ij}\) denotes the number of cables between vertices \(i,j\), and \(y_{ij}\) is \(1\) if and only if a trench is dug between vertices \(i,j\). The first set of constraints ensures that exactly \(n-1\) cables leave the root. The second set ensures that a root path terminates at every non-root vertex \(i\) (i.e. every vertex has a cable laid _to it_ from the root). The third set of constraints ensures that \(n-1\) trenches are dug. The fourth set of constraints ensures that cables are laid only where trenches are dug, and the last two constraints guarantee positivity and integrality of the LP variables.
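A direct transcription of this program into the open-source PuLP modeling package might look as follows; the helper name, the assumption that \(\gamma\) and \(\tau\) are given as dictionaries on ordered pairs (with symmetric entries for each edge), and vertex \(1\) as the root are all our own conventions, so treat this as a sketch rather than the authors' implementation.

```
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

def ctp_milp(n, gamma, tau):
    """Sketch of the Vasko et al. MILP on vertices 1..n with root 1."""
    V = range(1, n + 1)
    arcs = [(i, j) for i in V for j in V if i != j and (i, j) in gamma]
    edges = [(i, j) for (i, j) in arcs if i < j]
    x = LpVariable.dicts("x", arcs, lowBound=0)     # cables laid on arc (i, j)
    y = LpVariable.dicts("y", edges, cat="Binary")  # trench dug on edge {i, j}

    prob = LpProblem("cable_trench", LpMinimize)
    prob += (lpSum(gamma[a] * x[a] for a in arcs)
             + lpSum(tau[e] * y[e] for e in edges))
    prob += lpSum(x[(1, j)] for j in V if (1, j) in gamma) == n - 1
    for i in V:
        if i != 1:  # a root path must terminate at every non-root vertex
            prob += (lpSum(x[(i, j)] for j in V if (i, j) in gamma)
                     - lpSum(x[(k, i)] for k in V if (k, i) in gamma)) == -1
    prob += lpSum(y[e] for e in edges) == n - 1
    for (i, j) in edges:  # cables only where trenches are dug
        prob += (n - 1) * y[(i, j)] - x[(i, j)] - x[(j, i)] >= 0

    prob.solve()
    return value(prob.objective)
```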
### Heuristic algorithms and inapproximability
Significant research has been done into heuristic algorithms that can approximate optimal spanning tree solutions to the cable-trench problem. In the general case, given the similarity of Dijkstra's solution to the single-source shortest paths problem and Prim's solution to the minimum spanning tree problem, a natural idea is to modify the algorithm such that vertices are added to a spanning tree with priority relative to the cable-trench cost that is added to the tree. Vasko et al. [7] analyzed such a modified algorithm in the case of large cable-trench networks and found that it reasonably approximated efficient solutions.
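A sketch of such a modified algorithm: grow one tree greedily, keying the priority queue on the marginal cable-trench cost of attaching a new vertex, i.e. the trench cost of the edge plus the full root-path cable cost it would create. This is our own illustrative rendering of the idea, not Vasko et al.'s exact procedure.

```
import heapq

def ctp_heuristic(adj, root=0):
    """Greedy Prim/Dijkstra hybrid for the generalized CTP.

    adj: dict mapping each vertex to a list of (neighbor, gamma, tau)
    triples.  Attaches, at each step, the vertex whose marginal cost
    tau(u, v) + cable(root -> u) + gamma(u, v) is smallest.
    """
    cable = {root: 0}
    tree = []
    heap = [(t + g, root, v, g) for v, g, t in adj[root]]
    heapq.heapify(heap)
    while heap and len(cable) < len(adj):
        _, u, v, g = heapq.heappop(heap)
        if v in cable:
            continue
        cable[v] = cable[u] + g
        tree.append((u, v))
        for w, gw, tw in adj[v]:
            if w not in cable:
                heapq.heappush(heap, (tw + cable[v] + gw, v, w, gw))
    return tree, cable
```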
In [4], Khuller, Raghavachari, and Young describe a CREW-PRAM algorithm which computes a spanning tree with a continuous tradeoff between minimum spanning tree cost and single-source shortest paths tree cost. For a given \(\lambda>0\), the algorithm approximates a minimum spanning tree up to a factor of \(1+\sqrt{2}\lambda\) and a single-source shortest paths tree up to a factor of \(1+\frac{\sqrt{2}}{\lambda}\). Further work by Benedito, Pedrosa, and Rosado [1] shows that there exists an \(\epsilon>0\) for which approximating an optimal solution up to a factor of \(1+\epsilon\) is NP-hard.
## 2 Definitions
**Definition 2.1**.: Recall that a **spanning tree** of a graph \(G=(V(G),E(G))\) is a connected, acyclic graph \(T=(V(T),E(T))\) such that \(V(T)=V(G)\) and \(E(T)\subseteq E(G)\).
**Definition 2.2**.: Given a path \(P\), we define the **trench length**\(\tau(P)\) as the weighted size of \(P\) with respect to the edge-weight function \(\tau:E(P)\rightarrow\mathbb{R}\).
\[\tau(P)=\sum_{e\in P}\tau(e)\]
**Definition 2.3**.: Given a path \(P\), we define the **cabling length**\(L(P)\) as the weighted length of a path \(P\) with respect to the edge-weight function \(\gamma:E(P)\rightarrow\mathbb{R}\).
\[L(P)=\sum_{e\in P}\gamma(e)\]
**Definition 2.4**.: Given a path \(P\), we define the **cabling cost**\(C(P)\) as the total \(\gamma\)-weight of all initial subpaths of \(P\) with respect to some orientation of its edges \((e_{1},\ldots,e_{n})\).
\[C(P)=\sum_{i=1}^{n}\sum_{j=1}^{i}\gamma(e_{j})\]
**Lemma 2.5**.: _Given paths \(A,B\), we can define the cabling cost of the path concatenation \(AB\) as follows:_
\[C(AB)=C(A)+L(A)|B|+C(B)\]
Proof.: Assume path \(AB\) comes with oriented edge list \((e_{1},\ldots,e_{n})\) where path \(A=(e_{1},\ldots,e_{k})\) and path \(B=(e_{k+1},\ldots,e_{n})\). The costs of cabling path \(A\) and path \(B\) are
\[C(A)=\sum_{i=1}^{k}\sum_{j=1}^{i}\gamma(e_{j})\]
\[C(B)=\sum_{i=k+1}^{n}\sum_{j=k+1}^{i}\gamma(e_{j})\]
and lengths of paths \(A\) and \(B\) are
\[L(A)=\sum_{i=1}^{k}\gamma(e_{i})\]
\[L(B)=\sum_{i=k+1}^{n}\gamma(e_{i})\]
By observation,
\[L(AB)=L(A)+L(B)\]
Then,
\[C(AB) =\sum_{i=1}^{n}\sum_{j=1}^{i}\gamma(e_{j})\] \[=\sum_{i=1}^{k}\sum_{j=1}^{i}\gamma(e_{j})+(n-k)\sum_{j=1}^{k}\gamma(e_{j})+\sum_{i=k+1}^{n}\sum_{j=k+1}^{i}\gamma(e_{j})\] \[=C(A)+L(A)|B|+C(B)\]
Essentially, we account for the cabling cost of \(A\) in the first term \(C(A)\). Every cable that passes through \(A\) and ends in \(B\) must traverse the entire length of \(A\), after which it is cabled normally and is accounted for by the cabling cost of \(B\) in the last term \(C(B)\). We thus think of the \(L(A)|B|\) term as **pre-cabling** path \(B\) through \(A\), which yields the recursive structure shown in Figure 1.
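A quick numeric sanity check of Lemma 2.5, with paths represented simply as lists of \(\gamma\)-weights (our own throwaway example data):

```
def L(path):  # cabling length: sum of gamma over the edges
    return sum(path)

def C(path):  # cabling cost: sum of all prefix sums
    total, prefix = 0, 0
    for g in path:
        prefix += g
        total += prefix
    return total

A, B = [3, 1, 4], [1, 5, 9, 2]  # gamma-weights along two paths
assert C(A + B) == C(A) + L(A) * len(B) + C(B)  # Lemma 2.5
```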
Informally, we use both **cable-trench solution** and **cable-trench tree** to refer to the minimum cost spanning tree with respect to edge weight functions \(\gamma\) and \(\tau\).
Lastly, we define the graph operation central to our study of the cable-trench problem:
**Definition 2.6**.: Given graphs \(G,H\), and vertices \(v_{G}\) on \(G\) and \(v_{H}\) on \(H\), we define the **wedge**\(G\wedge H\) (omitting the vertex information for brevity) as the graph formed by combining \(G,H\) such that vertices \(v_{G},v_{H}\) are identified as the same vertex.
## 3 Results on Wedging
Let \(G\) be a tree. The optimal cable-trench tree is the minimum cost spanning tree of \(G\). Since \(G\) is already a tree, the minimum cost cable-trench tree for \(G\) is \(G\) itself. We can easily compute the cable-trench tree for the wedge of two graphs \(G,H\) with known cable-trench solutions through a shared root:
**Proposition 3.1**.: _Given graphs \(G,H\) rooted at \(r_{G},r_{H}\), with known spanning trees \(T_{G}\), \(T_{H}\) that minimize total cable-trench cost, then the cable-trench tree for \(G\wedge H\) formed by identifying \(r_{G},r_{H}\) is the tree formed from wedging \(T_{G}\wedge T_{H}\) by identifying \(r_{G},r_{H}\)._
Proof.: Since \(G\wedge H\) is edge disjoint and its root is the same as \(G,H\), the cost of a spanning tree \(T\) can be decomposed as the sum of the costs of \(G\)'s spanning tree \(T_{G}\) and \(H\)'s spanning tree \(T_{H}\):
\[cost(T)=cost(T_{G})+cost(T_{H})\]
The costs of the subtrees \(T_{G}\) and \(T_{H}\) are independent of each other. In order to minimize the total sum, we minimize each subtree separately. Thus, the weight of the minimum cost spanning tree is \(cost(T_{G})+cost(T_{H})\). Therefore, the cable-trench tree for \(G\wedge H\) is \(T_{G}\wedge T_{H}\).
To further extend the complexity of graphs for which the cable-trench solution is tractably computable, we can consider the case in a graph with known cable-trench solution wedged at the root onto a cycle \(G\) at a vertex \(v\in V(G)\), where \(v\neq r\).
Given the diagram for cycle \(G\) in Figure 2, we define the following components of \(G\) for an edge \(e\):
* \(T\) is the spanning tree of \(G\) rooted at \(r\) that does not contain edge \(e\).
* \(v\) is the wedge vertex on \(G\) which is identified with the root of \(H\) in the graph \(G\wedge H\). We enforce \(v\neq r\). In the case where \(v=r\), refer to Proposition 3.1.
* \(P\) is the path from \(r\) to \(e\) that includes edge \(e\). \(Q\) is the path from \(e\) to \(v\) that does not include edge \(e\). \(R\) is the path from \(r\) to \(v\) that does not include \(e\). Note that all three paths \(P,Q,R\) are edge disjoint, and that \(P\cup Q\cup R=G\).
* All paths are assumed to originate from the root. When the direction of a path is ambiguous, notation \(Q^{+}\) indicates the path is oriented clockwise, and \(Q^{-}\) indicates the path is oriented counter-clockwise.
For notation purposes, if an edge has a subscript, the corresponding components in the decomposition in Figure 2 associated to this edge all share the same subscript (i.e. \(e_{i}\) specifies spanning tree \(T_{i}\), and splits \(G\) into paths \(P_{i},Q_{i}\), and \(R_{i}\)).
Figure 1: Cabling \(B\) through \(A\) requires laying \(k\) cables of length \(\sum_{i=1}^{n}e_{i}\) to reach the endpoint of \(A\), as well as the additional cabling cost of path \(B\). The cabling cost of concatenating paths \(A\) and \(B\) requires pre-cabling of path \(B\) through path \(A\).
When comparing two trees \(T_{1}\) and \(T_{2}\), we denote the path \(Q_{1}\cup Q_{2}\) by \(\tilde{Q}\). Note that the orientations of \(Q_{1}\) and \(Q_{2}\) must be the same for \(\tilde{Q}\) to be an oriented path. So, if we assume \(e_{1}\) is on the lower path and \(e_{2}\) is on the upper path, then the clockwise orientation from \(e_{2}\) to \(e_{1}\) on \(\tilde{Q}\) arises as \(\tilde{Q}^{+}=Q_{2}^{+}\cup Q_{1}^{+}\). Similarly, the counter-clockwise orientation from \(e_{1}\) to \(e_{2}\) arises as \(\tilde{Q}^{-}=Q_{1}^{-}\cup Q_{2}^{-}\). See Figure 3 for an example labeling.
The following results holds for wedging \(H\) with known cable-trench solution onto cycle \(G\):
**Lemma 3.2**.: _Let \(G\) be an \(n\)-cycle and \(H\) be a graph with known cable-trench tree \(T_{H}\). Consider the cable-trench tree \(T\) of \(H\wedge G\) formed by identifying vertices \(v\), \(r_{H}\). The subgraph of \(T\) induced by \(H\) is precisely \(T_{H}\)._
Proof.: Assume \(H\) has a cable-trench solution \(T_{H}\) and \(G\) is an \(n\)-cycle. Since \(G\wedge T_{H}\) is edge disjoint and \(T_{H}\) is wedged onto \(G\) at \(r_{H}\), the cable-trench solution internal to \(H\) is independent of \(G\), while the optimal tree internal to \(G\) depends only on the number of edges in \(T_{H}\). Thus, the cable-trench tree of \(H\) remains the same.
**Lemma 3.3**.: _Given the cable-trench tree \(T_{1}\) that deletes \(e_{1}\) as shown in cycle \(G\) in Figure 2, for any \(e_{2}\neq e_{1}\) on the path \(P_{1}\cup Q_{1}\), we have \(cost(T_{2}\wedge H)\geq cost(T_{1}\wedge H)\)._
Proof.: Suppose that \(cost(T_{2}\wedge H)<cost(T_{1}\wedge H)\). Given that \(e_{1}\) was the optimal edge to delete internal to \(G\), we know \(cost(T_{2})\geq cost(T_{1})\). In order to satisfy our assumption, the cost of wedging \(H\) must be higher when wedging onto \(T_{1}\) than onto \(T_{2}\). However, observe that \(R_{1}=R_{2}\) because \(e_{1}\) and \(e_{2}\) are on the same path. Thus, we refer to it generically as \(R\), and the additional cost of wedging \(H\) at \(v\) in both cases is \(L(R)|H|\). Therefore, \(cost(T_{2}\wedge H)\geq cost(T_{1}\wedge H)\), which contradicts our initial assumption.
Before using these lemmas to present an algorithm for computing the cable-trench solution for wedging a graph \(H\) with known cable-trench solution onto a cycle graph \(G\), we define the subroutine **CycleCTP** that takes a cycle graph \(G\) as an input and returns the edge \(e_{1}\) for which \(T_{1}\) is the cable-trench solution for \(G\) via a brute force minimization over all \(n\) spanning trees of \(G\). Consider the following algorithm that takes the following as input: a cycle \(G\), a vertex \(v\in G\), and a cable-trench
Figure 2: The cycle \(G\) and distinguished edge \(e\) with labeled components \(P,Q\), and \(R\).
solution \(T_{H}\) for graph \(H\) with root \(r_{H}\) to be wedged onto \(v\). Note that computing the cost of a given tree can be done in \(O(n)\) time.
```
\(e_{1}\leftarrow\) CycleCTP\((G)\)
\(\min\gets cost(T_{1}\wedge T_{H})\)    \(\triangleright\) Graphs wedged at \(v\)
for edge \(e_{2}\in R_{1}\) do
    if \(cost(T_{2}\wedge T_{H})<\min\) then    \(\triangleright\) Graphs wedged at \(v\)
        \(\min\gets cost(T_{2}\wedge T_{H})\)
    end if
end for
return min
```
**Algorithm 1** CycleWedgeCTP
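A runnable rendering of Algorithm 1 might look as follows. For clarity this sketch brute-forces every deleted edge rather than restricting the loop to \(R_{1}\) as Lemma 3.3 permits; the cycle encoding (vertices \(0,\ldots,n-1\) with root \(0\), edge \(k\) joining \(k\) and \(k+1\bmod n\)) and the packaging of \(H\) as a pair (number of cables routed into \(H\), cost internal to \(H\)) are our own conventions.

```
def cycle_wedge_ctp(gammas, taus, v, h_size, h_cost):
    """Cable-trench solution of an n-cycle with a graph H wedged at vertex v.

    gammas/taus: cable and trench weights of edge k = (k, k+1 mod n).
    h_size: number of cables routed into H; h_cost: cost internal to H.
    Returns (best_cost, index_of_deleted_edge).
    """
    n = len(gammas)
    best = None
    for k in range(n):  # spanning tree T_k deletes edge k
        trench = sum(taus) - taus[k]
        cable, dist_v = 0.0, 0.0
        for j in range(1, n):
            if j <= k:  # vertex j reached clockwise through edges 0..j-1
                d = sum(gammas[:j])
            else:       # reached counter-clockwise through edges j..n-1
                d = sum(gammas[j:])
            cable += d
            if j == v:
                dist_v = d
        total = trench + cable + dist_v * h_size + h_cost
        if best is None or total < best[0]:
            best = (total, k)
    return best
```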
Notice that by Lemma 3.3, we must only loop over edges on path \(R_{1}\), thus performing fewer computations than a naive brute-force approach. Based on the given cost function and the algorithm defined in Algorithm 1, we present the following proposition as a proof of correctness.
**Proposition 3.4**.: _Assume \(G\) is a cycle rooted at \(r\) with cable-trench solution \(T_{1}\) that excludes edge \(e_{1}\). Assume \(H\) is an arbitrary graph with known cable-trench solution \(T_{H}\). If we wedge \(H\) at its root onto \(G\) at a non-root vertex \(v\), the cable-trench solution for \(G\wedge H\) is \(T_{1}\wedge T_{H}\) as long as for all edges \(e_{2}\in E(G)\) that are not on path \(R_{1}\), we have:_
\[0\geq\tau(e_{2})-\tau(e_{1})+L(P_{2})-L(P_{1})+(L(P_{2})-L(P_{1}))|\tilde{Q}| +C(\tilde{Q}^{+})-C(\tilde{Q}^{-})+(L(R_{1})-L(R_{2}))|H|\]
Proof.: Following the convention of Figure 2, the total cable-trench cost of the spanning tree \(T_{1}\) in \(G\) is:
\[\tau(E(G))-\tau(e_{1})+C(P_{1}^{-})-L(P_{1})+C(R_{1}^{+})+L(R_{1})|Q_{1}|+C(Q_{ 1}^{+})\]
So, when we consider the spanning tree \(T_{1}\wedge T_{H}\) in \(G\wedge H\) the resulting total cable-trench cost may be computed as:
\[\tau(E(G))-\tau(e_{1})+C(P_{1}^{-})-L(P_{1})+C(R_{1}^{+})+L(R_{1})|Q_{1}|+C(Q_ {1}^{+})+L(R_{1})|H|+cost(T_{H})\]
Here, the \(L(R_{1})|H|\) term arises as pre-cabling all of the cables that are internal to \(T_{H}\) backward from the root of \(H\) to the root of \(G\).
By Lemmas 3.2 and 3.3, we know only an edge \(e_{2}\) which is not on \(R_{1}\) could possibly result in a lower cost cable-trench tree \(T_{2}\wedge T_{H}\). Similarly, we compute the total cable-trench cost of \(T_{2}\wedge T_{H}\) as follows:
\[\tau(E(G))-\tau(e_{2})+C(P_{2}^{+})-L(P_{2})+C(R_{2}^{-})+L(R_{2})|Q_{2}|+C(Q_ {2}^{-})+L(R_{2})|H|+cost(T_{H})\]
If the cable-trench cost of excluding edge \(e_{1}\) were to be cheaper than when excluding edge \(e_{2}\), the difference of the above total cable-trench costs must be non-positive. Hence, after some direct
cancellations and rearranging of terms, we have the following inequality:
\[0 \geq\tau(e_{2})-\tau(e_{1})\] \[+\ L(P_{2})-L(P_{1})\] \[+\ [C(R_{1}^{+})-C(P_{2}^{+})]-[C(R_{2}^{-})-C(P_{1}^{-})]\] \[+\ L(R_{1})|Q_{1}|-L(R_{2})|Q_{2}|\] \[+\ C(Q_{1}^{+})-C(Q_{2}^{-})\] \[+\ L(R_{1})|H|-L(R_{2})|H|\]
Now, observe that \(R_{1}^{+}\) is the concatenation of \(P_{2}^{+}\) and \(Q_{2}^{+}\). That is, \(R_{1}^{+}=P_{2}^{+}Q_{2}^{+}\). Similarly, \(R_{2}^{-}=P_{1}^{-}Q_{1}^{-}\). Hence, by Lemma 2.5 we have:
\[C(R_{1}^{+})-C(P_{2}^{+}) =L(P_{2})|Q_{2}|+C(Q_{2}^{+})\] \[C(R_{2}^{-})-C(P_{1}^{-}) =L(P_{1})|Q_{1}|+C(Q_{1}^{-})\]
And so, after plugging these in and more rearranging of terms, the inequality becomes:
\[0 \geq\tau(e_{2})-\tau(e_{1})+\] \[+\ L(P_{2})-L(P_{1})\] \[+\ [L(P_{2})|Q_{2}|+L(R_{1})|Q_{1}|]-[L(P_{1})|Q_{1}|+L(R_{2})|Q_{2 }|]\] \[+\ C(Q_{2}^{+})-C(Q_{1}^{-})\] \[+\ C(Q_{1}^{+})-C(Q_{2}^{-})\] \[+\ L(R_{1})|H|-L(R_{2})|H|\]
Now, by the additivity of \(L\), we have that \(L(R_{1})=L(P_{2})+L(Q_{2})\). So, recalling that we use \(\tilde{Q}=Q_{1}\cup Q_{2}\) (as sets of edges), it follows that:
\[L(P_{1})|Q_{1}|+L(R_{2})|Q_{2}| =L(P_{1})|Q_{1}|+L(P_{1})|Q_{2}|+L(Q_{1})|Q_{2}|\] \[=L(P_{1})|\tilde{Q}|+L(Q_{1})|Q_{2}|\]
Similarly, we also get:
\[L(P_{2})|Q_{2}|+L(R_{1})|Q_{1}|=L(P_{2})|\tilde{Q}|+L(Q_{2})|Q_{1}|\]
And hence the inequality becomes:
\[0 \geq\tau(e_{2})-\tau(e_{1})\] \[+\ L(P_{2})-L(P_{1})\] \[+\ L(P_{2})|\tilde{Q}|-L(P_{1})|\tilde{Q}|\] \[+\ [C(Q_{2}^{+})+L(Q_{2})|Q_{1}|+C(Q_{1}^{+})]\] \[-\ [C(Q_{1}^{-})+L(Q_{1})|Q_{2}|+C(Q_{2}^{-})]\] \[+\ L(R_{1})|H|-L(R_{2})|H|\]
Finally, noting that our orientation convention gives \(\tilde{Q}^{+}=Q_{2}^{+}Q_{1}^{+}\) and \(\tilde{Q}^{-}=Q_{1}^{-}Q_{2}^{-}\), we can use Lemma 2.5 again to get:
\[C(\tilde{Q}^{+})=C(Q_{2}^{+})+L(Q_{2})|Q_{1}|+C(Q_{1}^{+})\]
\[C(\tilde{Q}^{-})=C(Q_{1}^{-})+L(Q_{1})|Q_{2}|+C(Q_{2}^{-})\]
So our inequality becomes:
\[0 \geq \tau(e_{2})-\tau(e_{1})\] \[+ L(P_{2})-L(P_{1})\] \[+ L(P_{2})|\tilde{Q}|-L(P_{1})|\tilde{Q}|\] \[+ C(\tilde{Q}^{+})-C(\tilde{Q}^{-})\] \[+ L(R_{1})|H|-L(R_{2})|H|\]
**Remark:** The inequality can be interpreted as being divided into pairs of terms, each pair representing one component of the difference in cost between \(T_{2}\) and \(T_{1}\):
* \(\tau(e_{2})-\tau(e_{1})\) represents the difference in trench length. \(T_{2}\) does not have to 'dig the trench' through \(e_{2}\), and \(T_{1}\) does not have to dig the trench through \(e_{1}\).
* In both \(T_{1}\) and \(T_{2}\), paths \(P_{1}\) and \(P_{2}\) have to be cabled, and they are both cabled from the same direction (originating from root \(r\)). However, in \(T_{1}\), the last cable along edge \(e_{1}\) does not need to be laid, as the edge is excluded from the tree. Similarly, \(T_{2}\) does not include the last edge on its path. Therefore, while the cost of cabling \(P_{1}\) and \(P_{2}\) cancel, \(L(P_{2})-L(P_{1})\) remains as the costs arising from including edges \(e_{1}\) and \(e_{2}\) on paths \(P_{1}\) and \(P_{2}\) respectively.
* The term \((L(P_{2})-L(P_{1}))|\tilde{Q}|\) represents the difference in the cost of pre-cabling \(\tilde{Q}\). In \(T_{1}\), we lay cables to \(\tilde{Q}\) with respect to one orientation and thus pre-cable it through \(P_{2}\). While in \(T_{2}\), we lay cables to \(\tilde{Q}\) with respect to the opposite orientation and thus pre-cable it through \(P_{1}\).
* \(C(\tilde{Q}^{+})-C(\tilde{Q}^{-})\) accounts for the difference in cabling region \(\tilde{Q}\) with respect to the two different possible orientations arising from \(T_{2}\) and \(T_{1}\).
* Finally, \(L(R_{1})|H|-L(R_{2})|H|\) represents the difference in the pre-cabling costs for graph \(H\) through either \(R_{1}\) or \(R_{2}\). Note that the cost of the continuation of these cables internal to \(H\) cancels out as they are the same in either case of the cable-trench solution restricted to \(G\) being \(T_{1}\) or \(T_{2}\).
Observe that our particular choice of \(v\) only really impacts the last element of the inequality. The remainder of the inequality is largely dependent on our choices of \(e_{1},e_{2}\). As such, if we were to generalize the above argument to wedging multiple graphs onto a cycle at multiple points, we would expect the majority of the contribution to come from this \(\tilde{Q}\) path in between our choices of \(e_{1}\) and \(e_{2}\).
Now, consider the case where we wedge multiple graphs with known cable-trench solutions at distinct vertices \(v_{1},v_{2},...,v_{k}\) of the \(n\)-cycle \(G\), where \(v_{i}\neq r\) for all \(i\).
In the general case, let \(\mathcal{H}=\{H_{1},\ldots,H_{k}\}\) be a set of graphs for which each graph \(H_{i}\) has known cable-trench solution. Let \(\mathcal{V}=\{v_{1},\ldots,v_{k}\}\) be a set of vertices, where \(v_{i}\) is the vertex on \(G\) which is identified with the root of \(H_{i}\) in the graph \(G\wedge H_{i}\). We define the set \(H(\tilde{Q})\subseteq\mathcal{V}\) as the set of vertices on path \(\tilde{Q}\) that are contained in \(\mathcal{V}\).
Adapting the notation conventions specified earlier, for an edge \(e\in G\), denote by \(R^{(v)}\) the path from root \(r\) to \(v\) that does not include \(e\).
**Theorem 3.5**.: _Assume \(G\) with root \(r\) has cable-trench solution \(T_{1}\) for some \(e_{1}\). We wedge graphs \(H_{v_{1}},H_{v_{2}},\ldots,H_{v_{n}}\) at distinct vertices \(v_{1},v_{2},\ldots,v_{n}\), where \(H_{v_{i}}\) has cable-trench solution \(S_{i}\). The minimum cable-trench tree for \(G\wedge H_{v_{1}}\wedge H_{v_{2}}\wedge\cdots\wedge H_{v_{n}}\) is \(T_{1}\wedge S_{1}\wedge S_{2}\wedge\cdots\wedge S_{n}\) if for all edges \(e_{2}\neq e_{1}\):_
\[0\geq\tau(e_{2})-\tau(e_{1})+ L(P_{2})-L(P_{1})+(L(P_{2})-L(P_{1}))|\tilde{Q}|+C(\tilde{Q}^{+})-C( \tilde{Q}^{-}) \tag{1}\] \[+\sum_{v\in H(\tilde{Q})}(L(R_{2}^{(v)})-L(R_{1}^{(v)}))|H_{v}|\]
Proof.: The details of the proof are essentially parallel to the proof of Proposition 3.4. Recall the decomposition of \(G\) specified by \(e_{1}\) and our arbitrary choice of \(e_{2}\neq e_{1}\).
In these terms, the cost (internal to \(G\)) of the cable-trench tree deleting \(e_{1}\) is:
\[cost(T_{1})=\tau(E(G))-\tau(e_{1})+C(P_{1})-L(P_{1})+C(P_{2})+L(P_{2})|Q|+C(Q^{+})\]
The analogous cost for the cable-trench tree deleting \(e_{2}\) is:
\[cost(T_{2})=\tau(E(G))-\tau(e_{2})+C(P_{2})-L(P_{2})+C(P_{1})+L(P_{1})|Q|+C(Q^{-})\]
The cost of each tree \(T_{1}\) and \(T_{2}\), internal to just \(G\), is done analogously to Proposition 3.4, with the only difference being the contribution of the costs of trees \(S_{1},\ldots,S_{n}\).
We will not switch to the tree deleting \(e_{2}\) if \(cost(T_{1}\wedge S_{1}\wedge\cdots\wedge S_{n})\leq cost(T_{2}\wedge S_{1} \wedge\cdots\wedge S_{n})\), as doing so will increase the cost of our cable-trench tree. When we take the difference of these two quantities, observe that any graph \(H_{i}\) for which \(v_{i}\) is not on path \(\tilde{Q}\) is cabled the same way, and therefore its cable-trench cost will cancel in the difference \(cost(T_{1}\wedge S_{1}\wedge\cdots\wedge S_{n})-cost(T_{2}\wedge S_{1}\wedge \cdots\wedge S_{n})\). The only meaningful contributions arise from vertices \(v_{i}\in\tilde{Q}\). Thus, the contribution of these graphs to the difference is precisely:
\[\sum_{v\in H(\tilde{Q})}(L(R_{2}^{(v)})-L(R_{1}^{(v)}))|H_{v}|\]
Figure 3: Wedging multiple graphs onto non-root vertices on the cycle \(G\).
As for each graph \(H_{i}\) such that \(v_{i}\in\tilde{Q}\), we must take the difference between pre-cabling \(|H_{i}|\) cables along path \(R_{2}^{(v_{i})}\) versus along path \(R_{1}^{(v_{i})}\).
The same algebraic manipulations as in the proof of Proposition 3.4 yield that \(cost(T_{1}\wedge S_{1}\wedge\cdots\wedge S_{n})\leq cost(T_{2}\wedge S_{1}\wedge\cdots\wedge S_{n})\) if and only if the desired inequality (1) holds, in which case the minimum cable-trench tree is \(T_{1}\wedge S_{1}\wedge S_{2}\wedge\cdots\wedge S_{n}\).
In each case, finding the particular edge \(e_{2}\) for which deleting \(e_{2}\) would yield a cheaper total cost of the cable-trench tree involves (in the worst case) verifying the inequality over all possible values of \(e_{2}\in R_{1}\). This is captured by the loop condition in Algorithm 1.
## 4 The Strength Index of a Graph
Notice that when wedging a graph \(H\) onto \(G\), the only extra cost within \(G\) is the cost of cabling from the root to the wedge vertex \(v\).
**Lemma 4.1**.: _If \(L(R_{2})>L(R_{1})\), then there is no \(e_{2}\) such that \(cost(T_{2})<cost(T_{1})\)._
Proof.: Since, by assumption, \(T_{1}\) is the cable-trench solution internal to the graph \(G\), the following inequality must hold:
\[0\leq\tau(e_{1})-\tau(e_{2})+ C(R_{2})-C(R_{1})+(C(P_{2})-L(P_{2}))-(C(P_{1})-L(P_{1}))\] \[+L(R_{2})|Q_{2}|-L(R_{1})|Q_{1}|+C(Q_{2})-C(Q_{1})\]
Suppose there exists an edge \(e_{2}\) satisfying the condition in the statement of Lemma 4.1. Then, once we wedge graph \(H\) onto \(G\), if \(cost(T_{2})<cost(T_{1})\), then from Proposition 3.4:
\[0>\tau(e_{1})-\tau(e_{2}) +C(R_{2})-C(R_{1})+(C(P_{2})-L(P_{2}))-(C(P_{1})-L(P_{1}))+(L(R_{ 2})-L(R_{1}))|\tilde{Q}|\] \[+C(Q_{2})-C(Q_{1})+(L(R_{2})-L(R_{1}))|H|\]
Thus, \(L(R_{2})-L(R_{1})<0\) must hold in order for \(cost(T_{2})<cost(T_{1})\) to hold for a given \(e_{2}\).
In essence, if \(L(R_{2})>L(R_{1})\), then no matter how large the graph \(H\) is, the cable-trench solution internal to \(G\) will never change. Intuitively, switching to \(T_{2}\) would mean that graph \(H\) is cabled through path \(R_{2}\). If \(L(R_{2})>L(R_{1})\), the cost of adding \(H\) increases with no savings to the spanning tree internal to \(G\). We can expand this into an idea of the _strength_ of a cycle \(G\).
**Definition 4.2**.: The **strength** of an edge-vertex pair is denoted \(\sigma(v,e)\). The vertex \(v\) is the wedge vertex and the edge \(e\in R_{1}\) is the edge which we will compare against \(e_{1}\). The value of \(\sigma(v,e)\) is the size of the vertex set for the largest graph \(H\) that can be wedged onto \(G\) such that the cable-trench solution remains \(T_{1}\wedge T_{H}\).
As shown in Lemma 4.1, if \(L(R_{2})>L(R_{1})\), then \(H\) can be any size, so we define \(\sigma(v,e)=\infty\) for the \(v,e\) pair.
**Definition 4.3**.: The **vertex strength** of a wedge vertex \(v\) is \(\sigma(v)=\min\{\sigma(v,e):e\in R\}\).
The vertex strength is essentially the size of the largest tree that can be wedged onto \(G\) such that \(e_{1}\) remains the best edge to exclude in a cable-trench solution for the composite graph.
**Definition 4.4**.: The **breaking edge** is the edge \(e\) on \(R_{1}\) with \(\sigma(v,e)=\sigma(v)\).
The breaking edge is called as such since it is the first edge to "break" when enough weight is added onto \(H\).
**Lemma 4.5**.: _Given cycle \(G\) with cable-trench solution \(T_{1}\) obtained via deletion of edge \(e_{1}\), if the wedge vertex \(v\) is chosen so that \(L(R_{2})<L(R_{1})\), then there exists a breaking edge \(e_{2}\) on the path \(R_{1}\) (the path from the root to the wedge vertex not through \(e_{1}\))._
Proof.: For arbitrary graph \(H\) with internal cable-trench solution \(T_{H}\), consider the inequality comparing \(T_{1}\wedge T_{H}\) and \(T_{2}\wedge T_{H}\):
\[0\leq\tau(e_{1})-\tau(e_{2}) +C(R_{2})-C(R_{1})+(C(P_{2})-L(P_{2}))-(C(P_{1})-L(P_{1}))+(L(R_{ 2})-L(R_{1}))|\tilde{Q}|\] \[+C(Q_{2})-C(Q_{1})+(L(R_{2})-L(R_{1}))|H|\]
This inequality is the case when \(cost(T_{2}\wedge T_{H})\leq cost(T_{1}\wedge T_{H})\). All of the terms except the last are fixed for our choice of \(e_{1}\) and \(e_{2}\).
When \(L(R_{2})<L(R_{1})\), then \((L(R_{2})-L(R_{1}))|H|\) is negative, so eventually for a large enough \(|H|\):
\[0>\tau(e_{1})-\tau(e_{2})+L(P_{1})-L(P_{2})+(L(P_{1})-L(P_{2}))|\tilde{Q}|+C(Q ^{-})-C(Q^{+})+(L(R_{2})-L(R_{1}))|H|\]
and thus \(T_{2}\wedge T_{H}\) is the better cable-trench solution.
**Theorem 4.6**.: _Once a breaking edge \(e_{1}\) is determined, there will never be a distinct edge \(e_{2}\) on \(P_{1}\cup Q_{1}\) such that \(cost(T_{2}\wedge H)<cost(T_{1}\wedge H)\)._
Proof.: Follows immediately from Lemma 3.3.
In the broader algorithmic context, the strength indices and breaking edge of a graph \(G\) are internal properties of \(G\), and thus for a fixed graph \(G\), can be precomputed. When wedging graph \(H\) onto \(G\), the spanning tree computation reduces to a simple check of whether or not \(|H|\) exceeds the strength of the graph, and if so, we can immediately determine the optimal spanning tree of the wedge via the breaking edge. In essence, these internal graph properties offer more savings in Algorithm 1 as the cost computation is drastically simplified.
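The resulting query logic is tiny. In the sketch below, `edge_strength` solves the Proposition 3.4 inequality for \(|H|\): `delta_fixed` collects every \(|H|\)-independent term (nonnegative when \(T_{1}\) is internally optimal), and the \(|H|\)-dependent term is \((L(R_{2})-L(R_{1}))|H|\). Both helper names and the packaging of the precomputed data are hypothetical, our own illustration.

```
import math

def edge_strength(delta_fixed, L_R1, L_R2):
    """sigma(v, e): largest |H| for which the internal tree T_1 survives."""
    if L_R2 >= L_R1:
        return math.inf  # Lemma 4.1: no H ever flips the tree via this edge
    return math.floor(delta_fixed / (L_R1 - L_R2))

def vertex_strength(candidates):
    """candidates: one (delta_fixed, L_R1, L_R2) triple per edge on R_1.

    Returns (sigma(v), index of the breaking edge).
    """
    best = (math.inf, None)
    for idx, triple in enumerate(candidates):
        s = edge_strength(*triple)
        if s < best[0]:
            best = (s, idx)
    return best
```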
## 5 The Theta Graph
To demonstrate the _local-to-global_ notion that we established in Section 3, we will broaden the classes of graphs for which we can efficiently solve the generalized cable-trench problem. We will consider a variant of a cycle graph which we will refer to as a \(\theta\) graph.
Such a graph consists of two vertices (a root vertex \(r\) and another vertex \(v\)) connected through 3 edge-disjoint paths (which, for now, we refer to as \(X_{1},X_{2},X_{3}\)). Observe that in such a graph, any spanning tree is uniquely specified by removing an edge from two distinct paths, i.e. \((e_{i},e_{j})\) where \(e_{i}\in X_{i}\) and \(e_{j}\in X_{j}\) for \(i\neq j\). Figure 4 demonstrates the analogous decomposition for \(\theta\)-graphs as we had in Figure 2.
Suppose, without loss of generality, that we are certain to exclude an edge \(e_{3}\) on path \(P_{3}\cup Q_{3}\). By Proposition 3.1, we can essentially ignore \(P_{3}\) in the computation that follows. We can simply
determine the cable-trench solution for the cycle \(R_{1,3}\cup R_{2,3}\), and determine whether \(|Q_{3}|\) exceeds its strength. Iterating this procedure over all \(e_{3}\in P_{3}\cup Q_{3}\) and repeating the procedure for (again without loss of generality) \(e_{2}\in P_{2}\cup Q_{2}\), then taking the minimum over all computed costs yields the best solution for this class of graphs.
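Structurally, and ignoring the strength-based shortcuts, this enumeration is a double loop over path pairs and their edges. The sketch below delegates the cost of a fixed spanning tree to a caller-supplied callback `tree_cost`, which is our own interface for illustration.

```
from itertools import combinations

def theta_ctp(paths, tree_cost):
    """Exhaustive cable-trench search on a theta graph.

    paths: the three edge-disjoint r-v paths, each given as a list of
    edge identifiers.  tree_cost(e_i, e_j) returns the cable-trench cost
    of the spanning tree deleting e_i and e_j.  Every spanning tree
    deletes exactly one edge from exactly two of the three paths.
    """
    best = None
    for a, b in combinations(range(3), 2):
        for e_i in paths[a]:
            for e_j in paths[b]:
                c = tree_cost(e_i, e_j)
                if best is None or c < best[0]:
                    best = (c, e_i, e_j)
    return best
```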
Further, we can attempt to extend the notions of strength to this class of graphs to extend our local-to-global principle further.
### \(\theta\)-analogs of the strength index and breaking edges
Without loss of generality, for some \(\theta\)-graph \(G\), suppose that the internal cable-trench solution excludes some fixed edges \(e_{1}\in P_{1}\cup Q_{1}\) and \(e_{2}\in P_{2}\cup Q_{2}\).
**Definition 5.1**.: For a vertex \(v\) and an edge \(e_{3}\in R_{1,2}\), define the **first edge strength**\(\sigma_{1}(v,e_{3})\) as the largest \(h\) such that there exists a graph \(H\) wedged onto \(G\) at \(v\) with \(|H|=h\) for which \(cost(T_{1,2}\wedge H)<cost(T_{1,3}\wedge H)\) and \(cost(T_{1,2}\wedge H)<cost(T_{2,3}\wedge H)\) both hold.

**Definition 5.2**.: For a vertex \(v\) and edge \(e_{3}\in R_{1,2}\), define the **second edge strength**\(\sigma_{2}(v,e_{3})\) as the largest \(h\) such that there exists a graph \(H\) wedged onto \(G\) at \(v\) with \(|H|=h\) for which exactly _one_ of \(cost(T_{1,2}\wedge H)<cost(T_{1,3}\wedge H)\) and \(cost(T_{1,2}\wedge H)<cost(T_{2,3}\wedge H)\) holds.
Note that \(\sigma_{1}(v,e)\leq\sigma_{2}(v,e)\). We use this to define the strengths of a vertex \(v\) by minimizing over all edges \(e\in R_{1,2}\), hence \(\sigma_{1}(v)=\min_{e\in R_{1,2}}\sigma_{1}(v,e)\) and \(\sigma_{2}(v)=\min_{e\in R_{1,2}}\sigma_{2}(v,e)\). With these in hand, we obtain a generalization of Lemma 4.1.
**Lemma 5.3**.: _As in Lemma 4.1:_
1. _If both_ \(L(R_{2,3})>L(R_{1,2})\) _and_ \(L(R_{1,3})>L(R_{1,2})\)_, then_ \(\sigma_{1}(v)=\sigma_{2}(v)=\infty\)_._
2. _If exactly one of_ \(L(R_{2,3})>L(R_{1,2})\) _or_ \(L(R_{1,3})>L(R_{1,2})\) _holds, then_ \(\sigma_{2}(v)=\infty\) _and_ \(\sigma_{1}(v)<\infty\)_._
Proof.: We first present a proof of (1). We partition our \(\theta\) graph into the following path segments, fixing some choices of edges \(e_{1}\in R_{2,3}\), \(e_{2}\in R_{1,3}\) and \(e_{3}\in R_{1,2}\).
Without loss of generality, we assume that the cable-trench solution internal to this graph removes (fixed) edges \(e_{1},e_{2}\). Again, without loss of generality, we will show that \(L(R_{1,3})>L(R_{1,2})\) implies that there exists no edge \(e_{3}\in R_{1,2}\) such that \(cost(T_{1,3}\wedge H)\) is cheaper than \(cost(T_{1,2}\wedge H)\) (a similar argument will show that if \(L(R_{2,3})>L(R_{1,2})\) then there is no \(e_{3}\in R_{1,2}\) such that \(cost(T_{2,3}\wedge H)\) is cheaper than \(cost(T_{1,2}\wedge H)\)).
The total cost of \(T_{1,2}\wedge H\) can be computed as:
\[(\tau(E(G))-\tau(e_{1})-\tau(e_{2})) +C(P_{1})+C(P_{2})+C(R_{1,2})+L(R_{1,2})(|H|+|Q_{1}|+|Q_{2}|)\] \[+C(Q_{1}^{-})+C(Q_{2}^{-})+cost(H)\]
We can further decompose \(C(R_{1,2})\) as follows:
\[C(R_{1,2})=C(P_{3})+L(P_{3})+C(e_{3})+(L(P_{3})+L(e_{3}))|Q_{3}|+C(Q_{3}^{+})\]
The total cost of \(T_{1,3}\wedge H\) can similarly be computed as:
\[(\tau(E(G))-\tau(e_{1})-\tau(e_{3})) +C(P_{1})+C(P_{3})+C(R_{1,3})+L(R_{1,3})(|H|+|Q_{1}|+|Q_{3}|)\] \[+C(Q_{1}^{-})+C(Q_{3}^{-})+cost(H)\]
We can further decompose \(C(R_{1,3})\) as follows:
\[C(R_{1,3})=C(P_{2})+L(P_{2})+C(e_{2})+(L(P_{2})+L(e_{2}))|Q_{2}|+C(Q_{2}^{+})\]
If \(H\) is sufficiently large, such that \(T_{1,3}\) is eventually cheaper in the above expression, then we expect the difference of the two expressions to be positive (i.e. \(cost(T_{1,2}\wedge H)-cost(T_{1,3}\wedge H)>0\)). After cancelling out terms in both expressions, this assumption implies that for arbitrary \(|H|\):
\[(\tau(e_{3})-\tau(e_{2})) \tag{2}\] \[+L(P_{3})+L(e_{3})+L(P_{3})|Q_{3}|+L(e_{3})|Q_{3}|+L(R_{1,2})(|H| +|Q_{1}|+|Q_{2}|)\] (3) \[-L(P_{2})-L(e_{2})-L(P_{2})|Q_{2}|-L(e_{2})|Q_{2}|-L(R_{1,3})(|H| +|Q_{1}|+|Q_{3}|)\] (4) \[+C(Q_{2}^{-})-C(Q_{2}^{+})+C(Q_{3}^{+})-C(Q_{3}^{-})>0 \tag{5}\]
Again, as in Proposition 3.4, (2) accounts for the difference in trench lengths, (3) and (4) account for the difference in contributions from paths \(R_{1,3}\) and \(R_{1,2}\), and (5) accounts for the disoriented regions \(Q_{2}\) and \(Q_{3}\).
We are assuming that \(T_{1,2}\) is the internal cable-trench solution for \(G\) (i.e. the cable-trench solution when \(|H|=0\)). Therefore, when \(|H|=0\), we have that the difference is actually \(\leq 0\). Thus, if the inequality is satisfied, we _must_ have that:
\[L(R_{1,2})|H|-L(R_{1,3})|H|>0\]
Since \(|H|>0\), in order for there to exist a suitable \(e_{3}\), we must have that \(L(R_{1,2})>L(R_{1,3})\). If not, then no such edge exists. If we apply the same reasoning to the tree \(T_{2,3}\), we get that in order for there to exist a suitable \(e_{3}\), we must have that \(L(R_{1,2})>L(R_{2,3})\). Thus, if \(L(R_{2,3})>L(R_{1,2})\) and \(L(R_{1,3})>L(R_{1,2})\), then for all \(|H|\) there is no cheaper tree. Thus, \(\sigma_{1}(v)=\sigma_{2}(v)=\infty\).
In order to prove the second case of the lemma, we simply note that we can apply the above rationale to exactly one of the pairs \(R_{2,3},R_{1,2}\) or \(R_{1,3},R_{1,2}\). Without loss of generality say that
Figure 4: The \(\theta\) graph \(G\) along with distinguished edges \(e_{1},e_{2}\) along with labeled components for the corresponding decomposition.
\(L(R_{2,3})>L(R_{1,2})\) and \(L(R_{1,3})\leq L(R_{1,2})\). Then, for some choice of \(e_{3}\in R_{1,2}\) we can just solve for \(|H|\) to find a graph where \(T_{1,3}\) is less costly than \(T_{1,2}\), and then minimize over all \(e_{3}\in R_{1,2}\).
**Lemma 5.4**.: _Let \(G\) be a \(\theta\) graph with disjoint \(r,v\) paths \(R_{2,3}\), \(R_{1,3}\) and \(R_{1,2}\). Without loss of generality, suppose that the cable-trench solution excludes edges \(e_{1}\in R_{2,3}\) and \(e_{2}\in R_{1,3}\). Then for all graphs \(H\) wedged onto \(G\) at \(v\), if the cable-trench solution of \(G\wedge H\) excludes edges \(e_{1}^{\prime}\in R_{2,3}\) and \(e_{2}^{\prime}\in R_{1,3}\), then \(e_{1}^{\prime}=e_{1}\) and \(e_{2}^{\prime}=e_{2}\)._
Proof.: The proof is analogous to the proof of Lemma 3.3. Given that the cable-trench solution for \(G\wedge H\) excludes edges \(e_{1}^{\prime}\in R_{2,3}\) and \(e_{2}^{\prime}\in R_{1,3}\), the contribution of wedging graph \(H\) at vertex \(v\) is
\[L(R_{1,2})|H|+cost(H) \tag{6}\]
as root paths from \(r\) to the vertices in \(H\) must be cabled through \(R_{1,2}\). We observe that in any spanning tree which excludes an edge on both \(R_{2,3}\) and \(R_{1,3}\) (and thus routes the cables to \(H\) in (6) through \(R_{1,2}\)) the contribution of \(H\) is the same. Thus, the choices of \(e_{1}^{\prime}\) and \(e_{2}^{\prime}\) must be the optimal internal cable-trench solution to \(G\), as the contribution of \(H\) is equal for all pairs of edges in \(R_{2,3}\times R_{1,3}\).
By assumption, \(e_{1}\in R_{2,3}\) and \(e_{2}\in R_{1,3}\) are edges removed in the cable-trench solution of \(G\), and therefore are optimal to remove internal to \(G\). Therefore, we must have that \(e_{1}^{\prime}=e_{1}\) and \(e_{2}^{\prime}=e_{2}\).
Note that Lemma 5.4 makes no claims on the specific edges excluded in a cable-trench solution of \(G\wedge H\)_if_ the paths \((R_{2,3},R_{1,3},\) or \(R_{1,2})\) of the excluded edges differ from the paths of excluded edges from \(G\).
In essence, the fact that \(\theta\) graphs admit tractable solutions is a direct consequence of the results of Sections 3 and 4. By showing that similar constructions exist for \(\theta\) graphs, we expect that computing the cable-trench solution for a graph in which two vertices are connected by 4 edge-disjoint paths should be tractable as well, by a similar argument as presented at the beginning of the section. The strength discussion enables us to further extend the class of graphs for which we can quickly compute the cable-trench solution.
## 6 Intractability of CTP in General Graphs
The generalization of the strength precomputation to \(\theta\) graphs demonstrates how the 'inductive' use of strength can extend the class of graphs that admit fast solutions. However, the increased complexity presents itself in the increasingly expensive precomputation of strength statistics about the graph. Further, the class of graphs attainable using the above methods is quite delicate; the methods described above would not clearly generalize when moving the wedge vertex \(v\) so that it lies entirely on one of the three paths, \(R_{2,3}\), \(R_{1,3}\), or \(R_{1,2}\), in the \(\theta\) graph.
We return to our discussion of graphs for which we can compute the cable-trench solution quickly, and will refer to such graphs as **tractable**. We also describe a class of graphs that, given an oracle providing some structural information about the graph, admits an efficient scheme for solving the generalized cable-trench problem. We will call such graphs **tractable with an oracle**.
### Tractability in cactus graphs
As shown in Section 3, given a collection of cycles and trees, we can iteratively construct a graph \(G\) by wedging cycles and trees onto it and, in parallel, keep track of the cable-trench solution in polynomial time. Such a construction ensures a graph in which all cycles are edge-disjoint (i.e. each edge is in at most one cycle). These graphs are called _cactus graphs_.
Keeping consistent with existing literature, a **decomposition** of a graph \(G\) is defined as a collection of edge-disjoint subgraphs \(G_{1},G_{2},\ldots,G_{m}\) such that the edge sets \(E(G_{1}),\ldots,E(G_{m})\) form a partition of the edge set of \(G\). We say that this is a **path decomposition** if each subgraph is either a path or a cycle. Every graph clearly admits a path decomposition, and the following theorem of Lovasz [5] characterizes the size of such a decomposition.
**Theorem 6.1**.: _A graph \(G\) on \(n\) vertices (not necessarily connected) can be decomposed into \(\lfloor n/2\rfloor\) paths and cycles._
As an algorithm, finding such a decomposition is closely related to the pathwidth metric on graphs. Although computing the pathwidth of arbitrary graphs is NP-complete, there exist fixed-parameter tractable algorithms (i.e. if the pathwidth is known to be bounded above, there exist tractable algorithms to compute it)[2][3].
Further, the decomposition itself is quite delicate, as we require a sort of _maximal_ decomposition, one in which every pair of components intersects in at most one vertex. Such a decomposition must exist for a cactus graph essentially by definition of a cactus graph. Given an oracle that finds such a path decomposition for cactus graphs, we obtain the following corollary as a consequence of Theorem 3.5.
**Corollary 6.2**.: _Cactus graphs are tractable with an oracle._
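In practice such an oracle is easy to realize for cactus graphs; for instance, with the networkx library the fundamental cycles from `cycle_basis` are automatically edge-disjoint on a cactus, and everything outside them is a tree edge. The function below is our own sketch of such an oracle, including a cheap cactus check.

```
import networkx as nx

def cactus_components(G):
    """Decompose a cactus graph into its cycles and remaining tree edges."""
    cycles = nx.cycle_basis(G)
    cycle_edges = set()
    for cyc in cycles:
        edges = {frozenset((cyc[i], cyc[(i + 1) % len(cyc)]))
                 for i in range(len(cyc))}
        if edges & cycle_edges:
            raise ValueError("two cycles share an edge: G is not a cactus")
        cycle_edges |= edges
    tree_edges = [e for e in G.edges if frozenset(e) not in cycle_edges]
    return cycles, tree_edges
```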
**Example 6.3**.: Consider the graph in Figure 5. If we were to wedge cycle \(A\) to tree \(B\) rooted at \(r_{B}\), we could determine the cable-trench solution. At this point, we could wedge it to \(C\) at \(r_{C}\), and continue doing this until we wedged the graph \(A\wedge B\wedge C\wedge D\wedge E\) onto \(F\) at point \(r_{F}\). At each step we know the optimal weight spanning tree, which allows us to wedge the next component while maintaining the cable-trench solution.
Figure 5: Illustrative example of the ‘maximal’ decomposition of a graph into cycles \(A,D,F\) and trees \(B,C,E\).
**Observation 6.4**.: Let \(T\) be a spanning tree for a cactus graph \(G\). Given a subset of the vertices \(V^{\prime}\), if we look at the induced subgraph \(G^{\prime}\subseteq G\), and take the intersection \(T^{\prime}=G^{\prime}\cap T\), we note that \(T^{\prime}\) is necessarily a spanning tree of \(G^{\prime}\).
Essentially, our knowledge of spanning trees of individual components of \(G\) determines the total number of spanning trees of the entire graph.
### Identifying multiple vertices in graph constructions
We have seen that for cycles, trees, restricted cases of \(\Theta\) graphs, and graphs constructible by wedging the previous graphs under specific conditions, the cable-trench solution can be found quickly. In general, a brute-force computation over all spanning trees is always available, and Kirchhoff's result on counting spanning trees identifies when this search ranges over a reasonably small set of spanning trees. A natural next step is to consider the point at which finding the cable-trench solution becomes computationally difficult.
Of course, a necessary condition for a family of graphs to be _intractable_ is for the family to admit an exponential number of spanning trees. The example we will use is the \(2\times m\) grid graph (\(G[2,m]\)). By a simple inductive argument, the number of spanning trees of the \(G[2,m]\) grid graph is \(\Omega(2^{m})\).
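For concreteness, the following short sketch (ours, with an arbitrary vertex indexing) counts the spanning trees of \(G[2,m]\) via Kirchhoff's matrix-tree theorem and exhibits the exponential growth in \(m\):

```python
import numpy as np

def spanning_tree_count(edges, n):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

def grid_edges(m):
    """Edges of G[2, m] with vertices indexed as (row i, column j) -> i*m + j."""
    idx = lambda i, j: i * m + j
    E = [(idx(0, j), idx(1, j)) for j in range(m)]                     # rungs
    E += [(idx(i, j), idx(i, j + 1)) for i in range(2) for j in range(m - 1)]
    return E

for m in range(2, 9):
    print(m, spanning_tree_count(grid_edges(m), 2 * m))
# Prints 4, 15, 56, 209, 780, 2911, 10864: each count is roughly
# (2 + sqrt(3)) times the previous one.
```

The printed counts satisfy the recurrence \(t_{m}=4t_{m-1}-t_{m-2}\), so they grow like \((2+\sqrt{3})^{m}\), confirming the exponential lower bound above.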
Interestingly, we note that there exist spanning trees \(T\) of the \(2\times(m+1)\) grid such that the induced subgraph \(T^{\prime}\subset T\) restricted to the \(2\times m\) grid is not a tree. This graph does not satisfy the property established in Observation 6.4, and as a consequence, an algorithm dependent on decomposing \(G[2,m]\) should not be expected to consistently find the cable-trench solution.
Since iteratively wedging graphs together at chosen roots preserves the tractability of the graph, and any arbitrary graph admits a path decomposition, there must be a connection (or a class of connections) between graph components for which the graph is no longer tractable. We conjecture that families of graphs with exponentially many spanning trees, and those whose construction necessitates identifying paths and cycles at multiple points, do not easily admit tractable solution schemes.
|
2309.13581 | CDF II W-mass anomaly and SO(10) GUT | The W-mass anomaly has yet to be established, but a huge proliferation of
articles on the subject has established the rich potential of such an event. We
investigate the SO(10) GUT constraints from the recently reported W-mass
anomaly. We consider both Supersymmetric (SUSY) and non-supersymmetric
(non-SUSY) grand unified theories by studying renormalization group equations
(RGEs) for gauge coupling unification and their predictions on proton decay. In
the non-SUSY models, single-stage unification is possible if one includes a
light (around TeV) real triplet Higgs scalar. However, these models predict
speedy proton decay, inconsistent with the present experimental bound on the
proton decay. This situation may be improved by including newer scalars and new
intermediate-mass scales, which are present in the $SO(10)$ GUTs. The standard
model is extended to a left-right symmetric model (LR), and the scale of LR
breaking naturally introduces the intermediate scale in the model. A
single-stage unification is possible even without including any triplet Higgs
scalar in a minimal supersymmetric standard model. | Purushottam Sahu, Hiranmaya Mishra, Prasanta K. Panigrahi, Sudhanwa Patra, Utpal Sarkar | 2023-09-24T08:47:44Z | http://arxiv.org/abs/2309.13581v1 | # CDF II W-mass anomaly and SO(10) GUT
###### Abstract
The W-mass anomaly has yet to be established, but the huge proliferation of articles on the subject has established the rich potential of such an event. We investigate the SO(10) GUT constraints from the recently reported W-mass anomaly. We consider both supersymmetric (SUSY) and non-supersymmetric (non-SUSY) grand unified theories by studying the renormalization group equations (RGEs) for gauge coupling unification and their predictions for proton decay. In the non-SUSY models, single-stage unification is possible if one includes a light (around TeV) real triplet Higgs scalar. However, these models predict speedy proton decay, inconsistent with the present experimental bound on the proton lifetime. This situation may be improved by including newer scalars and new intermediate mass scales, which are present in the \(SO(10)\) GUTs. The standard model is extended to a left-right symmetric model (LR), and the scale of LR breaking naturally introduces the intermediate scale in the model. A single-stage unification is possible even without including any triplet Higgs scalar in the minimal supersymmetric standard model.
## I Introduction
The Standard Model (SM) of Particle Physics, one of the particle physics community's greatest treasures, has successfully explained nearly all experimental data up to the current accelerator energy scale. At the same time, it fails to answer theoretical concerns such as the origin of non-zero neutrino masses, the matter-antimatter asymmetry of the universe, dark matter and dark energy, and so on, suggesting that SM is not the final theory. Alternatively, precision measurements of observables may be critical in testing the SM and shedding information on the possibility of Beyond Standard Model (BSM) physics. The CDF II collaboration's most recent precision measurement indicated a shift in W boson mass [1]:
\[m_{W}^{\rm CDF}=\big{(}80.433\pm 0.0064_{\rm stat}\pm 0.0069_{\rm syst}\big{)}{ \rm GeV}\,.\]
This CDF II data clearly shows disagreement with the SM prediction and is also in tension with the earlier global data from LEP, CDF, D0, and ATLAS, which give a W boson mass of \(m_{W}=(80.357\pm 0.006)\) GeV [2].
Many proposals have been investigated to account for the shift in the W boson mass; the most straightforward extension among them is to incorporate a real scalar triplet without any hypercharge, which explains the requisite shift in the W boson mass without impacting the Z boson mass. Although Ref. [3] made the initial proposal to explain the CDF W mass anomaly by extending the SM with a real scalar triplet with zero hypercharge, the detailed phenomenology of a scalar triplet with zero hypercharge in the context of the CDF II W boson mass anomaly was studied in Ref. [4]. Interestingly, the relation of the W-boson mass anomaly to grand unified theories such as SU(5) has been investigated by adding complex scalar triplets without hypercharge [5; 6; 7]. In [5] it was claimed that an \(SU(5)\) GUT with the minimal representations \(24_{H},5_{H},\overline{5}^{a}_{F},10^{a}_{F},\overline{10}^{a}_{F}\) can address the W boson mass anomaly while replicating the SM fermion masses and mixings. Another group performed a similar analysis within the SU(5) GUT by adding a scalar triplet and a fermion triplet without hypercharge [6], considering representations such as \(24_{H},24_{F},5_{H},\overline{5}^{a}_{F},10^{a}_{F}\). Extending the ideas of explaining the W-boson mass anomaly and its connection to grand unified theories, we intend to investigate the \(SO(10)\) GUT with and without intermediate symmetry-breaking steps, with implications for gauge coupling unification, experimental constraints on proton decay, neutrino mass, and the universe's matter-antimatter asymmetry, among other things. We explore a non-supersymmetric \(SO(10)\) GUT with direct breaking to the SM and explain the W-boson mass anomaly by introducing an additional scalar degree of freedom at the TeV scale. For the analysis, the SO(10) representations are \(SO(10):10_{H}\); \(16^{a}_{F}\); \(45_{H}\) for the direct breaking of \(SO(10)\) to the SM, and \(SO(10):10_{H}\); \(16^{a}_{F}\); \(126_{H}\); \(45_{H}\) for an intermediate left-right symmetric (LRSM) breaking between SO(10) and the SM.
Left-right symmetric models [8; 9; 10; 11] are believed to be one of the promising extensions of the SM that offer a reasonable explanation for parity violation. In such models, the gauge symmetry is enlarged to \(SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\), with left-handed leptons and quarks transforming as \(SU(2)_{L}\) doublets and their right-handed counterparts as \(SU(2)_{R}\) doublets. Due to the inclusion of right-handed neutrinos to complete the gauge multiplets, these models not only shed light on the cause of parity violation but also explain small neutrino masses using the type-I [12; 13; 14; 15] or type-II [16; 17; 18; 11] seesaw
mechanism. Furthermore, they provide a mathematical description of the fermion hypercharge quantum numbers derived from baryon number minus lepton number (B-L) charges and the third component of the right-handed isospin. In this letter, we investigate the scenario of the \(SO(10)\) GUT, which not only accommodates the CDF II W boson anomaly but is also able to unify all matter fields (including the right-handed neutrino) of each generation in a single \(SO(10)\) representation, for example, the spinorial \(16_{F}\).
The paper is organized as follows. In section II, we briefly introduce the explanation of the CDF W mass anomaly by extending the SM with an isospin \(SU(2)_{L}\) triplet scalar, to make this study self-contained. In section III, we examine a novel symmetry breaking of the non-supersymmetric SO(10) GUT without any intermediate symmetry, including an \(SU(2)_{L}\) scalar triplet with zero hypercharge from the electroweak scale onwards to accommodate the W mass anomaly, and study the evolution of the gauge coupling constants of the SM gauge symmetry and the proton decay predictions. In section IV, to address the proton decay constraints while accommodating the W mass anomaly, we embed the framework in a non-SUSY SO(10) grand unified theory with intermediate left-right symmetry, which has the potential to explain neutrino mass and the matter-antimatter asymmetry of the universe. We discuss supersymmetric SO(10) GUTs with and without intermediate left-right symmetry in section V. Finally, we summarize our significant findings and the scope for future studies.
## II CDF W mass anomaly with SM plus isospin \(SU(2)_{L}\) triplet scalar
It has been pointed out that the recent CDF W boson mass shift from the SM prediction might arise due to the presence of a zero-hypercharge triplet Higgs boson \(\Omega_{1_{C},3_{L},0_{Y}}\)[4; 19]. The small shift of the W boson mass can be interpreted as a sub-dominant contribution arising from the small induced VEV of this scalar triplet. The relevant Lagrangian involving the SM Higgs \(H\) and the triplet scalar \(\Omega\) is given by
\[\mathcal{L}=\big{(}D_{\mu}H\big{)}^{\dagger}\big{(}D^{\mu}H\big{)}+ \mathrm{Tr}\left[(D_{\mu}\Omega)^{\dagger}(D^{\mu}\Omega)\right]-V(H,\Omega), \tag{1}\]
where the SM Higgs field is denoted by \(H^{T}=\big{(}\phi^{+},\phi^{0}\big{)}\) and the real scalar triplet \(\Omega\) in its matrix representation as,
\[\Omega=\frac{1}{2}\begin{pmatrix}\Omega^{0}&\sqrt{2}\Omega^{+}\\ \sqrt{2}\Omega^{-}&-\Omega^{0}\end{pmatrix}. \tag{2}\]
The covariant derivative involving the SM Higgs \(H\) is straightforward, while for the scalar triplet it is \(D_{\mu}\Omega=\partial_{\mu}\Omega-ig_{2L}[W_{\mu},\Omega]\), where \(\Omega=\frac{\tau^{a}}{2}\Omega^{a}\) and \(W_{\mu}=\frac{\tau^{a}}{2}W_{\mu}^{a}\), with \(\tau^{a},a=1,2,3\) the Pauli matrices. Here \(W_{\mu}\) (\(g_{2L}\)) is the corresponding gauge boson (gauge coupling) of the \(SU(2)_{L}\) gauge group. After spontaneous symmetry breaking, we get the known SM Higgs boson, one CP-even heavy Higgs boson, a CP-odd Higgs boson, and two charged Higgs bosons. The relevant scalar potential involving the SM Higgs boson \(H\) and the scalar triplet \(\Omega\) is presented below
\[V(H,\Omega) = -\mu_{H}^{2}H^{\dagger}H+\lambda_{H}\big{(}H^{\dagger}H\big{)}^{ 2}-M_{\Omega}^{2}\mathrm{Tr}(\Omega^{2}) \tag{3}\] \[+\lambda_{\Omega}\mathrm{Tr}(\Omega^{4})+\lambda_{\Omega}^{\prime }\big{(}\mathrm{Tr}\Omega^{2}\big{)}^{2}+\mu_{H\Omega}H^{\dagger}\Omega H\] \[+\alpha\big{(}H^{\dagger}H\big{)}\mathrm{Tr}(\Omega^{2})+\beta H ^{\dagger}\Omega^{2}H+h.c.\]
where \(\mu_{H\Omega}\) is considered to be real. The spontaneous symmetry breaking is achieved by assigning non-zero VEV to SM Higgs as well as to the scalar triplet. Thus, after spontaneous symmetry breaking, the modified Higgs fields are defined as
\[H=\begin{pmatrix}\phi^{+}\\ \frac{v+h+iA}{\sqrt{2}}\end{pmatrix}\] \[\Omega=\frac{1}{2}\begin{pmatrix}v_{\Omega}+\Omega&\sqrt{2} \Omega^{+}\\ \sqrt{2}\Omega^{-}&-v_{\Omega}-\Omega\end{pmatrix} \tag{4}\]
After simplifications, the resulting mass formulas for the CP-even, CP-odd, and charged Higgs bosons are,
\[M_{h}^{2}=2\lambda_{H}v^{2}\] \[M_{\omega}^{2}=2\rho v_{\Omega}^{2}+\frac{\mu_{H\Omega}v^{2}}{4v _{\Omega}}\] \[M_{\omega^{\pm}}^{2}=\mu_{H\Omega}v_{\Omega}\bigg{(}1+\frac{v^{2 }}{4v_{\Omega}^{2}}\bigg{)} \tag{5}\]
where \(\rho=\lambda_{\Omega}+\lambda_{\Omega}^{\prime}\). The physical fields \((h,\omega)\) are related to \((\phi^{0},\Omega^{0})\) by a rotation (mixing) matrix. The shift in the W boson mass can be explained by the extra contribution arising from the scalar triplet VEV
\[M_{W}^{2}=\left(M_{W}^{\mathrm{SM}}\right)^{2}+g_{2L}^{2}v_{\Omega}^{2}\]
, due to the presence of the interaction \(\Omega^{0}W_{\mu}^{-}W^{+\mu}\). Taking \(g_{2L}(\mu=M_{Z})=0.65171\), we estimate the value of \(v_{\Omega}\) to be about 5.4 GeV.
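As a quick arithmetic check (ours, using only the central values quoted above), inverting the shifted mass relation reproduces the stated VEV:

```python
import numpy as np

# Invert M_W^2 = (M_W^SM)^2 + g_2L^2 * v_Omega^2 with the central values
# quoted in the text (masses in GeV):
MW_CDF, MW_SM, g2L = 80.433, 80.357, 0.65171
v_Omega = np.sqrt(MW_CDF**2 - MW_SM**2) / g2L
print(f"v_Omega = {v_Omega:.2f} GeV")  # ~5.4 GeV, as stated above
```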
## III CDF W boson mass anomaly and non-SUSY \(SO(10)\) GUT
From the previous section, it is evident that the shift in the W boson mass can be explained if the SM is extended with a scalar triplet with zero hypercharge lying around the TeV scale (footnote 1). We consider a novel symmetry breaking of the non-supersymmetric \(SO(10)\) GUT [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], without
any intermediate symmetry, and add an \(SU(2)_{L}\) scalar triplet with zero hypercharge from the electroweak scale onwards to accommodate the W boson mass anomaly:
Footnote 1: The \(SU(2)_{L}\) symmetry is broken under the \(SU(2)_{L}\) symmetry.
\[SO(10) \stackrel{{ M_{U}}}{{\longrightarrow}}SU(3)_{C}\otimes SU (2)_{L}\otimes U(1)_{Y} \tag{6}\] \[\stackrel{{ M_{\Delta}}}{{\longrightarrow}}SU(3)_{C} \otimes U(1)_{Q}\;.\]
The \(SO(10)\) breaks down to the SM by assigning a non-zero VEV to the singlet direction of \(45_{H}\), while the subsequent symmetry breaking of the SM down to the low energy theory is achieved by the SM Higgs contained in \(10_{H}\). All the SM fermions per generation plus a right-handed neutrino are unified within the spinorial representation \(16_{F}\) of \(SO(10)\). Interestingly, the minimal addition of \(\Omega\) can also be found in \(45_{H}\). Thus, we need one complex \(10_{H}\), \(45_{H}\) and \(16^{a}_{F}\) for the explanation of the CDF W mass anomaly and light neutrino masses via the type-I seesaw mechanism in the presence of right-handed neutrinos [12; 13; 14; 15; 31]:
\[10_{H}\equiv H_{\Phi}\left(10\right)=\Phi(1,2,2;0)\oplus(3,1,1;-\frac{1}{3}) \oplus(\overline{3},1,1;\frac{1}{3})\,.\]
All the SM gauge bosons (\(G^{a}_{\mu}\), \(W^{i}_{\mu}\) and \(B_{\mu}\)) plus additional vector bosons are contained in the 45-dimensional representation of SO(10), out of which a few vector bosons can mediate proton decay. The SM fermions plus a right-handed neutrino are contained in the spinorial representation of \(SO(10)\) as,
\[16_{F} = Q_{L}(3,2,1/6)\oplus u_{R}(3,1,2/3)\oplus d_{R}(3,1,-1/3)\] (SM) \[\oplus\ell_{L}(1,2,-1/2)\oplus e_{R}(1,1,-1)\oplus N_{R}(1,1,0)\,.\]
The relevant \(SO(10)\) invariant Higgs potential, involving \(10_{H}\equiv\Phi_{H}=H\) and \(45_{H}\equiv A(45)\), is written as
\[V_{SO(10)} \supset\mu_{A}^{2}\,A_{ab}A_{ba}+\mu_{H}^{2}\,H_{a}H_{a}+\lambda_ {A}\,A^{2}A^{2}+\lambda_{A}^{\prime}A^{4}\] \[+\lambda_{H}\,H^{4}+H_{a}g_{HA}A_{ab}A_{bc}H_{c}+g^{\prime}_{HA}A^ {2}H^{2}.\]
The evolution of the gauge couplings for the \(SU(3)_{C}\), \(SU(2)_{L}\) and \(U(1)_{Y}\) gauge groups is displayed in Fig. 1. Here, the dashed (solid) lines correspond to the SM contribution (SM plus scalar triplet contributions). The purple line represents the inverse fine structure constant for the \(U(1)_{Y}\) group, whereas the blue and green lines indicate the \(SU(2)_{L}\) and \(SU(3)_{C}\) groups, respectively. The SM predictions with dashed lines clearly show that there is no gauge coupling unification. However, once the extra particles are added to the SM at the TeV scale, the evolution of the gauge couplings departs from the SM results and gives successful gauge coupling unification of the weak, electromagnetic, and strong forces.
The gauge coupling constants of the SM meet close to \(M_{U}\simeq 10^{14.4}\) GeV at the two-loop level, while the one-loop results lie close to this \(M_{U}\), as shown in Fig. 1 and Ref. [5]. One-loop RGE results including both a scalar and a fermion triplet at the few-TeV scale are given in [6].
The super heavy gauge bosons and scalars carrying fractional charges as well as nontrivial color quantum numbers can mediate various proton decay modes [31; 32; 33; 34; 35; 36]. The experimental bounds on the proton lifetime in various decay modes are displayed in Table 2. We wish to calculate the proton lifetime in the context of the \(SO(10)\) GUT breaking directly to the SM while accommodating the CDF W boson mass, and to compare the model predictions with the present bounds set by recent or planned experiments [37]. The present best limits on the proton lifetime were set by the Super-Kamiokande (SK) experiment, as presented in Table 2, in the decay modes \(p\to e^{+}\pi^{0}\) and \(p\rightarrow\mu^{+}\pi^{0}\). We omit here the discussion of the subdominant proton decay modes mediated by super heavy color triplet scalars. Thus, we focus on the contributions arising from the dimension-6 operators mediating proton decay through the exchange of lepto-quark gauge bosons, violating both baryon and lepton numbers simultaneously. The spontaneous symmetry breaking of \(SO(10)\) to the SM happens at the grand unification scale \(M_{\rm GUT}\equiv M_{U}\), which also sets the masses of these lepto-quark gauge bosons.
Following [41] and using the input model parameters,
\begin{table}
\begin{tabular}{l c} \hline \hline Mass Range & 1-loop level \\ \hline \(M_{Z}-M_{I}\) & \(b_{i}=(-7,-\frac{19}{6},\frac{41}{10})\) \\ \(M_{I}-M_{U}\) & \(b^{\prime}_{i}=(-7,-\frac{19}{6}+\frac{2}{3}*n_{\Omega},\frac{41}{10})\)[5] \\ \(M_{I}-M_{U}\) & \(b^{\prime}_{i}=(-7,-\frac{19}{6}+\frac{1}{4}*n_{\Omega}+\frac{2}{3}*n_{\Sigma}, \frac{41}{10})\)[6] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Explanation of the CDF W mass anomaly requires an extension of the SM with a TeV-scale scalar (color singlet, weak isospin triplet, zero hypercharge); listed are the corresponding one-loop beta coefficients used in the study of the RGEs of the gauge coupling constants. Here \(n_{\Omega}\) is 2 for a complex and 1 for a real scalar triplet.
Figure 1: Evolution of the gauge coupling constants of the SM gauge symmetry after adding a scalar triplet with zero hypercharge (as in Ref. [5]). The dashed lines are the contributions from the SM particle content, and the solid lines correspond to the RGEs with the extra particles.
the beta coefficients presented in Table 1, and solving the set of standard RGEs [42] for the gauge coupling constants, the estimated values of the unification mass scale and the inverse GUT coupling constant are as follows,
\[M_{U}=10^{14.40}\,\,\text{GeV}\,\,\,\text{and}\,\,\,\,\,\alpha_{G}^{-1}=39.0\,.\]
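The one-loop part of this estimate is easy to reproduce. The sketch below (ours) uses illustrative electroweak inputs at \(M_{Z}\) (\(\alpha_{\rm em}^{-1}\simeq 127.9\), \(\sin^{2}\theta_{W}\simeq 0.231\), \(\alpha_{s}\simeq 0.118\)) and the Table 1 coefficients with one real triplet (\(n_{\Omega}=1\)) applied from \(M_{Z}\) onwards; since it neglects the TeV threshold and two-loop effects, the pairwise meeting scales only bracket the quoted \(M_{U}\):

```python
import numpy as np

MZ = 91.19  # GeV
# Illustrative GUT-normalized inputs at MZ (alpha_em^-1 ~ 127.9,
# sin^2(theta_W) ~ 0.231, alpha_s ~ 0.118):
ainv = {"U1": 59.0, "SU2": 29.6, "SU3": 8.45}
# One-loop coefficients of Table 1 for SM + one real triplet (n_Omega = 1),
# applied from MZ (TeV threshold and two-loop effects neglected):
b = {"U1": 41 / 10, "SU2": -19 / 6 + 2 / 3, "SU3": -7.0}

def crossing(g1, g2):
    """Scale where alpha_g1 = alpha_g2 at one loop, using
    alpha_i^-1(mu) = alpha_i^-1(MZ) - b_i/(2 pi) * ln(mu/MZ)."""
    t = 2 * np.pi * (ainv[g1] - ainv[g2]) / (b[g1] - b[g2])
    return MZ * np.exp(t)

for pair in [("U1", "SU2"), ("SU2", "SU3"), ("U1", "SU3")]:
    print(pair, f"{crossing(*pair):.2e} GeV")
# Meeting scales ~1e14 to 6e14 GeV, bracketing the quoted 10^14.4 GeV.
```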
The predicted values of \(\tau_{p}\) for the above-mentioned unification mass scale and inverse GUT coupling constant, including a scalar and a fermion triplet at the TeV scale within the SO(10) GUT, are presented in Table 3. This prediction is significantly below the current bound provided by the Super-Kamiokande and Hyper-Kamiokande studies, as seen in Table 2.
We will see in the following section that allowing intermediate left-right symmetry between \(SO(10)\) and SM will enable us to overcome the problem of proton decay limitations.
## IV Impact of W-mass anomaly in SO(10) GUT with intermediate left-right symmetry
We now examine the impact of a scalar triplet \(\Omega(1_{C},3_{L},0_{Y})\), included at the electroweak scale for the explanation of the W boson mass anomaly, by embedding the framework in a non-SUSY \(SO(10)\) grand unified theory with intermediate left-right symmetry. We consider the breaking of the \(SO(10)\) GUT symmetry to the SM via an intermediate left-right symmetry [8; 9; 10; 11], either the Pati-Salam symmetry [20; 43] \(SU(4)_{C}\times SU(2)_{L}\times SU(2)_{R}\) or \(SU(3)_{C}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\). The important advantage of accommodating the CDF W mass anomaly in the \(SO(10)\) GUT, in comparison to the \(SU(5)\) GUT, is that \(SO(10)\) is a rank-five group while the SM is a rank-four gauge group; thus, one can allow intermediate symmetries between \(SO(10)\) and the SM, which can thereby satisfy the proton decay constraints along with offering the potential to address non-zero neutrino masses [31], the matter-antimatter asymmetry of the universe [44; 45; 46; 47; 48; 49], neutrinoless double beta decay, etc.
The \(SO(10)\) GUT can be spontaneously broken down to the left-right symmetry \(\mathbb{G}_{3221}\equiv SU(3)_{C}\otimes SU(2)_{L}\otimes SU(2)_{R}\otimes U(1)_{B-L}\) at the unification mass scale \(M_{U}\). One can use Higgs multiplets belonging to the 45 and 210 representations for the spontaneous breaking of the \(SO(10)\) GUT. The subsequent stage of symmetry breaking from \(\mathbb{G}_{3221}\rightarrow\mathbb{G}_{SM}\) occurs at an intermediate mass scale \(M_{I}\). The Higgs scalar \(\Delta_{R}\) (\(H_{R}\)), contained in the 126 (16) representation of the SO(10) GUT, can be used for the breaking of the LRSM to the SM, depending upon the exact phenomenology one wishes to explore. The interesting point to note here is that the inclusion of an intermediate symmetry breaking scale demands the existence of right-handed neutrinos and/or scalar triplets at \(M_{I}\), which can explain light neutrino masses via the well-known type-I (and/or type-II) seesaw mechanism and the matter-antimatter asymmetry of the universe via leptogenesis. One can impose the discrete left-right symmetry in addition to the gauge symmetry to reduce the number of model parameters required for fermion masses and mixing. The last stage of symmetry breaking is done with the known SM Higgs contained in \(10_{H}\). At this stage, we include a scalar triplet with zero hypercharge for the CDF II W mass anomaly, so that this field is present from the \(M_{Z}\) scale onwards. This last stage of symmetry breaking is equally important, simultaneously addressing the SM fermion masses and mixings along with the shift in the W boson mass.
The SM Higgs \(\Phi\) contained in \(10_{H}\) is required to break the SM down to the low energy theory and reproduce the fermion masses and mixing. The decomposition of the 10-dimensional Higgs field \(H_{\Phi}\) under the Pati-Salam symmetry and the left-right symmetry with \(B-L\) gauge symmetry is,
\[10_{H}\equiv H_{\Phi}\left(10\right)=\Phi(1,2,2;0)\oplus(3,1,1;-\frac{1}{3}) \oplus(\overline{3},1,1;\frac{1}{3})\,.\]
Under the left-right gauge symmetry, all the SM fermions plus a right-handed neutrino are contained in the spinorial representation of \(SO(10)\) as,
\[16_{F} = \ell_{L}(1,2,1,-1)\oplus\ell_{R}(1,1,2,1)\] \[\oplus Q_{L}(3,2,1,\frac{1}{3})\oplus Q_{R}(\overline{3},1,2,- \frac{1}{3})\,.\] (LRSM)
Similarly,
\[45_{H}\equiv A(45) = (1,1,1;0)\oplus\Omega_{L}(1,3,1;0)\oplus\Omega_{R}(1,1,3;0)\] \[\oplus(3,1,1;\frac{4}{3})\oplus(\overline{3},1,1;-\frac{4}{3}) \oplus(8,1,1;0)\] \[\oplus(3,2,2;\frac{2}{3})\oplus(\overline{3},2,2;-\frac{2}{3})\,.\]
\begin{table}
\begin{tabular}{||c|c||} \hline \hline Decay Modes & Expt. Bound (yrs) \\ \hline \hline \(p\rightarrow\pi^{0}e^{+}\) & \(>2.4\times 10^{34}\)[37] \\ \(p\rightarrow\pi^{0}\mu^{+}\) & \(>1.6\times 10^{34}\)[37] \\ \hline \(p\to K^{0}e^{+}\) & \(>1.0\times 10^{33}\)[38] \\ \(p\to K^{0}\mu^{+}\) & \(>3.6\times 10^{33}\)[39] \\ \hline \(p\rightarrow\pi^{+}\bar{\nu}\) & \(>3.9\times 10^{32}\)[40] \\ \(p\to K^{+}\bar{\nu}\) & \(>5.9\times 10^{33}\)[40] \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental limits on the proton lifetime for various decay channels from the Super-Kamiokande experiment [37; 38; 39; 40].
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Breaking chain & Observables & \multicolumn{2}{c|}{Model Predictions} \\ \hline \multirow{3}{*}{\(SO(10)\to SM\)} & \(\log_{10}\left(\frac{M_{U}}{\text{GeV}}\right)\) & 14.4 & 14.6 \\ \cline{2-4} & \(\alpha_{U}^{-1}\) & 39.0 & 40.0 \\ \cline{2-4} & \(\tau_{p}\) (yrs) & \(3.5\times 10^{30}\) & \(1.1\times 10^{32}\) \\ \hline \end{tabular}
\end{table}
Table 3: Prediction of proton decay lifetime \(\tau_{p}\) within \(SO(10)\) GUT.
The model predictions for the intermediate mass scale \(M_{I}\), the unification mass scale, and the inverse GUT scale coupling \(\alpha_{U}^{-1}\equiv\alpha_{\rm GUT}^{-1}\) also depend upon how the left-right symmetry is spontaneously broken down to the SM. We have considered different benchmark points corresponding to the breaking of the LRSM to the SM via a Higgs doublet \(H_{R}\) (BP1) or a Higgs triplet \(\Delta_{R}\) (BP2). These choices of scalars needed for the spontaneous symmetry breaking of the LRSM to the SM are presented below with their respective one-loop beta coefficients.
BP1: Minimal LRSM: \(\Phi(1,2,2,0)\), \(H_{R}(1,1,2,1)\)
\(\left(b_{3C},b_{2L},b_{Y}\right)=\left(\,-7,-19/6,41/10\right)\)
\(\left(b_{3C}^{\prime},b_{2L}^{\prime},b_{2R}^{\prime},b_{BL}^{\prime}\right)= \left(\,-7,-3,-17/6,17/4\right)\)
BP2: Manifest LRSM: \(\Phi(1,2,2,0)\), \(\Delta_{R}(1,1,2,-2)\)
\(\left(b_{3C},b_{2L},b_{Y}\right)=\left(\,-7,-19/6,41/10\right)\)
\(\left(b_{3C}^{\prime},b_{2L}^{\prime},b_{2R}^{\prime},b_{BL}^{\prime}\right)= \left(\,-7,-3,-7/3,11/2\right)\)
One can consider two bidoublet scalars instead of the one bidoublet scalar used in BP1 and BP2 for a correct fermion mass fit in the \(SO(10)\) GUT. We have only estimated the model parameters expressed in terms of the one-loop beta coefficients. Using these estimated values of the parameters, the values of the intermediate left-right symmetry breaking scale \(M_{I}\), the unification mass scale \(M_{U}\), and the inverse GUT coupling constant \(\alpha_{U}^{-1}\) are estimated and presented in the left panel of Fig. 2, showing successful unification of the gauge couplings. From left to right, the vertical dotted lines in Fig. 2 show the symmetry-breaking scales: \(M_{Z}\) as the electroweak scale, \(M_{I}\) as the intermediate left-right symmetry breaking scale, and \(M_{U}\) as the unification scale. One such estimate, corresponding to BP2, is given below,
\[M_{I}=10^{9}\ \text{GeV}\,,\quad M_{U}=4.80\times 10^{16}\ \text{GeV}\,,\quad\alpha_{U}^{-1}=46.20\]
We wish to examine how these model predictions are modified if we include a scalar triplet \(\Omega\) with zero hypercharge from the electroweak symmetry breaking scale onwards. We designate the new benchmark points obtained by adding \(\Omega\) from the SM symmetry breaking scale onwards as NBP1 and NBP2, and the corresponding modification is presented in the right panel of Fig. 2. Here, we have denoted \(B-L\) as \(BL\) for notational simplicity.
We consider one benchmark point with the unification mass scale \(M_{U}\) and inverse GUT coupling constant given by,
\[M_{U}=10^{15.65}\ \text{GeV}\quad\text{and}\quad\alpha_{G}^{-1}=43.5\]
The numerically estimated proton lifetime for this benchmark point is \(\tau_{p}=4.0\times 10^{34}\) yrs. In Table 4, we provide the proton decay lifetime \(\tau_{p}\) calculated numerically in the \(SO(10)\) GUT with intermediate left-right symmetry. The projected proton lifetimes are provided for BP1, NBP1, and NBP2, all consistent with the lower limits set by the Super-Kamiokande and Hyper-Kamiokande experiments.
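For orientation, the lepto-quark-mediated lifetime scales as \(\tau_{p}\sim\alpha_{U}^{-2}M_{U}^{4}/m_{p}^{5}\); the naive dimensional estimate below (ours, ignoring hadronic matrix elements and the running of the effective operators) lands within a factor of a few of the quoted value:

```python
# Naive dimensional estimate tau_p ~ alpha_U^-2 * M_U^4 / m_p^5 in natural
# units, ignoring hadronic matrix elements and operator running.
MU, alpha_inv, mp = 10**15.65, 43.5, 0.9383   # GeV, dimensionless, GeV
tau_GeV = alpha_inv**2 * MU**4 / mp**5        # lifetime in GeV^-1
hbar, yr = 6.582e-25, 3.156e7                 # GeV*s, s
print(f"tau_p ~ {tau_GeV * hbar / yr:.1e} yrs")
# ~2e34 yrs, within a factor of two of the 4.0e34 yrs quoted above.
```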
Figure 2: Unification plot showing successful gauge coupling unification satisfying proton decay constraints.
## V CDF W boson mass anomaly and supersymmetric \(SO(10)\) GUT
Supersymmetry (SUSY) models predict that a corresponding fermion exists for every boson and vice versa. Due to the presence of superpartners of SM particles lying around the electroweak scale, gauge couplings receive corrections from the exchange of superpartners in loop diagrams, leading to a modification of the running of the gauge couplings with energy. For supersymmetric models,
\[b_{i} = -3\mathcal{C}_{2}(G)+\sum_{R}T(R)\prod_{j\neq i}d_{j}(R) = -3\mathcal{C}_{2}(G)+2N_{G}+T(S)\,. \tag{7}\]
Here, \(\mathcal{C}_{2}(G)\) is the quadratic Casimir operator for the gauge bosons in the adjoint representation of a given gauge group, whose value is
\[\mathcal{C}_{2}(G)\equiv\begin{cases}N&\text{if $SU(N)$},\\ 0&\text{if $U(1)$}.\end{cases} \tag{8}\]
The other parameter \(T(R)\) is the trace of the irreducible representation \(R\) for a given fermion or scalar, with the analytic formula,
\[T(R)\equiv\begin{cases}1/2&\text{if $R$ is fundamental},\\ N&\text{if $R$ is adjoint},\\ 0&\text{if $U(1)$}.\end{cases} \tag{9}\]
Moreover, \(d(R)\) is the dimension of a given representation \(R\) under all \(SU(N)\) gauge groups, excluding the gauge group for which the one-loop beta coefficient is derived. The number of fermion generations is denoted by \(N_{G}\), and its value is 3. The explicit scalar contribution to the one-loop beta coefficient is defined by \(T(S)\) for a complex scalar \(S\). The value of \(T(S)\) is 0 for \(b_{3C}\), 1 for \(b_{2L}\) and 3/5 for \(b_{Y}\). Thus, the derived one-loop beta coefficients for the MSSM without the inclusion of the scalar triplet are \(b_{i}=(-3,1,33/5)\) with \(i=3C,2L,Y\). Using the standard RGEs, the unification mass scale in the minimal supersymmetric standard model is found to be at \(2\times 10^{16}\) GeV, as presented in Fig. 3. Motivated by the CDF II W-boson mass anomaly, the inclusion of a real scalar triplet within the MSSM spoils gauge coupling unification when embedded in the SUSY \(SO(10)\) GUT.
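As a mechanical check of Eq. (7), the quoted MSSM coefficients follow directly from the group-theory inputs listed above:

```python
from fractions import Fraction as F

def b_one_loop(C2G, NG=3, TS=F(0)):
    """Eq. (7) for SUSY models: b_i = -3*C2(G) + 2*N_G + T(S)."""
    return -3 * C2G + 2 * NG + TS

b3C = b_one_loop(C2G=3)              # SU(3)_C: C2 = 3, T(S) = 0
b2L = b_one_loop(C2G=2, TS=F(1))     # SU(2)_L: C2 = 2, two Higgs doublets
b1Y = b_one_loop(C2G=0, TS=F(3, 5))  # U(1)_Y (GUT-normalized): C2 = 0
print(b3C, b2L, b1Y)                 # -3 1 33/5, as quoted in the text
```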
We also present a gauge coupling unification plot in Fig. 4 for the SUSY \(SO(10)\) GUT with intermediate left-right symmetry, motivated by the explanation of neutrino masses, dark matter, and the matter-antimatter asymmetry of the present universe. One can introduce the intermediate left-right symmetry between the MSSM and the SUSY \(SO(10)\) GUT to address the successful unification of the gauge couplings and the proton decay constraints [50; 51; 52; 53; 54; 55; 56; 57; 58]. With intermediate left-right symmetry, the framework allows \(SU(2)_{R}\times U(1)_{B-L}\) breaking around the few-TeV scale by a non-zero VEV of \(\overline{16}_{H}\), resulting in TeV-scale masses for the RH gauge bosons \(W_{R}\), \(Z_{R}\), which can be directly produced at the LHC [59; 50; 54].
## VI Summary
We have studied the non-supersymmetric \(SO(10)\) GUT with and without any intermediate symmetry
\begin{table}
\begin{tabular}{||c|c|c|c||} \hline \hline Benchmark Points & \(M_{U}\) (GeV) & \(\alpha_{G}^{-1}\) & \(\tau_{p}\) (yrs) \\ \hline \hline BP1 & \(5.49\times 10^{16}\) & 46.34 & \(8.6\times 10^{38}\) \\ \hline NBP1 & \(4.71\times 10^{15}\) & 43.60 & \(\mathbf{4.77\times 10^{34}}\) \\ \hline NBP2 & \(4.40\times 10^{15}\) & 43.52 & \(\mathbf{3.5\times 10^{34}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Numerical estimation of the proton decay lifetime \(\tau_{p}\) in the \(SO(10)\) GUT with intermediate left-right symmetry. The estimated proton lifetimes are presented for various benchmark points, in agreement with the limits set by the Super-Kamiokande and Hyper-Kamiokande experiments.
Figure 3: Unification plot for MSSM with unification mass scale \(M_{U}=10^{16.4}\) GeV, \(M_{\text{SUSY}}=500\) GeV, \(M_{Z}=91.187\) GeV and \(\alpha_{U}^{-1}\simeq 23\) for SUSY \(SO(10)\) GUT without having any scalar triplet and with no intermediate symmetry.
breaking step and its connection to the CDF II W boson mass anomaly by including a scalar triplet with zero hypercharge at the electroweak scale. The \(SU(2)_{L}\) scalar triplet is contained in the Higgs representation \(45_{H}\) of \(SO(10)\), initially introduced to break \(SO(10)\) to the SM without any intermediate stage of symmetry breaking. The other representations of \(SO(10)\) needed for the known quark and lepton masses are \(10_{H}\) and \(16^{a}_{F}\) (a=1,2,3), while the gauge bosons lie in the adjoint representation \(45_{V}\).
The minimal representations (\(45_{H}\), \(16_{F}\), and \(10_{H}\)) of the \(SO(10)\) GUT not only explain the reported shift in the W boson mass but can also account for the known SM fermion masses and mixings. Including the extra \(SU(2)_{L}\) scalar triplet contained in \(45_{H}\) from the electroweak scale onwards helps in achieving the unification of the gauge couplings corresponding to the fundamental forces of the SM. However, the predicted unification mass scale, \(M_{U}\simeq 10^{14.4}\) GeV, and the GUT scale inverse fine structure constant, \(\alpha_{U}^{-1}\simeq 39.0\), yield a proton lifetime \(\tau_{p}\sim 10^{30}\) yrs, which is well below the current experimental bound from the Super-Kamiokande experiments. The issue of the proton decay constraints can be avoided if we allow an intermediate left-right symmetry between \(SO(10)\) and the SM. With the \(SO(10)\) GUT being a rank-five group and the SM a rank-four group, one can accommodate intermediate symmetry-breaking steps addressing the proton decay constraints, neutrino mass, and other observables, which is otherwise not possible in the case of the rank-four \(SU(5)\) GUT.
With the intermediate left-right symmetry, the estimated value of the unification mass scale is \(M_{U}\simeq 5\times 10^{15}\) GeV, with a proton lifetime \(\tau_{p}\simeq 5.0\times 10^{34}\) yrs, consistent with the experimental bound set by the Super-Kamiokande experiment. An additional feature of the left-right symmetry at an intermediate scale is to accommodate right-handed neutrinos and/or scalar triplets with non-zero \(B-L\) charge, which can explain the non-zero neutrino mass via the type-I/type-II seesaw mechanism and address the matter-antimatter asymmetry of the universe via the decay of right-handed neutrinos or scalar triplets [61; 62; 63; 64]. Additionally, two-loop contributions, GUT threshold corrections, and gravitational corrections can modify the unification mass scale and the proton lifetime prediction, along with other phenomenology like the neutrino mass and the matter-antimatter asymmetry of the universe, which will be studied separately.
In SUSY models, single-stage unification is possible even without including any triplet Higgs scalar. However, proton decay can then be mediated by dimension-4 operators, making the decay rates too fast. This is solved by imposing R-parity, which has many virtues, although the non-observation of SUSY particles smears out the other attractive features of SUSY models.
###### Acknowledgements.
Purushottam Sahu would like to acknowledge the Institute Postdoctoral Fellowship of IIT Bombay for financial support. PS also acknowledges the support from the Abdus Salam International Centre for Theoretical Physics (ICTP) under the 'ICTP Sandwich Training Educational Programme (STEP)' SMR.3676 and SMR.3799.
|
2304.02470 | Mechanisms and safety of air plasma inactivated SARS-CoV-2 | Cold atmospheric plasma (CAP) displays antimicrobial, antitumor, and
antiviral properties, while the underlying mechanism is seldom clearly
elucidated. In this work, we employed CAP with air-feeding gas to directly
inactivate SARS-CoV-2. The results indicate that the typical SARS-CoV-2
morphological spikes disappeared after plasma treatment and the proteins of
SARS-CoV-2 were modified. In addition, we also evaluated the safety of the air
plasma device in simulating daily life environments through rat experiments. We
evaluated rats' daily physiological behavior, body weight, food consumption,
organ histopathology, blood biochemical indicators, and so on. These results
demonstrate that the air plasma device is a safe and effective means to prevent virus
transmission and infection. | Guiqiang Wang, Dehua Kong, Wei Tang, Jie Fang, Zhitong Chen
###### Abstract
Cold atmospheric plasma (CAP) displays antimicrobial, antitumor, and antiviral properties, while the underlying mechanism is seldom clearly elucidated. In this work, we employed CAP with air-feeding gas to directly inactivate SARS-CoV-2. The results indicate that the typical SARS-CoV-2 morphological spikes disappeared after plasma treatment and that the proteins of SARS-CoV-2 were modified. In addition, we also evaluated the safety of the air plasma device in simulated daily life environments through rat experiments. We evaluated the rats' daily physiological behavior, body weight, food consumption, organ histopathology, blood biochemical indicators, and so on. These results demonstrate that the air plasma device is a safe and effective means to prevent virus transmission and infection.
Coronavirus disease 2019 (COVID-19), an emerging infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has caused a serious global pandemic [1]. COVID-19 has spread to all continents with multiple epicenters and has globally caused around 7 million deaths. Although certain physical treatments have been shown to assist patients in fighting COVID-19 with their own immune systems, no proven remedies exist so far, leading to high mortality rates, especially in senior groups. COVID-19 transmission between people occurs through bioaerosol inhalation or self-inoculation of the mouth and eyes. These two transmission routes are facilitated in many ways: droplets, direct contact, fomites, aerosols, blood-borne, fecal-oral, and mother-to-child [2]. Recent progress in plasma research has led to the creation of atmospheric-pressure, room-temperature plasmas, that is, cold atmospheric plasma (CAP). CAP has been used for a wide range of biomedical applications [3, 4, 5]. The efficacy of CAP is due to its components, which exhibit favorable behavior for biomedical applications, including electrons, charged particles, reactive oxygen species (ROS), reactive nitrogen species (RNS), free radicals, ultraviolet (UV) photons, molecules, electromagnetic fields, physical forces, and electric fields [6, 7, 8, 9, 10, 11].
Chen et al. employed CAP to inactivate SARS-CoV-2 on various surfaces in the UCLA P3 Lab, including plastic, metal, cardboard, basketball composite leather, football leather, and baseball leather [12]. Their results demonstrate the great potential of CAP as a safe and effective means to prevent virus transmission and infection. Ibanez-Cervantes et al. examined the disinfection capacity of H\({}_{2}\)O\({}_{2}\) plasma against SARS-CoV-2 and bacteria (Acinetobacter baumannii and Staphylococcus aureus) on N95 masks, and they pointed out that H\({}_{2}\)O\({}_{2}\) plasma is an efficient way to disinfect N95 masks [13]. In addition, an 80-day clinical trial took place in a hospital to evaluate whether CAP lowers the viral load in COVID-19 rooms, and the results indicated that CAP could decrease coronavirus spread in hospitals and prevent virus transmission [14]. It is known that the disruption of the interaction between the receptor-binding domain (RBD) and human angiotensin-converting enzyme 2 (hACE2) can prevent coronavirus infection. Scientists have utilized plasma or plasma-activated media to induce spike protein or RNA damage to demonstrate that plasma works for the inactivation of SARS-CoV-2 [15, 16]. Attri et al. employed molecular dynamics (MD) simulations to elucidate that the C-terminal domain of the SARS-CoV-2 spike protein structure becomes unstable after plasma oxidation, and that the binding free energy decreases with plasma-induced oxidation [17]. However, there has been no direct evidence showing the protein changes of SARS-CoV-2 after plasma treatment. In this paper, we develop a CAP device with air-feeding gas that can be used in homes, offices, hospitals, etc. We utilized transmission electron microscopy (TEM) to characterize the proteins of SARS-CoV-2 after plasma treatment. We also evaluated the safety of the CAP device in daily life environments through rat experiments.
Figure 1: (a) The schematic diagram of the air plasma device, with a plasma discharge picture and the experimental setup for SARS-CoV-2 inactivation. (b) I-V curves of the plasma discharge (peak-to-peak discharge voltage: 5.88 kV). (c) The optical emission spectrometry of the plasma discharge.
Fig. 1a shows the air plasma device, including the power plug, control center, air inlet, fan, plasma generator, and so on. The inset picture shows the plasma generator discharging. For the SARS-CoV-2 inactivation experiments, plates containing SARS-CoV-2 were placed under the plasma generator at a distance of 20 cm. The plasma generator comprises comb-shaped electrodes, with metal coatings of 30 \(\mu\)m thickness deposited on the comb-shaped electrode structures. The distance between adjacent electrodes is about 3 mm. Fig. 1b shows the I-V curves of the air plasma device in the dielectric barrier corona discharge mode. From Fig. 1b, the peak-to-peak discharge voltage is 5.88 kV and the discharge current is at the microampere level.
Fig. 1c shows the optical emission spectrometry (OES) of the plasma discharge, detected by a high-sensitivity PMT monochromator with 0.2 nm fine spectral resolution. Atomic levels are broadened and shifted because of the Stark effect, caused by the electric micro-fields formed by ions and electrons. Here, considering the Stark broadening theory for the Balmer series of hydrogen atoms, we employed the Inglis-
Figure 2: (a) and (b) TEM pictures of SARS-CoV-2 before plasma treatment, showing the spikes and the bright/dark virions, respectively. (c) and (d) TEM pictures of SARS-CoV-2 after a 30-minute plasma treatment, demonstrating the disappearance of the spikes and the denaturation of the coronavirus proteins, respectively.
Teller equation to calculate the plasma equivalent electron density. The Inglis-Teller equation, derived by David R. Inglis and Edward Teller, gives an approximate relationship between the plasma density and the principal quantum number of the highest resolved bound state of the atom. The plasma equivalent electron temperature was calculated by the Boltzmann plot method. The plasma electron temperature and density were 0.31 eV and \(3.25\times 10^{17}\) m\({}^{-3}\), respectively. This indicates that our plasma generator has great potential for highly efficient sterilization and disinfection.
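For completeness, both diagnostics can be sketched as follows (ours; the Boltzmann-plot routine expects measured line data that are not tabulated in this paper, and the Inglis-Teller relation is quoted in one common form for hydrogen, so the numbers are illustrative only):

```python
import numpy as np

def boltzmann_plot_temperature(I, lam_nm, gA, E_up_eV):
    """Excitation/electron temperature (eV) from the slope of the Boltzmann
    plot ln(I*lambda/gA) vs upper-level energy; inputs are measured lines."""
    slope, _ = np.polyfit(E_up_eV, np.log(I * lam_nm / gA), 1)
    return -1.0 / slope

def inglis_teller_density(n_max):
    """One common form of the Inglis-Teller relation for hydrogen:
    log10(n_e[cm^-3]) = 23.26 - 7.5*log10(n_max), with n_max the principal
    quantum number of the last resolved Balmer line; returns n_e in m^-3."""
    return 10 ** (23.26 - 7.5 * np.log10(n_max)) * 1e6

# A last resolved line near n_max ~ 37 would reproduce the density quoted in
# the text (back-of-the-envelope check, illustrative only):
print(f"{inglis_teller_density(37):.2e} m^-3")  # ~3.2e17
```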
Fig. 2a displays a TEM picture of SARS-CoV-2 with spikes; the average diameter of SARS-CoV-2 is approximately 100 nm. SARS-CoV-2 was scattered in the culture medium. As shown in Fig. 2b, there are coronavirus clusters of mixed shapes, for example, spherical and hook-shaped, exhibiting light and dark virions. Fig. 2c shows a TEM picture of SARS-CoV-2 after plasma treatment. The typical spikes of SARS-CoV-2 have disappeared, and the edge proteins of the SARS-CoV-2 body are clear. In Fig. 2d, the proteins of SARS-CoV-2 were denatured/modified, and there are no distinct bright and dark regions of SARS-CoV-2. The evidence for the modification of the coronavirus proteins is that the reflected spectrum in the electron microscope photo has changed. All the proteins are in a denatured state, much like an egg white that has been "cooked". At the same time, it is even difficult to distinguish the protein body of the coronavirus from the background in some areas. Multiple coronaviruses condensed together after being denatured by the plasma treatment. Therefore, the coronavirus proteins have undergone irreversible modification after plasma treatment, and the virus is inactivated by the plasma.
Table 1 shows the results of the biochemical blood indicators for Group 1, Group 2, Group 3, and the control group. Group 1, Group 2, and Group 3 breathed plasma for 2 weeks, 3 weeks, and 4 weeks, respectively. It can be seen from Table 1 that the values of the biochemical blood indices are mostly lower than those of the control group. The CR and HDL-C of Group 1 are lower than in the control group, and the CR of Group 2 shows similar behavior. TP/AST/ALT in Group 1 and CR/HDL-C/AST in Group 3 are lower than those in the control group. Comparing Group 3 and the control group, it can be seen that the levels of serum creatinine (CR), high-density lipoprotein
\begin{table}
\begin{tabular}{l l l l l} \hline Parameters & Group1 & Group2 & Group3 & Control \\ \hline CR(\(\mu\)mol/L) & 49.88\(\pm\)4.53\({}^{\#}\) & 51.72\(\pm\)3.76\({}^{\#}\) & 57.13\(\pm\)1.96\({}^{*}\) & 73.9\(\pm\)8.46 \\ GLU(mmol/L) & 5.76\(\pm\)0.56 & 6.97\(\pm\)1.22 & 7.95\(\pm\)1.84 & 8.19\(\pm\)2.29 \\ BUN(mmol/L) & 5.61\(\pm\)0.81 & 5.93\(\pm\)1.19 & 7.75\(\pm\)1.33 & 7.19\(\pm\)1.24 \\ TP(g/L) & 52.4\(\pm\)2.84\({}^{*}\) & 54.72\(\pm\)2.27 & 55.35\(\pm\)0.57 & 57.56\(\pm\)3.23 \\ UA(\(\mu\)mol/L) & 243.4\(\pm\)42.67 & 270.92\(\pm\)29.53 & 269.43\(\pm\)23.27 & 260.43\(\pm\)15 \\ HDL-C(mmol/L) & 0.58\(\pm\)0.1\({}^{\#}\) & 0.78\(\pm\)0.14 & 0.72\(\pm\)0.07\({}^{*}\) & 0.83\(\pm\)0.05 \\ LDL-C(mmol/L) & 0.27\(\pm\)0.08 & 0.43\(\pm\)0.11 & 0.32\(\pm\)0.03 & 0.33\(\pm\)0.05 \\ AST(U/L) & 142.96\(\pm\)27.69\({}^{*}\) & 167.54\(\pm\)46.37 & 149.75\(\pm\)14.26\({}^{*}\) & 212.87\(\pm\)34.58 \\ ALT(U/L) & 25.62\(\pm\)7.26\({}^{*}\) & 44.26\(\pm\)14.42 & 40.85\(\pm\)2.42 & 53.33\(\pm\)12.02 \\ ALB(g/L) & 24.82\(\pm\)1.25 & 25\(\pm\)1.24 & 24.23\(\pm\)0.29 & 25.16\(\pm\)1.92 \\ \hline \end{tabular}
\end{table}
Table 1: Results of biochemical blood indicators for Group 1 (2 weeks), Group 2 (3 weeks), Group 3 (4 weeks), and the control group. * p\(<\)0.05, # p\(<\)0.01, n=5
cholesterol (HDL-C), and aspartate aminotransferase (AST) in rats were significantly down-regulated after breathing plasma, while the other indicators did not change significantly. Each group has 2 male rats and 3 female rats, and gender differences may cause differences in individual indicators. From the above results, it can be inferred that breathing a proper amount of plasma is not harmful to the body.
The food consumption of the rats is shown in Fig. 3a, including control males, control females, Group 3 males, and Group 3 females. The test process followed the SPF-level barrier system laboratory with license SYXK (Zhe) 2019-0011. Rats were fed Co60-sterilized nutrient compound feed and water and were illuminated at intervals of 12 hours. From Fig. 3a, the control group and Group 3 show a similar increasing trend in both the male and female groups. Breathing plasma for a long time and enduring plasma irradiation for a long time have almost no effect on the rats' food consumption. From Fig. 3b, it can be seen that the overall increasing trends of body weight for the rats are consistent in both males and females. The rate of body weight increase of the females is the same for both the control group and Group 3, while the body weight of the control males increases at a slightly higher rate than Group 3. Overall, the plasma-generated ROS, RNS, and other species have almost no effect on the rats in terms of either food consumption or body weight.
After plasma treatment for 4 weeks, the rats' mental and behavioral performance was normal. There was no diarrhea, hair loss, or death. Fig. 4a shows no changes on the back skin of the rats, such as skin dryness, aging, or telangiectasia, after 4 weeks of plasma treatment. Fig. 4b shows HE stains of a male rat's testis and a female rat's ovary before and after plasma treatment. Breathing plasma for a long time and enduring plasma irradiation for a long time have no pathological effect on the male rat's testis or the female rat's ovary. In addition, the HE stains of the heart, lung, stomach, liver, and brain of the rats also show no pathological changes (Fig. 4c).
Figure 3: (a) Food consumption of rats, including control male, control female, group 3 male, and group 3 female. (b) Body weight of rats, including control male, control female, group 3 male, and group 3 female.
CAP has received considerable attention for its potential biomedical applications. Emerging fields of application of CAP include wound healing, sterilization of infected tissue, inactivation of microorganisms, tooth bleaching, blood coagulation, skin regeneration, and cancer therapy [18, 19, 20, 21, 22, 23]. Plasma contains energetic ions, free radicals, reactive species, UV radiation, and the transient electric fields inherent in plasma delivery, which interact with cells and other living organisms [24, 25, 26, 27, 28]. From Fig. 1c, it can be argued that UV photons are not the major plasma species with our plasma setup. The major plasma-generated reactive species include superoxide (O\({}_{2}^{-}\)), nitric oxide (NO), atomic oxygen (O), ozone (O\({}_{3}\)), hydroxyl radical (\(\bullet\)OH), singlet delta oxygen (O\({}_{2}\)(\({}^{1}\Delta g\))), peroxynitrite (ONOO\({}^{-}\)), hydrogen peroxide (H\({}_{2}\)O\({}_{2}\)), nitrite (NO\({}_{2}^{-}\)), etc. [29, 30, 31, 32, 33]. SARS-CoV-2 infection relies on recognition of and binding to the cellular receptor hACE2 through the RBD of the spike protein, and disruption of this process can effectively inhibit SARS-CoV-2 invasion [34]. Guo et al. applied plasma-activated water to inhibit pseudovirus infection through S protein inactivation [35]. Qin et al. identified that plasma-generated reactive species inactivate the SARS-CoV-2 spike protein receptor-binding domain (RBD), which is responsible for the recognition of and binding to human angiotensin-converting enzyme 2 (hACE2) [36]. Our results indicate that the typical spikes of SARS-CoV-2 disappeared and the edges of the protein body of SARS-CoV-2 were clear after plasma treatment (Fig. 2). The coronavirus proteins, not only the spike protein, have undergone irreversible modification after plasma treatment.
Figure 4: (a) The skin of a rat before and after plasma treatment. (b) HE stains of a male rat's testis and a female rat's ovary before and after plasma treatment. (c) HE stains of a rat's heart, lung, stomach, liver, and brain before and after plasma treatment. The image of the brain HE stain is 40\(\times\), while the other HE stains are 100\(\times\).
A natural question is whether the plasma-generated reactive species cause a runny nose, itchy eyes, or a scratchy throat. Our observations indicate that rats undergoing plasma treatment and breathing large amounts of reactive species showed no runny nose, itchy eyes, or scratchy throat. In addition, their mental and behavioral performance was normal, and the rats had no diarrhea, hair loss, or death. The plasma does not affect the rats' daily physiological behavior, body weight, food consumption, or organ histopathology (Fig. 3). When rats breathe in, reactive species move through the nose or mouth, down the throat into the trachea, and then into the lungs [37]. The pathological sections of the bronchi did not show tightening of the muscles around the bronchi or swelling of the bronchial lining. Reactive species enter the lungs easily and pass directly from the alveoli into the bloodstream. The biochemical blood indicators showed almost no change after breathing reactive species for a long time (Table 1). Direct plasma treatment of blood induces blood coagulation, whereas breathing plasma induced almost no changes in the biochemical blood indicators [38, 39]. Breathing plasma for a long time and enduring plasma irradiation for a long time have no pathological effect on the testis, ovary, heart, lung, stomach, liver, or brain (Fig. 4). The plasma did not cause changes in the system that controls how the heart beats, and it did not induce plaque to break off the walls of the blood vessels and block blood flow. The plasma caused no changes on the back skin of the rats, such as skin dryness, aging, or telangiectasia (Fig. 4a). In addition, the plasma did not cause inflammation throughout the body, either. From the HE stains of the brain, the plasma did not decrease blood flow, loosen plaque, or trigger a blood clot. Headaches, anxiety, and effects on the central nervous system were not found. Overall, this air plasma device is safe to use in a daily environment.
In conclusion, the TEM results show that the typical spikes of the treated coronaviruses disappeared and the edge proteins of the SARS-CoV-2 body were clear after plasma treatment. The coronavirus proteins, not only the spike proteins, undergo irreversible denaturation by plasma treatment. Breathing reactive species generated by plasma for a long time and enduring plasma irradiation for a long time do not affect the rats' daily physiological behavior, body weight, food consumption, or organ histopathology. The levels of HDL-C and AST were lowered, while the other indicators showed no significant changes. In all, the CAP device with air-feeding gas that we developed can be safely used in homes, offices, hospitals, etc.
This work was supported by the grants from the National Key R&D Program of China (2022YFE0126000, to Z.C.) and the Guangdong Basic and Applied Basic Research Foundation (2022A1515011129, to Z.C.).
|
2309.12847 | Numerical Simulation of Radiative Transfer of Electromagnetic Angular
Momentum | We present numerical simulations of light emitted by a source and scattered
by surrounding electric dipoles with Zeeman splitting. We calculate the leakage
of electromagnetic angular momentum to infinity. | B. A. van Tiggelen, R. Le Fournis | 2023-09-22T13:17:54Z | http://arxiv.org/abs/2309.12847v1 | # Numerical Simulation of Radiative Transfer of Electromagnetic Angular Momentum
###### Abstract
We present numerical simulations of light emitted by a source and scattered by surrounding electric dipoles with Zeeman splitting. We calculate the leakage of electromagnetic angular momentum to infinity.
## I Introduction
Optical sources radiate electromagnetic energy at a rate that depends on the local density of radiative states (LDOS) near the source and at the emitted frequency of the source [1]. This statement is true classically and recalls Fermi's Golden Rule in quantum mechanics [2]. The LDOS is affected by the environment, which can be dielectric, structured, gapped in frequency, or disordered. If the environment is magneto-active, induced by the presence of an externally applied magnetic field \(\mathbf{B}_{0}\), the source can also radiate angular momentum (AM) along the direction of \(\mathbf{B}_{0}\) into space.
The first study [3] of this phenomenon used the phenomenological concept of radiative transfer, and in particular the role of the radiative boundary layer of an optically thick medium, to argue that the Poynting vector has two components in the far field. The first is the usual energy flux, purely radial, which decays with distance \(r\) from the object as \(1/r^{2}\). The second is magneto-transverse and circulates energy around the object (see Figure 1). This component decays faster, as \(1/r^{3}\), but carries a finite angular momentum, constant with distance, that travels away from the object. Our second study [4] demonstrated that this leakage is not restricted to multiple scattering and also exists when a homogeneous magneto-birefringent environment surrounds the source. In this case 1) the radiation of AM results in a torque on the source and not on the environment; 2) it depends sensitively on the nature of the source, with huge differences between e.g. an electric dipole source and a magnetic dipole source, and 3) geometric "Mie" resonances can enhance the effect much like the Purcell effect does in nano-antennas [5].
Our latest study [6] considered numerically a spherical environment filled with small resonant electric dipole scatterers. When the optical thickness increases at fixed frequency, the total leakage rate of AM was seen to increase. Upon varying the optical thickness we investigated separately the role of photonic spin and orbital momentum, the two essential constituents of electromagnetic AM [7], and found both to co-exist. The torque on the source was also seen to increase with the dipole density but hardly with the optical thickness. In this work we study the frequency dependence of the dipole scatterers. Especially for large detuning from their resonance, the scattering from one dipole becomes weak, so that to keep a reasonable optical thickness we require more dipoles, typically thousands, which takes more CPU time and memory.
## II Leakage of angular momentum
For a monochromatic electric dipole source with electric dipole moment \(\mathbf{d}\) at frequency \(\omega=kc_{0}\), positioned at \(\mathbf{r}=0\), the radiated electric field at position \(\mathbf{r}\) is given by
\[\mathbf{E}(\mathbf{r},\omega)=-4\pi k^{2}\mathbf{G}(\mathbf{r},0,\omega+i0) \cdot\mathbf{d}(\omega) \tag{1}\]
with \(G_{kk^{\prime}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) the vector Green's function associated with the Helmholtz equation for the electric field. This Green's function contains full information about the environment. The slightly positive imaginary part of the frequency \(\omega+i0\) guarantees outward propagation of the light. The power \(P\) (radiated energy per second) radiated by the electric dipole is equal to its dissipation rate \(\mathrm{Re}(\mathbf{J}^{*}\cdot\mathbf{E})/2\)[8]. Since \(\mathbf{J}=-i\omega\mathbf{d}\) we find, after averaging over the orientation of the dipole source,
\[P=-\frac{2\pi}{3}k^{3}c_{0}|\mathbf{d}|^{2}\mathrm{Im}\,\mathrm{Tr}\, \mathbf{G}(0,0) \tag{2}\]
Figure 1: The geometry considered in this work. A source emits light into a disordered environment containing \(N\) electric dipole scatterers. In the presence of a magnetic field \(\mathbf{B}_{0}\) a magneto-transverse component \(\mathbf{S}_{\phi}\) of the Poynting vector appears outside the medium that carries electromagnetic angular momentum. This angular momentum propagates to infinity and a torque is exerted on scatterers and source.
We recognize \(\rho(k)\sim-{\rm Im}\,{\rm Tr}\,{\bf G}(0,0)\) as the LDOS at the source position. The balance equation for the angular momentum can be written as [4],
\[\frac{d}{dt}J_{i,{\rm mec}} = M_{i} = \frac{R^{3}}{8\pi}\epsilon_{ijk}{\rm Re}\int_{4\pi}d\hat{\bf r}\,\hat{r}_{l}\hat{r}_{j}(E_{l}^{*}E_{k}+B_{l}^{*}B_{k})(R\hat{\bf r}) \tag{3}\]
with \({\bf J}_{\rm mec}\) the mechanical AM of the matter and with implicit summation over repeated indices. This formula expresses that the torque \({\bf M}\) exerted on the matter is radiated away as AM to infinity (\(r>R\)), thereby assuming a source that has been constant during a time longer than \(R/c_{0}\). In this picture the radiative AM inside the environment enclosed by the sphere of radius \(R\) is constant in time, and AM leaks to infinity somewhere at \(r(t)\sim c_{0}t>R\). The cycle-averaged torque acts on both the source and its environment, \({\bf M}={\bf M}_{S}+{\bf M}_{E}\). The latter is
\[{\bf M}_{E}=\frac{1}{2}{\rm Re}\,\int d^{3}{\bf r}\,\left[{\bf P}^{*}\times{ \bf E}+P_{m}^{*}({\bf r}\times\nabla)E_{m}\right] \tag{4}\]
This torque vanishes for a rotationally-invariant environment around the source, but not when this symmetry is broken by structural heterogeneity, as will be discussed here. The torque on a source with an electric dipole moment \({\bf d}(\omega)\) is given by [8],
\[{\bf M}_{S}=\frac{1}{2}{\rm Re}\,({\bf d}^{*}\times{\bf E}) \tag{5}\]
With the electric field given by Eq. (1), an expression similar to Eq. (2) can be obtained,
\[M_{S,i}=-\frac{2\pi}{3}k^{2}|{\bf d}|^{2}{\rm Re}\,\epsilon_{ijk}G_{kj}(0,0) \tag{6}\]
This torque vanishes for an (on-average) spherical environment with isotropic optical response [9].
Alternatively, the leak of AM given by the right-hand side of Eq. (3) can be split up into parts associated with photonic spin and orbital momentum [7]
\[{\bf M}=\frac{R^{2}}{8\pi k}{\rm Im}\int_{r=R}d^{2}\hat{\bf r}\,\left[{\bf E} ^{*}\times{\bf E}+E_{m}^{*}({\bf r}\times\nabla)E_{m}\right] \tag{7}\]
The existence of orbital AM, expressed by the second term, is especially interesting, since polarized radiation by a source subject to a magnetic field may be more intuitive to accept in view of the Faraday effect.
## III Environment of \(N\) dipoles with Zeeman Shift
The Helmholtz Green function for light scattering from \(N\) electric dipoles can be found in the literature [10]. Because of the pointlike nature of the dipoles it reduces to a \(3N\times 3N\) complex-symmetric non-hermitian matrix. The magnetic response of the dipoles - due to the Zeeman splitting of their internal resonance - can be extracted in linear order, so that the AM linear in the external field can be calculated numerically given the \(N\) positions of the dipoles. The polarizability of a single dipole is given by,
\[\alpha(\omega,{\bf B}_{0})=\alpha(0)\frac{\omega_{0}^{2}}{\omega_{0}^{2}-( \omega+\omega_{c}i\epsilon\cdot\hat{\bf B}_{0})^{2}-i\gamma\omega} \tag{8}\]
in terms of the radiative damping rate \(\gamma\), the resonant frequency \(\omega_{0}\) and the cyclotron frequency \(\omega_{c}=eB_{0}/2mc_{0}\). The second rank operator \(i\epsilon\cdot\hat{\bf B}_{0}\) has the 3 eigenvalues \(0,\pm 1\), corresponding to 3 Zeeman levels that make the environment linearly magneto-birefringent. The external magnetic field \({\bf B}_{0}\) is assumed homogeneous across the environment, but this can easily be altered in future work, e.g. to describe an environment surrounding a magnetic dipole. The detuning parameter is defined as \(\delta\equiv(\omega-\omega_{0})/\gamma\). It is also useful to introduce the dimensionless material parameter \(\mu=(12\pi/\alpha(0)k^{3})\times\omega_{c}/\omega_{0}\), which quantifies the magnetic birefringence induced by one dipole and is typically small (for the \(1S\)-\(2P\) transition in atomic hydrogen one can estimate \(\mu\sim 7\cdot 10^{-5}\)/Gauss). The mean free path \(\ell\) is another important length scale and follows from \(1/\ell=nk{\rm Im}\,\alpha(\omega)\) if we neglect recurrent scattering, with \(n=3N/4\pi a^{3}\) the density of dipoles. Without magnetic field the polarizability can be written as \(\alpha=-(3\pi/k^{3})/(\delta+i/2)\).
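To make the structure of this computation concrete, the following minimal Python sketch (all parameter values and unit conventions here are illustrative assumptions, not our production settings) builds the Zeeman-split polarizability tensor of Eq. (8), assembles the \(3N\times 3N\) coupled-dipole matrix, and solves for the dipole moments induced by a point source at the origin; the torques and AM leakage discussed above then follow from the surface integral of Eq. (3). The static polarizability is normalized so that the field-free resonant form \(\alpha=-(3\pi/k^{3})/(\delta+i/2)\) is recovered.

```python
import numpy as np

# Illustrative parameters (assumptions of this sketch, not production values).
k = 1.0                        # vacuum wave number, sets the length unit
omega0 = 1.0                   # resonance frequency (arbitrary units)
gamma = 1e-3 * omega0          # radiative damping rate
delta = 2.0                    # detuning (omega - omega0)/gamma
omega = omega0 + delta * gamma
omega_c = 1e-4 * gamma         # cyclotron (Zeeman) frequency, assumed small
alpha0 = 6 * np.pi * gamma / (k**3 * omega0)   # static polarizability, chosen
# so that without field alpha ~ -(3*pi/k^3)/(delta + i/2) near resonance

# Zeeman operator (i eps . B0_hat) for B0 along z; eigenvalues are 0, +1, -1.
S = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)

# Eq. (8) as a 3x3 tensor: alpha = alpha(0) w0^2 [w0^2 - (w + w_c S)^2 - i g w]^-1
W = omega * np.eye(3) + omega_c * S
alpha = alpha0 * omega0**2 * np.linalg.inv(
    omega0**2 * np.eye(3) - W @ W - 1j * gamma * omega * np.eye(3))

def propagator(rvec):
    """Field of a unit dipole at separation rvec (Gaussian units):
    k^2 e^{ikr}/r [(1 + i/kr - 1/(kr)^2) I - (1 + 3i/kr - 3/(kr)^2) nn]."""
    r = np.linalg.norm(rvec)
    n = rvec / r
    kr = k * r
    t1 = 1 + 1j / kr - 1 / kr**2
    t2 = -(1 + 3j / kr - 3 / kr**2)
    return k**2 * np.exp(1j * kr) / r * (t1 * np.eye(3) + t2 * np.outer(n, n))

# N dipoles at uniform random positions in a sphere of radius a (here ka = 13).
rng = np.random.default_rng(0)
N, a = 200, 13.0 / k
u = rng.normal(size=(N, 3))
pos = u * (a * rng.random(N) ** (1 / 3) / np.linalg.norm(u, axis=1))[:, None]

# Coupled-dipole system A d = E_inc, with A the 3N x 3N complex matrix.
A = np.zeros((3 * N, 3 * N), dtype=complex)
inv_alpha = np.linalg.inv(alpha)
for i in range(N):
    A[3*i:3*i+3, 3*i:3*i+3] = inv_alpha
    for j in range(N):
        if j != i:
            A[3*i:3*i+3, 3*j:3*j+3] = -propagator(pos[i] - pos[j])

d_src = np.array([1.0, 0.0, 0.0], dtype=complex)   # source dipole along x
E_inc = np.concatenate([propagator(pos[i]) @ d_src for i in range(N)])
d = np.linalg.solve(A, E_inc).reshape(N, 3)        # induced dipole moments
```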
The leak of AM is calculated by performing the surface integral in Eq. (3) numerically for different realizations of the dipole positions, which are distributed randomly within a sphere of given radius \(a\) at homogeneous average density. For more details we refer to Ref. [6]. It was explicitly checked that the surface integral did not
Figure 2: Total normalized leakage rate of angular momentum (blue) as a function of detuning \(\delta=(\omega-\omega_{0})/\gamma\) from the dipole resonance, separated into torque on the source (orange) and torque on the environment (green). The optical depth is \(\tau=1.9\) and dimensionless dipole density is \(\eta=0.3\).
depend on the choice of \(R>a\), as required by conservation of AM. Once this is verified, it is convenient to evaluate Eq. (3) in the far field \(R\gg a\), where the fields simplify. Our code was also tested on flux conservation and obeys the Optical Theorem.
## IV Numerical results
In this paper we focus on numerical results obtained for different detunings \(\delta\) and relatively large optical depths \(\tau\equiv a/\ell\), in the hope of seeing major trends that can be extrapolated to even larger detunings. This regime rapidly becomes challenging since \(\tau\sim\frac{3}{2}N/(ka\times\delta)^{2}\), so that for \(\delta\gg 1\) and \(\tau\gg 1\) we need large \(N\). Typically, for \(\delta=2\), the best we have done so far, and \(\tau=5\), we already need \(N=2000\) in a sphere of 13 inverse wave numbers in radius (\(ka=13\)). These numbers imply a value for the number of dipoles per optical volume \(\eta\equiv 4\pi n/k^{3}=3N/(ka)^{3}\sim 2.5\), i.e. the dipoles are largely located in each other's near field. Nevertheless, for a detuning \(\delta=2\), dipoles still scatter more or less independently because \(k\ell\sim 2\delta^{2}/\eta=3>1\)[11], but this is no longer true for \(\eta=6\). This implies that completely unknown effects such as weak localization may affect the radiative transfer of AM.
After an ideal average over all \(N\) dipole positions, the magnetic field is the only orientation left in the problem and we expect that \(\mathbf{M}=\kappa\hat{\mathbf{B}}_{0}\), with \(\kappa\) a real-valued scalar to be calculated that can have either sign. Following earlier work [3; 4; 6] we normalize the leakage of AM by the radiated amount of energy and introduce the dimensionless AM \(\kappa\omega/P\), with \(P\) the radiated amount of energy per second. This number is linear in the material parameter \(\mu\) introduced earlier and can directly be related to the Hall angle of the Poynting vector in the far field of the sphere. Alternatively, the number quantifies the amount of leaked angular momentum expressed in \(\hbar\), normalized per emitted photon.
In Figures 2 and 3 we show the normalized AM leakage for an optically thin sphere as a function of detuning. The bars in all figures denote the typical support of the full PDF when calculating the torque for 1000 different realizations of the dipole positions. Except for the spin leakage rate, they are large, and all AM contributions related to the source and to orbital momentum are genuine mesoscopic parameters. The optical depth \(\tau=1.9\) and average density \(\eta=0.3\) are kept constant, which means that both the number \(N\) of dipoles and the radius \(a\) of the sphere change as \(\delta\) is varied. It is seen that the dimensionless AM depends significantly on detuning and changes sign near the resonance at \(\delta=0\). In this weakly scattering regime single scattering is still dominant. For a thin layer in the far field of the source we can derive a profile \(\kappa\omega/P\sim\eta\,\mathrm{Im}\,\alpha^{2}\sim-\eta\delta/(\delta^{2}+1/4)^{2}\), independent of distance. This corresponds more or less to the observed profile of the total AM in Figures 2 and 3, which is nevertheless affected by higher-order scattering events. Both figures also show how the total AM leakage splits up into either spin + orbital AM (Figure 2) or torque on source + torque on environment (Figure 3). All contributions are of the same order of magnitude but not always of the same sign. In particular, for this set of parameters the two parts of both decompositions have opposite signs.
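This single-scattering estimate is simple enough to evaluate directly. The short sketch below (Python; \(\eta=0.3\) as in Figure 2) tabulates \(-\eta\delta/(\delta^{2}+1/4)^{2}\), which is odd in \(\delta\), changes sign at resonance, and is extremal at \(\delta=\pm 1/(2\sqrt{3})\approx\pm 0.29\).

```python
import numpy as np

eta = 0.3
delta = np.linspace(-3, 3, 601)
# single-scattering profile ~ eta * Im(alpha^2), up to overall constants
profile = -eta * delta / (delta**2 + 0.25)**2

# odd in delta, with extrema at delta = +-1/(2*sqrt(3)) ~ +-0.29
d_star = delta[np.argmax(np.abs(profile))]
print(f"extremum at delta = {d_star:+.2f}")
```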
The picture changes significantly in the multiple scattering regime. Figures 4 and 5 show the same normalized leakage of AM for optical depth \(\tau=4.6\) and a dimensionless density as large as \(\eta=6\). Going to smaller values of \(\eta\) would require too large a value of \(N\). The total leakage is now negative for all (positive) detunings, and spin leakage and orbital leakage have the same sign. Except near resonance, it is dominated by leaks in orbital AM. From Figure 5 we see that the torque on the source is mainly positive but changes sign with detuning near resonance. This implies again that for most detunings source and environment are subject to opposite torques. A normalized torque on the source around \(0.1-0.2\) is not much different from what was found for \(\eta=0.3\)[6]. For low densities this torque increased with \(\eta\) but seems to saturate for \(\eta>0.3\). The numbers for the total leakage rate (\(-0.2\pm 0.1\)) are almost one order of magnitude smaller than what we found for \(\eta=0.03\) and \(\tau\approx 2\) in Ref. [6], and one may speculate about the possibility of some process related to "weak localization" that reduces the transfer of angular momentum.
To get an order of magnitude for this effect we consider a homogeneous gas with atoms of mass \(Z\times m_{H}\) in a sphere of size \(a\). For an ideal gas at room temperature the density is roughly \(n_{0}=40\) mol/m\({}^{3}\). If we assume that all leaked angular momentum after a time interval \(\Delta t\) is transferred homogeneously to mechanical momentum of
Figure 3: As in the previous figure with the same fixed parameters \(\tau=1.9\) and \(\eta=0.3\), but this time the total leakage has been split up into leakage of orbital angular momentum (green) and leakage of spin (orange).
the sphere, we can estimate that its angular velocity is
\[\Omega[\mathrm{rad/s}]=5.7\,\frac{\kappa\omega}{P\mu}\times\frac{\mu}{Z}\times\frac{\left(\frac{P}{10\,\mathrm{W}}\right)\left(\frac{\lambda}{500\,\mathrm{nm}}\right)\left(\frac{\Delta t}{\mathrm{days}}\right)}{\left(\frac{a}{1\,\mathrm{cm}}\right)^{5}\left(\frac{n}{n_{0}}\right)} \tag{9}\]
For \(\kappa\omega/P\mu=0.3\), \(\mu/Z\sim 10^{-5}\) the angular rotation is typically of the order of 1 mrad/s after 100 days.
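As a plausibility check, Eq. (9) can be evaluated directly with the numbers just quoted (all reference ratios set to unity except the elapsed time):

```python
# Order-of-magnitude evaluation of Eq. (9) for the example values in the text.
norm_leakage = 0.3   # kappa*omega/(P*mu)
mu_over_Z = 1e-5
dt_days = 100.0
# power, wavelength, radius and density ratios all set to one
Omega = 5.7 * norm_leakage * mu_over_Z * dt_days
print(f"Omega = {Omega:.1e} rad/s")   # ~1.7e-3 rad/s, i.e. the mrad/s scale
```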
## V Conclusion
In this work we have reported an exact numerical study of the radiation of electromagnetic angular momentum by a light source embedded in a disordered and magneto-active environment, described by resonant electric dipoles with Zeeman splitting that scatter light elastically. The angular momentum is directed along the magnetic field and its transfer is directed radially outward from the source. It is in general composed of both spin transport and transport of orbital angular momentum. The first implies polarization of the radiated light; the second is related to an energy flux circulating around the object and the magnetic field. Leakage of angular momentum has been quantified by a dimensionless parameter that is essentially the ratio of the angular momentum leakage rate (with physical unit Joule) and the energy emitted by the source during one optical cycle. This number can be seen to be equal to the angular momentum, expressed in \(\hbar\), transferred per emitted photon to source and environment. It is proportional to the product of a pure material parameter \(\mu\), associated with the magneto-scattering of the dipoles, and the dimensionless numbers shown in the figures, which result from scattering. By conservation of angular momentum this transfer gives rise to torques on both the emitting source and the scattering environment. All contributions are seen to be of the same order, can have mutually opposite signs, and depend on the detuning from the resonance.
The regime of Thomson scattering is interesting for astrophysical applications and is a major scattering mechanism in our Sun. It corresponds to large detunings, where the phase shift between incident and scattered field becomes negligible. Out-of-phase response seems essential for the leakage to exist, even for large optical depths. The amount of multiple scattering, quantified by the optical depth, certainly affects the radiative transfer of angular momentum, but it is difficult at this point to deduce general trends. Our simulations are clearly in need of a radiative transport theory for angular momentum in magnetic fields that, to our knowledge, does not exist. From our simulations we suspect that different parts of the environment undergo different torques. We also expect that as the optical thickness of the environment increases, the precise nature of the source becomes less important, quite opposite to what was found for a homogeneous environment.
Indeed, this picture may possibly apply to stellar atmospheres, globally exposed to the magnetic dipole fields of their nuclei. Although all ingredients are present for leakage of angular momentum to exist, many extra complications, such as broadband radiation, Doppler broadening, etc., make quantitative predictions difficult.
Figure 4: Total leakage of optical angular momentum (blue) as a function of detuning. The orange and green curves represent spin and orbital momentum. The optical depth and average density are kept constant (\(\tau=4.6\) and \(\eta=6\)). The calculations have been done only for \(\delta>0\), i.e. blue-shifted from the resonance.
Figure 5: As in Figure 4, for clarity only the torque on the source has been shown. Bars denote the support of the full probability distribution (PDF) over 1000 realizations. |
2309.03792 | Emergence of crystalline steady state in a driven superfluid | The spontaneous emergence of structures from initially homogenous systems
belongs to the most striking topics in natural science. Systems driven into
deeply nonlinear regimes are theoretically difficult to describe and can
produce states that do not exist in equilibrium. We observe the emergence of a
stable square lattice density modulation from an initially homogenous,
two-dimensional, radially symmetric Bose-Einstein condensate when periodically
driving the two-particle interaction. We show theoretically that this state can
be understood as an attractive fixed point of coupled nonlinear amplitude
equations, which result from phonon-phonon interactions. As a self-stabilized
state characterized by spontaneously broken translational symmetry, our results
establish a novel quantum material related to supersolids. | Nikolas Liebster, Marius Sparn, Elinor Kath, Keisuke Fujii, Sarah Görlitz, Tilman Enss, Helmut Strobel, Markus K. Oberthaler | 2023-09-07T15:43:18Z | http://arxiv.org/abs/2309.03792v3 | # Emergence of crystalline steady state in a driven superfluid
###### Abstract
The spontaneous emergence of structures from initially homogenous systems belongs to the most striking topics in natural science. Systems driven into deeply nonlinear regimes are theoretically difficult to describe and can produce states that do not exist in equilibrium. We observe the emergence of a stable square lattice density modulation from an initially homogenous, two-dimensional, radially symmetric Bose-Einstein condensate when periodically driving the two-particle interaction. We show theoretically that this state can be understood as an attractive fixed point of coupled nonlinear amplitude equations, which result from phonon-phonon interactions. As a self-stabilized state characterized by spontaneously broken translational symmetry, our results establish a novel quantum material related to supersolids.
Many physical systems exhibit the formation of large scale patterns, both in the ground state and in the course of dynamics. Understanding these structures can provide simplification as well as classification of complex phenomena. These patterns range from simple structures such as stripes and lattices to more complex arrangements such as spirals, and emerge in a wide variety of disciplines [1; 2; 3; 4]. In the last decades, powerful theoretical tools to analyze and classify pattern formation have been developed. In certain cases, the development and stability of patterns can be analyzed by calculating the evolution of amplitudes of elementary patterned states such as stripes, i.e., \(A_{j}e^{i\mathbf{k}_{j}\cdot\mathbf{x}}+c.c.\). The complex amplitudes \(A_{j}\) are assumed to vary slowly in time and are used to capture the dynamics resulting from complex underlying details. In two dimensions, under the assumptions of rotational invariance as well as generalized parity, translation, and inversion symmetry, one constructs the phenomenological amplitude equation [1; 5]
\[\frac{\mathrm{d}A_{j}}{\mathrm{d}t}=\varepsilon A_{j}-\gamma\left|A_{j} \right|^{2}A_{j}+\gamma\sum_{i\neq j}G\left(\theta_{ij}\right)\left|A_{i} \right|^{2}A_{j}. \tag{1}\]
Here, \(\varepsilon\) describes exponential growth of the individual stripes, \(\gamma\) the self-interaction of a single stripe pattern, and \(\gamma\,G(\theta_{ij})\) the cross-interaction between stripes. The first nonlinear term describes self interaction leading to saturation, while the second is the interaction between different stripes with angle \(\theta_{ij}\) between them. While \(\left|\mathbf{k}_{j}\right|=k_{c}\) is assumed to be constant, the direction is not fixed. The functional form of \(G(\theta_{ij})\) determines the stability of certain arrangements, for example square lattices. While the values of constants are set by the given physical system, the mathematical form of this equation is generic and does not depend on microscopic details.
An interacting degenerate Bose gas naturally implements a very similar mathematical model if the interaction is driven periodically. The drive leads to the exponential growth of density waves (\(\varepsilon>0\)) at a characteristic wavenumber \(k_{c}\). The corresponding length scale is set by the driving frequency through the dispersion relation of elementary excitations to good approximation, and a mean background interaction leads to a non-vanishing \(\gamma\). Previous work in ultracold gases has extensively employed the growth term of eq. (1) in one dimension [6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. In two dimensions, specific dual-frequency driving with negligible background interaction has been used to tailor scattering processes, leading to square and hexagonal lattices [16]. Driving at a single frequency tuned to the harmonic trapping frequency has been shown to produce highly-structured surface waves [17].
Here, we report on the spontaneous emergence of a square lattice state in a two-dimensional BEC driven with a single frequency that is not fine-tuned to other system parameters, in contrast to the previously mentioned studies. This state is stabilized due to a cubic nonlinearity similar to eq. (1) and emerges naturally as a result of two factors: large occupations of produced excitations and a positive background interaction, which ensures significant coupling between phonons. We show that the stability of the square lattice can be understood in terms of a stable fixed point of amplitude equations, whereas lattice solutions with \(\theta_{1,2}\) very different from \(90^{\circ}\) are unstable.
As is generically the case in systems with spontaneously broken translational symmetry, boundary effects play an important role in the development and stability of emergent patterns; we implement a novel box potential with slanted walls to achieve absorptive boundary conditions. The resulting suppression of coherent reflections at the boundary leads to dynamics similar to an infinitely extended system. This makes it possible to experimentally demonstrate the emergence of square lattices, which are explained theoretically as a fundamental structure of driven superfluids in the infinitely extended limit. In the following, we first present experimental results on the emergence of these patterns in our driven
condensate, and then detail the theoretical description of the non-equilibrium fixed point.
**Emergence of Lattices in Experiment** We experimentally realize a quasi two-dimensional BEC of around 30,000 \({}^{39}\)K atoms, with trapping frequency \(\omega_{z}=2\pi\times 1.5\) kHz in the gravity direction. In the horizontal plane, the trap shape is circularly symmetric and flat, with walls that are linearly slanted, such that the density distribution of the cloud is uniform and falls linearly at the edges (see materials and methods). This is crucial for the emergence of square patterns, as this density distribution implements absorptive boundaries. To achieve gain at a specific length scale (\(\varepsilon>0\)), we periodically change the s-wave scattering length \(a_{s}\) with a frequency \(\omega_{d}\) by varying the external magnetic field near the 561 G Feshbach resonance [18], such that \(a_{s}(t)=\bar{a}_{s}(1-r\sin\omega_{d}t)\), with \(0<r<1\) and \(\bar{a}_{s}\) set to \(100a_{0}\), where \(a_{0}\) is the Bohr radius. The resulting chemical potential is \(\mu\sim 2\pi\times 300\) Hz.
Single shots of _in situ_ density distributions as well as momentum distributions after various drive periods are shown in fig. 1. Density distributions are imaged directly after the drive with high field absorption imaging [19]. Alternatively, we extract the momentum space distribution with a phase space rotation, by rapidly switching off the atomic interaction using the zero-crossing of the Feshbach resonance and allowing the cloud to propagate in a weak harmonic trap in the horizontal plane for a quarter period (\(2\pi\times 4.7\) Hz) [20; 21]. The vertical confinement is ramped down to a weak value, just enough to retain the atoms within the focal plane of the imaging system.
At early times (7 periods of driving, fig. 1a), excitations appear at the critical length scale \(2\pi/k_{c}\) but no obvious pattern has yet emerged. In momentum space, the excitations appear distributed on a ring of the resonant momentum \(k_{c}\). These early times are well understood to show growth due to parametric resonance [22; 23; 24; 25], as the energy input from the drive produces pairs of quasi-particles, where the frequency of each quasi-particle \(E\) is half a drive quantum, \(E=\omega_{d}/2\). The corresponding \(k_{c}\) is determined by the Bogoliubov dispersion relation.
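The resonant wavenumber follows by inverting the Bogoliubov dispersion relation. A minimal Python sketch, using the \({}^{39}\)K mass and the parameters quoted above (and reading the drive frequency of fig. 1 as \(\omega_{d}=2\pi\times 400\) Hz, an assumption of this sketch), recovers the resonant ring at \(k_{c}\approx 0.7\,\mu\mathrm{m}^{-1}\):

```python
import numpy as np

hbar = 1.0545718e-34           # J s
m = 39 * 1.66053907e-27        # mass of 39K in kg
mu = 2 * np.pi * 300.0         # chemical potential as angular frequency, rad/s
omega_d = 2 * np.pi * 400.0    # drive frequency, rad/s (assumed 2*pi x 400 Hz)

# Solve E = sqrt(eps*(eps + 2*mu)) = omega_d/2 for eps = hbar*k^2/(2m),
# with all energies expressed as angular frequencies.
E = omega_d / 2
eps = -mu + np.sqrt(mu**2 + E**2)
k_c = np.sqrt(2 * m * eps / hbar)
print(f"k_c = {k_c * 1e-6:.2f} / um")   # ~0.7 um^-1, matching the data
```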
Figure 1: **Structure Formation.** **a**, Real space (top row) and momentum space (bottom row) distributions after 7 shake periods with \(r=0.4\) and \(\omega_{d}=400\)Hz. **b**, Distributions after 14 periods, where density modulations have become more apparent in real space and appear in momentum space as back-to-back correlated, randomly oriented peaks. **c**, Structures in the square lattice phase. The left column shows single realizations, whereas the right column shows averaged real and momentum space distributions. The smooth mean distributions indicate that structures are formed spontaneously in random directions. **d**, Late times show that square lattices at the characteristic length scale are still apparent, but other momenta also become occupied. Both color codes indicate the signal in atoms per pixel. **e**, The interaction is periodically modulated with a single frequency and a non-zero offset. The colored arrow indicates the various phases during time evolution.
Continuing the drive (14 periods, fig. 1b), the density distribution shows enhanced modulation at the critical length scale, but still shows no global structure. This regime can be understood as the coexistence of different, randomly oriented stripe patterns. This is directly revealed by the momentum distributions, which show clear back-to-back correlated peaks at the resonant momentum.
At intermediate times, global square lattice patterns emerge (21 periods, fig. 1c). This can be observed directly in real space as a square density modulation, or in momentum space as four momentum peaks, each separated by \(90^{\mathrm{o}}\). Averaging all realizations in real and momentum space, we recover the homogeneous density distribution and a radially symmetric ring in momentum space (second column of fig. 1c). This confirms that square patterns are formed spontaneously, i.e., with random orientation and spatial phase. For late times, real space distributions still show some structure, while momentum space distributions reveal that many other momenta beside \(k_{c}\) have become occupied, indicating that the model of superimposed stripes at a specific momentum as in eq. (1) is insufficient to capture the late-time dynamics.
The emergence and persistence of square lattices can be quantified by analyzing occupations as well as angular correlations in momentum space. In fig. 2a, momentum distributions averaged over many realizations are shown. The ring at \(k_{c}\) is pronounced and saturates in amplitude, and at late times a broad range of off-resonant momenta becomes populated. Figure 2b shows the occupation increase relative to the unperturbed condensate, centered at \(k_{c}=0.7\,\mu\mathrm{m}^{-1}\) as well as off-resonant momenta centered at \(k=0.44\,\mu\mathrm{m}^{-1}\), integrated over a width of \(0.26\,\mu\mathrm{m}^{-1}\). Early times show exponential growth at \(k_{c}\), as expected by parametric resonance (dashed black line in fig. 2b). After around 10 periods, a deviation from this exponential growth due to saturation becomes clear, and eventually occupations reach a steady value of 4,000 atoms in the summed region, relative to 30,000 condensed atoms. When resonant momenta \(k_{c}\) are saturated, non-resonant momenta also begin to grow and eventually have similar occupations to the resonant momenta (red line in fig. 2b). We will show in the following that in the regime
Figure 2: **Growth and Stability of Structures.** **a**, Mean momentum distributions at three representative drive times. Excitations at the resonant momentum are pronounced, and show radial symmetry. The occupations at \(k_{c}\) saturate, and at later times non-resonant momenta also become populated. **b**, A quantitative analysis of occupations in momentum space. The black data points show occupations at the resonant momentum \(k_{c}\), whereas the red points indicate off-resonant momenta. The inset shows the occupations at all momenta on a log scale from early (blue) to late (red) times; shaded regions correspond to the area yielding black and red data points. **c**, Correlation functions \(g^{(2)}(\delta\theta)\) at the same drive times as the momentum distributions plotted in (**a**). Early times show back-to-back correlations but no other structure. In the square lattice phase, peaks at \(\delta\theta=90^{\mathrm{o}}\) appear, as well as anti-correlations at other angles. Later times show dampened correlations but retain the peak at \(\delta\theta=90^{\mathrm{o}}\). Insets show representative single realizations at the corresponding drive times. **d**, Extracted values of \(\tilde{g}^{(2)}(\delta\theta)\) for \(\delta\theta=45^{\mathrm{o}}\) (purple), \(\delta\theta=90^{\mathrm{o}}\) (green), and \(\delta\theta=180^{\mathrm{o}}\) (blue). Near saturation of momentum occupations (**b**), the system begins to form \(90^{\mathrm{o}}\) correlations, while \(45^{\mathrm{o}}\) correlations are suppressed. The initial increase of back-to-back correlations at early times is an artifact due to enhanced signal-to-background. For all plots, standard errors are either shown or are smaller than the markers.
where occupations are saturated but modes with \(|\mathbf{k}|=k_{c}\) dominate, a description similar to eq. (1) captures the dynamics.
To describe the spatial structure of density waves, we analyze the correlations of particle distributions along the ring at \(k_{c}\) in single shots. Momentum distributions of single realizations are analyzed by binning occupations along the resonant momentum as a function of the polar angle \(\theta\). The auto-correlation of this vector is calculated, and these correlations are averaged over all realizations, yielding
\[g^{(2)}(\delta\theta)=\left\langle\frac{\left\langle n(k_{c},\theta)\,n(k_{c}, \theta+\delta\theta)\right\rangle_{\theta}}{\left\langle n(k_{c},\theta) \right\rangle_{\theta}^{2}}-1\right\rangle. \tag{2}\]
We further define \(\tilde{g}^{(2)}(\delta\theta)=g^{(2)}(\delta\theta)/g^{(2)}(0)\), a normalized value of the correlation that accounts for variations in total signal on the resonant ring. In this normalization, \(\tilde{g}^{(2)}(\delta\theta)=1\) indicates that every bin at \(\theta\) has the same magnitude as the bin at \(\theta+\delta\theta\), regardless of total occupations. Note that because this is an auto-correlation, the function only contains information about angle differences, \(\delta\theta\in[0^{\mathrm{o}},180^{\mathrm{o}}]\).
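A minimal implementation of this estimator (Python; the synthetic four-peak data standing in for measured shots are an assumption of this sketch) computes the circular autocorrelation of the binned ring signal and averages over realizations; for a square-lattice signal it returns \(\tilde{g}^{(2)}(90^{\mathrm{o}})\) close to one and a negative value at \(45^{\mathrm{o}}\).

```python
import numpy as np

def g2(profiles):
    """Eq. (2) for an array of shape (n_realizations, n_bins) holding the
    binned ring occupations n(k_c, theta), one row per shot."""
    corrs = []
    for n in profiles:
        # circular autocorrelation over theta via FFT
        ac = np.fft.ifft(np.abs(np.fft.fft(n)) ** 2).real / n.size
        corrs.append(ac / n.mean() ** 2 - 1.0)
    g = np.mean(corrs, axis=0)
    return g, g / g[0]                 # g2 and the normalized g2~

# toy data: four peaks separated by 90 deg at a random global orientation
rng = np.random.default_rng(0)
theta = np.arange(360.0)
shots = []
for _ in range(200):
    phase = rng.uniform(0, 360)
    n = sum(np.exp(-0.5 * (((theta - phase - q + 180) % 360 - 180) / 5.0) ** 2)
            for q in (0, 90, 180, 270))
    shots.append(n + rng.exponential(0.05, theta.size))   # noisy background

g, g_norm = g2(np.array(shots))
print(f"g2~(90deg) = {g_norm[90]:+.2f}, g2~(45deg) = {g_norm[45]:+.2f}")
```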
Exemplary \(g^{(2)}(\delta\theta)\) are plotted in fig. 2c. After 11 drive periods (fig. 2c, 1), an auto-correlation peak as well as a peak at \(\sim 180^{\mathrm{o}}\) appear, but no other structure is apparent, indicating randomly oriented stripes. During the square lattice phase (fig. 2c, 1), these correlations show a clear peak at \(90^{\mathrm{o}}\) and \(180^{\mathrm{o}}\), while angles between these peaks become anti-correlated. At even later times when redistribution effects begin to dominate (fig. 2b, 1), one can see that correlations have a slightly lower contrast, but the characteristic peaks at \(90^{\mathrm{o}}\) and \(180^{\mathrm{o}}\) remain. While the positive correlation at \(90^{\mathrm{o}}\) indicates that square lattices are produced frequently, the anti-correlation of angles other than \(90^{\mathrm{o}}\) indicates that single square lattice structures over the whole system are dominant in single realizations.
The dynamics can be characterized by extracting specific values of these correlations as a function of drive time. Figure 2d shows extracted values of correlation functions for \(\delta\theta=45^{\mathrm{o}}\), \(90^{\mathrm{o}}\), and \(180^{\mathrm{o}}\). Correlations corresponding to the lattice structure emerge in the course of the dynamics, when occupation numbers and back-to-back correlations are fully established, indicating that lattice formation is a higher-order process. These correlation values persist over a number of periods, indicating the emergence of the steady state, until very late times where redistribution effects dampen correlations at all angles. We note that lattices emerge over a large range of experimental parameters, including different background interaction strengths, drive frequencies, and geometries (among them a harmonic trapping potential), confirming the robustness of the formation process.
**Amplitude Equation** The emergence of this patterned state can be described by amplitude equations for standing waves on a BEC. The evolution of the BEC order parameter \(\Psi(\mathbf{x},t)\) is described by the time-dependent Gross-Pitaevskii equation (GPE),
\[\begin{split} i\hbar\frac{\partial\Psi(\mathbf{x},t)}{\partial t }=&\Bigg{(}-\frac{\hbar^{2}\nabla^{2}}{2m}+V(\mathbf{x})\\ &+g_{0}\left(1-r\sin\omega_{d}t\right)|\Psi(\mathbf{x},t)|^{2} \Bigg{)}\Psi(\mathbf{x},t).\end{split} \tag{3}\]
Here, \(m\) is the atomic mass, \(\hbar\) is the reduced Planck's constant, and the interaction strength is given by \(g_{0}=\frac{\sqrt{8\pi}\hbar^{2}}{m}\frac{\bar{a}_{s}}{l_{z}}\), where \(l_{z}=\sqrt{\frac{\hbar}{m\omega_{z}}}\). We describe the emerging pattern as a sum of two stripe patterns
\[\Psi(\mathbf{x},t)=\Psi_{\mathrm{uni}}(t)\Big{[}1+\phi_{k}(t)\cos\left( \mathbf{k}\cdot\mathbf{x}\right)+\phi_{p}(t)\cos\left(\mathbf{p}\cdot \mathbf{x}\right)\Big{]} \tag{4}\]
and insert this into the time-dependent GPE, eq. (3), with \(V(\mathbf{x})=0\). \(\Psi_{\mathrm{uni}}(t)\) is a uniform, infinitely extended background field with time evolution \(\Psi_{\mathrm{uni}}(t)=\sqrt{n_{0}}\,\exp[-i\mu t-i(\mu/\omega_{d})r\cos\omega_{d}t]\), where \(n_{0}\) is the 2D density. The vectors \(\mathbf{k}\) and \(\mathbf{p}\) with \(|\mathbf{p}|=|\mathbf{k}|=k_{c}\) have an angle \(\theta\in[0^{\mathrm{o}},180^{\mathrm{o}}]\) between them. The ansatz eq. (4) extends previous theoretical work, which analyzed the stability of a single standing wave and did not consider coupling between multiple waves [23]. Because the excited modes are Bogoliubov quasi-particles, the amplitudes of the standing waves are parameterized by
\[\begin{split}\phi_{k/p}(t)=&\left(1-\frac{\epsilon +2\mu}{E}\right)R_{k/p}(t)e^{i\frac{\omega_{d}}{2}t}\\ &+\left(1+\frac{\epsilon+2\mu}{E}\right)R_{k/p}^{*}(t)e^{-i\frac {\omega_{d}}{2}t}.\end{split} \tag{5}\]
Here, \(E=\sqrt{\epsilon(\epsilon+2\mu)}\) with \(\epsilon=\frac{\hbar k_{c}^{2}}{2m}\) is the corresponding Bogoliubov energy in units of frequency. \(R_{k/p}\) are complex amplitudes that vary slowly in time. The phase of \(R_{k/p}\) can be understood as a relative phase between the oscillation of the phonon and the drive, and \(|R_{k/p}|^{2}\) corresponds to phonon occupation numbers.
The dynamics of these amplitudes can be understood using a Ginzburg-Landau-type equation [5]. To derive this equation from eq. (3) and eq. (4), we perform a multiple-timescale analysis [26], resulting in the amplitude equations
\[\begin{split} i\frac{d}{dt}R_{k}(t)=&-i\alpha R_{k} ^{*}(t)-i\Gamma R_{k}(t)+\Delta R_{k}(t)\\ &+\lambda\left|R_{k}(t)\right|^{2}R_{k}(t)\\ &+\lambda\Big{[}c_{1}(\theta)\left|R_{p}(t)\right|^{2}R_{k}(t)+c_ {2}(\theta)R_{p}(t)^{2}R_{k}^{*}(t)\Big{]}\end{split} \tag{6}\]
and analogously for \(R_{p}\), with the \(k\) and \(p\) labels exchanged. Here, \(\alpha=r\frac{\mu\epsilon}{2E}\) is the exponential growth rate from parametric resonance, \(\Gamma\) is a phenomenological damping constant, and \(\Delta=\frac{\omega_{d}}{2}-E\) is a detuning term that is set to zero. The other constants \(\lambda=\mu\frac{5\epsilon+3\mu}{E}\), \(c_{1}(\theta)\) and \(c_{2}(\theta)\) are set by \(k_{c}\) and the angle between \(\mathbf{k}\) and \(\mathbf{p}\) (see materials and methods). This equation exhibits a
similar structure to eq. (1). Exponential growth at early times is given by the difference between gain \(\alpha\) and loss \(\Gamma\), whereas saturation and coupling of waves results from the nonlinear terms. Saturation occurs even for a single wave (i.e., \(R_{k}>0\) while \(R_{p}=0\)), as growth of one stripe is limited due to a GPE-type nonlinear interaction. Additionally, the last two terms capture the angle-dependent coupling between \(R_{k}\) and \(R_{p}\) and are characterized by the real functions \(c_{1}(\theta)\) and \(c_{2}(\theta)\). These determine whether stripe or lattice solutions are stable at a given angle, as discussed in the generic amplitude eq. (1) with factor \(G(\theta)\).
For a set angle \(\theta\), the time evolution of the complex amplitudes \(R_{k}\) and \(R_{p}\) takes place in a four-dimensional phase space, as each amplitude influences the magnitude and phase of the other (see materials and methods for coupling between magnitude and phase of complex amplitudes). For the following discussion, we set the angle to \(\theta=90^{\mathrm{o}}\).
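The approach to this fixed point can also be seen by integrating eq. (6) directly. In the Python sketch below, \(\alpha\), \(\Gamma\) and \(\lambda\) are set to illustrative values and, since the actual \(c_{1}(90^{\mathrm{o}})\) and \(c_{2}(90^{\mathrm{o}})\) are derived in the materials and methods and not reproduced here, placeholder couplings are assumed; for these values a small, slightly asymmetric seed grows and spirals into an equal-amplitude lattice fixed point.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, Gamma, lam = 1.0, 0.4, 1.0   # illustrative growth, damping, nonlinearity
c1, c2 = 0.5, 0.2                   # placeholder couplings at theta = 90 deg

def rhs(t, y):
    # direct transcription of eq. (6) with Delta = 0, split into real/imag parts
    Rk, Rp = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dRk = -1j * (-1j * alpha * np.conj(Rk) - 1j * Gamma * Rk
                 + lam * abs(Rk) ** 2 * Rk
                 + lam * (c1 * abs(Rp) ** 2 * Rk + c2 * Rp ** 2 * np.conj(Rk)))
    dRp = -1j * (-1j * alpha * np.conj(Rp) - 1j * Gamma * Rp
                 + lam * abs(Rp) ** 2 * Rp
                 + lam * (c1 * abs(Rk) ** 2 * Rp + c2 * Rk ** 2 * np.conj(Rp)))
    return [dRk.real, dRk.imag, dRp.real, dRp.imag]

y0 = [0.0, 0.01, 0.001, 0.012]      # small, slightly asymmetric seed
sol = solve_ivp(rhs, (0.0, 60.0), y0, rtol=1e-9, atol=1e-12)
Rk = sol.y[0, -1] + 1j * sol.y[1, -1]
Rp = sol.y[2, -1] + 1j * sol.y[3, -1]
# for these placeholder couplings both magnitudes settle near 0.73, consistent
# with the n_sat ~ sqrt(alpha^2 - Gamma^2)/lambda scaling discussed below
print(f"|Rk| = {abs(Rk):.3f}, |Rp| = {abs(Rp):.3f}")
```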
In order to visualize the structure of the amplitude equations, we utilize flow diagrams, which show the direction and magnitude of the first derivative \(\frac{d}{dt}R_{k/p}\) at all points in space. Flows in the four-dimensional phase space are projected onto the three-dimensional cut where the phases \(\varphi_{k/p}=\arg R_{k/p}\) are equal, and are illustrated in fig. 3a, where the color map shows the flow speed at a given point. The flow patterns reveal in-spirals toward two fixed points at nonzero amplitude and phase, marked by the blue and green spheres.
To further investigate the structure and stability of these fixed points, we focus on the plane \(\varphi_{k}=\varphi_{p}\) where all the fixed points lie (\(\varphi_{k/p}=\pi/4\) for \(\Gamma=0\)). Within this plane, by plotting the direction of the force field \(\frac{d^{2}}{dt^{2}}R_{k/p}\) (see [26]) we obtain the global stability diagram shown in fig. 3b. Three unique fixed points (i.e., steady states of the driven system, plotted as colored dots) can be identified: one where all amplitudes are zero (\(R_{k}=R_{p}=0\)), one where only one density wave has a nonzero amplitude (\(R_{k}=0,|R_{p}|>0\) or vice versa), and one lattice solution (\(|R_{k}|=|R_{p}|>0\)). The uniform fixed point \(R_{k}=R_{p}=0\) (grey) is unstable in the driven system, and the outgoing arrows represent exponential growth of small non-zero amplitudes (i.e. phonon occupation numbers) due to parametric resonance. While in one dimension, the single stripe fixed point would be a stable solution, in two dimensions this point is unstable towards forming a lattice at \(90^{\mathrm{o}}\). In the case of non-zero damping, the system will approach this stable fixed point.
**Observation and Probing of the Fixed Point** The location of the square lattice fixed point depends on the growth rate \(\alpha\), nonlinearity \(\lambda\), and damping \(\Gamma\). While \(\alpha\) is a parameter of the linear term that is straightforward to calculate, \(\lambda\) describes a nonlinear effect resulting from the interaction between waves, which is theoretically challenging. The phenomenological damping parameter \(\Gamma\) must be extracted experimentally. The fixed point can be identified by looking at the saturation of occupations in the long-time limit \(n_{\mathrm{sat}}\) (see inset of fig. 3c), where the system is well characterized by the fixed point. These are given by \(n_{\mathrm{sat}}\propto\frac{1}{\lambda}\sqrt{\alpha^{2}-\Gamma^{2}}\). Combining these observations with the extracted growth rate of the occupation num
Figure 3: **Attractive Fixed Point.** **a**, Flow diagram of a 3D cut from the full 4D phase space of the dynamics of \(R_{k/p}\), where \(\theta=90^{\mathrm{o}}\), \(\varphi_{k}=\varphi_{p}\), \(r=0.5\) and \(\Gamma=0.4\alpha\). Arrows show flow lines from the projection of the first derivative \(\frac{d}{dt}R_{k/p}\) onto this cut, and colors show the flow speed. Two in-spirals towards fixed points (blue and green points) are apparent. **b**, Stability diagram for \(\delta\theta=90^{\mathrm{o}}\). Arrows indicate the direction of the second derivative \(\frac{d^{2}}{dt^{2}}R_{k/p}\) in the plane where \(\varphi_{k}=\varphi_{p}=\pi/4\) for \(\Gamma=0\). The three unique fixed points are shown as colored points: unstable points for the homogeneous (gray) and stripe (blue) solutions, and a stable point for a lattice solution (green). **c**, Extracted values of saturated momentum occupations as a function of driving amplitude \(r\). The inset shows occupations near \(k_{c}\) for different values of \(r\), integrated over the same region as in fig. 2b. Saturated occupations \(n_{\mathrm{sat}}\) are determined by averaging the last 5 points, and the value is shown as the dashed line in the inset and as green points in the main plot. The green points of the main plot approximate the occupations at the fixed point at various drive amplitudes. The dashed line is a fit to the data with the theoretically predicted functional form \(F(r)=a\sqrt{(r/b)^{2}-1}\). Standard errors are smaller than the data points.
ber at early times \(n(t)=e^{2(\alpha-\Gamma)t}n(t=0)\) (fig. 2b), we determine these parameters (for details see materials and methods). We find quantitative agreement with the theoretical prediction for \(\alpha\). The experimentally-determined \(\lambda_{\text{eff}}\) is a factor of three larger than the value predicted by the theory framework. The strongly reduced theoretical model includes neither finite-size effects, occupation-dependent loss, nor additional excitations beyond a two-stripe description.
Building on our experimental capabilities of generating arbitrary potentials and, with that, arbitrary density distributions [27], we can test the response of our system to specific angles and phases of structures by explicitly imprinting multiple stripes onto the condensate prior to the drive. We experimentally seed excitations with amplitudes and phases at values different from the stable fixed point solution. This is achieved by loading the BEC into a trapping potential with additional periodic spatial modulations to seed stripes. These periodic potentials are switched off individually with given delay times to set the initial phases of the stripes with respect to the drive. After the delays, we switch on the driving of the interaction, and detect the populations in the imprinted stripes after a given drive time. We thus have extensive control over the imprinted waves: the angle between waves \(\theta\) is set by the shape of the modulated potential, the intensity of the projected light field sets \(|R_{k/p}|\), and the time between switching off the modulated potential and starting the drive sets \(\varphi_{k/p}=\arg(R_{k/p})\).
In particular, we contrast the different evolutions of imprinted lattices at two angles, \(\theta=90^{\text{o}}\) and \(\theta=30^{\text{o}}\), as shown in fig. 4. Initial momentum occupations for the two cases are shown in the left plots of fig. 4a and d. We extract \(|\phi_{k/p}|^{2}\) by summing over a region around the imprinted momenta, indicated by red and yellow boxes, and average the \(\pm k\) peaks. Even for constant phonon occupations, the measured atom numbers in momentum space oscillate in time. By fitting the function \(f(t)=A\left(1-\cos\left(\omega_{d}t+2\varphi\right)\right)+\text{const.}\) to single periods of these oscillating populations, we can extract the slowly-varying magnitude of the stripe amplitudes \(|R_{k/p}|=\sqrt{\frac{A\epsilon}{N\mu}}\), and the phase relative to the shaking \(\varphi=\varphi_{k/p}\) (see materials and methods).
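The extraction step can be sketched as follows (Python; the synthetic single-period trace and its noise level are assumptions of this example):

```python
import numpy as np
from scipy.optimize import curve_fit

omega_d = 2 * np.pi * 400.0      # drive frequency in rad/s

def f(t, A, phi, C):
    return A * (1.0 - np.cos(omega_d * t + 2.0 * phi)) + C

# synthetic stand-in for the summed momentum-space populations of one period
t = np.linspace(0.0, 1.0 / 400.0, 25)
data = f(t, 120.0, -0.4, 10.0) + np.random.default_rng(1).normal(0.0, 3.0, t.size)

(A_fit, phi_fit, C_fit), _ = curve_fit(f, t, data, p0=[100.0, 0.0, 0.0])
print(f"A = {A_fit:.0f}, phi = {phi_fit:.2f} rad")
# |R| then follows from A via |R| = sqrt(A*eps/(N*mu)), as described above
```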
Imprints for both angles are realized with the same initial occupations and \(\varphi_{p}(t=0)\sim-\pi/8\), \(\varphi_{k}(t=0)\sim-\pi/2\). In both cases, one observes the quick phase-locking of the stripes to the drive, such that \(\varphi_{p}=\varphi_{k}=-3\pi/8\) (inset), showing that the dynamics of eq. (6) set
Figure 4: **Dynamics of imprinted lattices.** **a**, Ensemble average of the initial condition for imprinted patterns at \(90^{\text{o}}\). Occupations of \(R_{k}\) (\(R_{p}\)) are extracted by summing over the red (yellow) boxes. **b**, Extracted \(|R|\) values (pink dots) in the theoretically predicted stability diagram for \(\theta=90^{\text{o}}\). **c**, Oscillations of average occupations per summation box in momentum space of the imprinted waves, with error bars indicating standard errors. The inset shows the extracted phases of \(R_{k/p}\), with \(1\sigma\) fit errors. Though waves are imprinted with different phases, they quickly phase-lock with a slight delay to the drive. **d**, Average momentum distributions for a wave imprinted identically to (**a**), but with \(\theta=30^{\text{o}}\). **e**, Extracted \(|R|\) values and the stability diagram for \(\theta=30^{\text{o}}\), indicating an unstable lattice fixed point. **f**, Occupations of momenta for the triangular lattice, \(\theta=30^{\text{o}}\). Despite identical initial conditions to the imprinted wave at \(\theta=90^{\text{o}}\), growth of the second stripe is significantly suppressed.
not only the occupations but also the phase of the phonons. Despite similar phase evolutions for both angles, the dynamics of the occupations look dramatically different. While for the square lattice pattern both stripe directions grow over time, in the \(30^{\mathrm{o}}\) case only one of the stripe directions grows. This can be understood by looking at the stability diagrams for \(\theta=30^{\mathrm{o}}\) and \(\theta=90^{\mathrm{o}}\), plotted in fig. 4b and e. While the lattice solution for \(\theta=90^{\mathrm{o}}\) is a stable fixed point, for \(\theta=30^{\mathrm{o}}\) this point is unstable. Converting the extracted oscillation amplitudes to phonon amplitudes and plotting these in the respective stability diagrams (pink points in fig. 4b and e), one sees that the amplitudes of waves in the square lattice grow towards the fixed point, while the triangular lattice shows a lack of growth of \(R_{p}\).
These measurements thus give insight into the process of lattice formation at later drive times. Once randomly produced stripes have large amplitudes, the interaction between them becomes relevant. Stripes with orientations close to \(\theta=90^{\mathrm{o}}\) flow towards higher occupation numbers, whereas other orientations flow towards single stripe solutions.
The observation of a stable, attractive fixed point in a driven BEC reveals that interactions of phonons can produce strongly correlated states. Such highly correlated systems typically result from strong interactions; here we show that weak interactions paired with large occupations can produce similar effects. Additionally, fixed points characterize the transition between physical phases [28], and thus our observation points to a new state of matter out of equilibrium. The characteristics of the observed crystalline steady-state are indicative of supersolidity [29; 30; 31], but a clean demonstration of superfluidity as well as multiple speeds of sound remain an outstanding challenge.
|
2309.06832 | Dual-recycled interference-based weak value metrology | Weak-value-amplification permits small effects to be measured as observable
changes at the sacrifice of power due to post-selection. The power recycling
scheme has been proven to eliminate this inefficiency of the rare
post-selection, thus surpassing the limit of the shot noise and improving the
precision of the measurement. However, the improvement is strictly limited by
the system setup, especially the system loss. Here we introduce a dual
recycling model based on the interferometric weak-value-based deflection
measurement. Two mirrors, the power-recycling mirror and signal-recycling
mirror, are placed at the bright and dark port of the interferometer
respectively, creating a composite resonator. The results show that both the
power and the signal-to-noise ratio (SNR) are greatly enhanced in a wider range
of experimental parameters compared to the power-recycling scheme. This work
considerably loosens the constraint of the system setup and further explores
the real advantage of weak measurement over traditional schemes. | Zi-Rui Zhong, Wei-Jun Tan, Yue Chen, Qing-Lin Wu | 2023-09-13T09:32:11Z | http://arxiv.org/abs/2309.06832v2 | # Dual-recycled interference-based weak value metrology
###### Abstract
Weak-value-amplification permits small effects to be measured as observable changes at the sacrifice of power due to post-selection. The power recycling scheme has been proven to eliminate this inefficiency of the rare post-selection, thus surpassing the limit of the shot noise and improving the precision of the measurement. However, the improvement is strictly limited by the system setup, especially the system loss. Here we introduce a dual recycling model based on the interferometric weak-value-based deflection measurement. Two mirrors, the power-recycling mirror and signal-recycling mirror, are placed at the bright and dark port of the interferometer respectively, creating a composite resonator. The results show that both the power and the signal-to-noise ratio (SNR) are greatly enhanced in a wider range of experimental parameters compared to the power-recycling scheme. This work considerably loosens the constraint of the system setup and further explores the real advantage of weak measurement over traditional schemes.
_Introduction_. Weak measurement, first introduced by Aharonov, Albert and Vaidman in 1988[1], has shown significant potential in small effect estimations. In contrast to traditional measurements, the coupling between the system and the probe is slight, permitting small changes in the parameters to be visibly measurable. This amplification comes at the cost of a low detection probability. Therefore, the Fisher information of the parameters to be measured is concentrated in a few collected signals, so that the corresponding SNR is almost the same as in traditional measurements. With these characteristics, weak measurement has been successfully used in the detection of many small physical effects such as the spin Hall effect of light[2; 3; 4], optical beam deflection[5], phase shift[6; 7], Goos-Hanchen shift[8; 9], ultrasensitive velocity[10], and magnetic resonance[11]. Generally, the weak value can be written as \(A_{w}=\left\langle\psi_{f}\right|\hat{A}\left|\psi_{i}\right\rangle/\left\langle \psi_{f}|\psi_{i}\right\rangle\), where \(\left|\psi_{i}\right\rangle\) and \(\left|\psi_{f}\right\rangle\) are the pre- and post-selected states of the system, respectively. \(A_{w}\) may be very large if \(\left|\psi_{i}\right\rangle\) and \(\left|\psi_{f}\right\rangle\) are nearly orthogonal, resulting in a large amplification of the associated parameters. This is referred to as weak value amplification. However, the post-selection probability, written as \(P=|\left\langle\psi_{f}|\psi_{i}\right\rangle|^{2}\), is a second-order small quantity, which causes the signal to be too weak to detect. A solution named joint weak measurement[12; 13] uses a Mach-Zehnder interferometer to divide the incident light into two paths for detection. The probability of each path is almost 50%, which is a significant enhancement of the detected signal. Another problem is that the SNR is limited by shot noise. The power-recycling technique, initially introduced by Drever[14], is useful for precision improvement. By placing a partially transmitting mirror at the bright port of an interferometer to form a resonant cavity, it allows the failed post-selection photons to be repeatedly reselected, thus increasing the Fisher information of the signal to improve the SNR. Another technique is placing a partially transmitting mirror at the dark port of an interferometer. In this technique, which is named signal-recycling [15], the signal scales linearly with the number of cycles, improving the precision of the detection.
In the power-recycled weak value amplification scheme, all of the input light can in principle be acquired while maintaining the large pointer shift associated with previous weak value amplification[16; 17; 18]. However, the enhancement is strictly limited by the system loss \(\gamma\) and the reflection coefficient of the power recycling mirror[19]. The dual-recycling technique, first introduced by Meers for gravitational wave detection[20], combines power-recycling and signal-recycling in one Michelson interferometer to enhance the wave signal. It was demonstrated by Strain in an experiment in 1991[21] and successfully applied to Advanced LIGO[22; 23]. In this study, we implement the dual-recycling technique into standard weak-value-based metrology to improve the signal and SNR.
_Power recycling_. We first review the interferometric weak-value setup using the Sagnac interferometer to measure beam deflection in Ref. [5]. A continuous wave laser with a transverse Gaussian profile \(E_{0}(x)=(N^{2}/2\pi\sigma^{2})^{1/4}exp(-x^{2}/4\sigma^{2})\), in which \(\sigma\) is the modulated length and \(N\) is the average number of available photons per unit time, is sent through the interferometer. A Soleil-Babinet Compensator (SBC) controls the phase difference \(\phi\) between the clockwise \(\circlearrowright\) and counter-clockwise \(\circlearrowleft\) light. The small transverse momentum is induced by a piezo-driven mirror and can be measured by a split detector placed at the dark port. In this weak-value-based measurement, the system is the which-path degree of freedom of the light in the interferometer. We introduce the which-path operator \(\hat{W}=|\circlearrowleft\rangle\langle\circlearrowleft|-|\circlearrowright\rangle\langle\circlearrowright|\), defined with the two orthogonal circulating states \(|\circlearrowleft\rangle\) and \(|\circlearrowright\rangle\). The initial state can be written as \(|\psi_{i}\rangle=1/\sqrt{2}\left[ie^{i\phi/2}|\circlearrowleft\rangle+e^{-i\phi/2}|\circlearrowright\rangle\right]\), and the post-selected state is \(|\psi_{f}\rangle=1/\sqrt{2}\left[|\circlearrowleft\rangle+i|\circlearrowright\rangle\right]\). Thus, the weak value, which amplifies the kick \(k\) for each collected photon, is given by \(\langle\psi_{f}|\hat{W}|\psi_{i}\rangle/\langle\psi_{f}|\psi_{i}\rangle=-i\cot\left(\phi/2\right)\approx-2i/\phi\). Under the weak value parameter range of \(k\sigma\ll\phi/2\ll 1\), the signal-to-noise ratio (SNR)
\(\mathcal{R}\) is limited by the detected shot noise \(N_{det}\):
\[\mathcal{R}_{s}=\frac{\left\langle S\right\rangle}{\sqrt{N_{det}}}\approx 2\sqrt{ \frac{2}{\pi}}\sqrt{N_{det}}\frac{2k\sigma}{\phi}. \tag{1}\]
As shown in Fig. 1, the power recycling technique is used by placing a partially transmitting mirror at the bright port of the interferometer to reuse the photons that return to the laser. In the absence of the cavity, the number of detected photons is \(N_{det}=PN\), where \(P=\left|\left\langle\psi_{f}|\psi_{i}\right\rangle\right|^{2}\approx\left(\phi/2\right)^{2}\) is a very small quantity, so that \(N_{det}\ll N\). With the cavity, the detected number can ideally be increased to \(N\), meaning that all incident photons exit the dark port with the amplified deflection[16]. Therefore, the SNR is improved by \(2/\phi\) times, which is also the factor of improvement over the quantum limit. Next, we briefly explain why a resonator can allow all photons to be detected. We consider an initial amplitude \(E_{0}\) that enters a cavity formed by two identical partially transmitting mirrors with reflection coefficient r and transmission coefficient p (\(r^{2}+p^{2}=1\)).
\[E_{cav}=E_{0}p\left[1+r^{2}e^{i\theta}+(r^{2}e^{i\theta})^{2}+\dots\right]= \frac{p}{1-r^{2}e^{i\theta}}E_{0}. \tag{2}\]
The amplitude of reflected light is
\[E_{r}=\left(-r+\frac{p^{2}re^{i\theta}}{1-r^{2}e^{i\theta}}\right)E_{0} \tag{3}\]
For a resonant cavity, \(\theta\) satisfies \(\theta=2n\pi\), where n is any integer. Thus, \(E_{r}=0\) and \(E_{cav}=\frac{1}{p}E_{0}\). No photons are reflected back to the laser and the gain factor of the signal is \(G=\left|E_{cav}\right|^{2}/\left|E_{0}\right|^{2}=1/p^{2}\).
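These expressions are easily checked numerically; in the short sketch below (Python, with r=0.95 chosen arbitrarily), the resonant phase returns zero reflected intensity and an intracavity gain of \(1/p^{2}\), while an off-resonant phase does not.

```python
import numpy as np

r = 0.95
p = np.sqrt(1 - r**2)
for theta in (0.0, 0.3):                 # resonant and off-resonant phases
    denom = 1 - r**2 * np.exp(1j * theta)
    E_cav = p / denom                    # Eq. (2)
    E_r = -r + p**2 * r * np.exp(1j * theta) / denom   # Eq. (3)
    print(f"theta={theta:.1f}: |E_r|^2={abs(E_r)**2:.3f}, "
          f"gain={abs(E_cav)**2:.2f}")   # resonance: 0 and 1/p^2 ~ 10.3
```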
This calculation for the general model can be used in the power-recycling scheme. The 'system' state \(\left|\psi\right\rangle\) is formed by the path states \(|\circlearrowleft\rangle\) and \(|\circlearrowright\rangle\), and the 'meter' state \(\left|\varphi\right\rangle\), with the initial pointer for a single photon expressed as \(\left\langle x|\varphi_{0}\right\rangle=(1/2\pi\sigma^{2})^{1/4}exp(-x^{2}/4\sigma^{2})\), represents the transverse profile of the beam. Considering the loss caused by optical traversals, we introduce the operator \(\hat{L}=\sqrt{1-\gamma}\hat{1}\), where \(\gamma\) is the average power loss of one traversal. The spatial filter (SF) placed at the dark port of the interferometer is used to refresh the photons in power-recycled weak value metrology; it corrects the transverse shift of the beam on each pass. Ref. [16] calculates the steady state amplitude exiting the detection port, which is given by the sum of amplitudes over all traversal numbers
\[\left|\varphi\right\rangle = \frac{ip\sqrt{1-\gamma}\sin\left(\phi/2-k\hat{x}\right)}{1-r\sqrt {\left(1-\gamma\right)P_{+}}}\left|\varphi_{0}\right\rangle \tag{4}\]
where \(P_{+}\approx\cos^{2}\left(\phi/2\right)\). Under the impedance matching conditions of \(r=\sqrt{\left(1-\gamma\right)P_{+}}\) and \(p\approx\phi/2\), the amplitude of light reflected back toward the laser is 0:
\[\left|\varphi_{r}\right\rangle = \left[-r+\frac{p^{2}\sqrt{1-\gamma}\cos\left(\phi/2-k\hat{x} \right)}{1-r\sqrt{\left(1-\gamma\right)P_{+}}}\right]\left|\varphi_{0}\right\rangle \tag{5}\]
By substituting the impedance matching conditions into \(\left|\varphi\right\rangle\), the total number of detected photons is determined to be
\[N_{det}=N\int_{-\infty}^{\infty}dx\left|\left\langle x|\varphi\right\rangle \right|^{2}\approx N\left(1-\frac{4\gamma}{\phi^{2}}\right). \tag{6}\]
Ignoring higher order terms, the detector has an SNR of
\[\mathcal{R}_{p}\approx 4\sqrt{\frac{2}{\pi}}\sqrt{N}\frac{k\sigma}{\phi}\left(1- \frac{2\gamma}{\phi^{2}}\right). \tag{7}\]
The factor \(2\gamma/\phi^{2}\) derives from the loss, whose minimum value \(\gamma_{min}\approx k^{2}\sigma^{2}\phi^{2}/4\) is set by the SF. In the absence of this SF, the photons traveling in the cavity acquire opposite transverse displacements on successive traversals, which tend to diminish the detected signal, resulting in a decrease in the weak value amplification. This is called the walk-off effect[6; 24]. The filter can eliminate this effect by acting as a projection back onto the initial state, with a small adverse impact on the intensity of the signal. When the loss (including the filter loss) is small, \(\gamma\ll\phi^{2}/4\), the SNR is ideally increased by the weak value factor of \(2/\phi=1/p\) relative to single-pass weak measurement, while the weak value amplification does not change.
However, the \(1/p\)-fold improvement of the SNR is almost unreachable in experiments. The optical loss \(\gamma\), which is non-negligible, profoundly affects the gain. In Ref. [19], under the conditions of \(\gamma\approx 0.33\) and \(\phi/2\approx 0.32\), the intensity of the signal increases only 2.36 times, a factor that could reach 10 at lower loss. In addition, even at negligible loss, the optimal region is narrow in the power-recycling mode. For example, when \(\phi/2=0.01\), the matched reflection coefficient is approximately \(0.99995\pm 0.00004\), which is not easy to achieve technically. We will soon show that the dual-recycling scheme can overcome the aforementioned problems to some extent and has more potential for improving the SNR under nearly realistic conditions.
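To make the narrowness of this optimal region explicit, the sketch below evaluates the matched reflection coefficient \(r=\sqrt{(1-\gamma)P_{+}}\) with \(P_{+}\approx\cos^{2}(\phi/2)\), together with the photon recovery \(N_{det}/N\approx 1-4\gamma/\phi^{2}\) of Eq. (6), for a few illustrative \((\phi/2,\gamma)\) pairs within the validity range of the small-loss expansion:

```python
import numpy as np

# matched mirror reflectivity and photon recovery for small loss
for phi_half, gamma in [(0.01, 0.0), (0.01, 1e-5), (0.01, 2.5e-5)]:
    r_match = np.sqrt((1 - gamma) * np.cos(phi_half) ** 2)
    N_ratio = 1 - gamma / phi_half**2          # N_det/N from Eq. (6)
    print(f"phi/2={phi_half}, gamma={gamma:.1e}: "
          f"r={r_match:.5f}, N_det/N={N_ratio:.2f}")
# phi/2 = 0.01 gives r ~ 0.99995, and even gamma = 1e-5 already costs 10%
```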
_Dual recycling._ Our initial motivation for the dual-recycled weak value amplification scheme originated from a feasibility analysis of integrating the power-recycling and signal-recycling techniques into one weak measurement setup. Both techniques have their own advantages in improving \(N_{det}\): the partially transmitting mirror placed at the bright port of the interferometer reflects the photons returning to the laser back toward the dark port, and all these photons in turn acquire the gain of the resonator at the dark port. From this preliminary perspective the dual-recycling scheme appears feasible; however, we need to determine its limitations. When applied to the weak value measurement, the dual-recycling technique cannot break the limit of power recycling, because the final number of detected photons cannot exceed the number of incident photons (\(N_{det}\leq N\)) regardless of the cavity design. Therefore, we pay more attention to its tolerance for \(\gamma\) and \(r\), and especially to its improvement under nearly realistic loss, for which we set \(\gamma\in[0.1,0.3]\) in this paper. As
shown in Fig. 1, we slightly modify the experimental setup of Ref. [16] by adding a partially transmitting mirror \(M_{s}\) at the dark port. To simplify the model, we assume that \(M_{p}\) and \(M_{s}\) are two mirrors with the same parameters and that their distances to the BS are equal. In the power-recycling scheme of Ref. [16], the SF was proposed to refresh the transverse profile of the cyclic photons, thus eliminating the walk-off effect; this introduces a minimum filter loss \(\gamma_{min}\approx k^{2}\sigma^{2}\phi^{2}/4\) even for ideal optics. In the dual-recycling model, however, refreshing the cyclic photons is incompatible with the geometry: the SF would have to refresh all cyclic photons before their last pass through the PM, which is impossible in this two-path (clockwise and counterclockwise) system. Therefore, we remove the SF and take the walk-off effect into account in the subsequent calculations.
We also consider a transverse Gaussian profile \(E_{0}(x)=(N^{2}/2\pi\sigma^{2})^{1/4}\exp(-x^{2}/4\sigma^{2})\), so that the corresponding initial meter state of a single photon is written as \(\left|\varphi_{0}\right\rangle=\left(1/2\pi\sigma^{2}\right)^{1/4}\exp\left(-x^{2}/4\sigma^{2}\right)\left|x\right\rangle\). We introduce two unitary operators, \(\hat{U}_{SBC}=e^{-i\phi\hat{W}/2}\) and \(\hat{U}_{PM}=e^{ik\hat{x}\hat{W}}\), which represent the net phase shift \(\phi\) produced by the SBC and the weak coupling between the system and the meter induced by the PM, respectively. The system state of the bright port is defined as \(\left|\psi_{+}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\circlearrowright\right\rangle+i\left|\circlearrowleft\right\rangle\right)\), so that the corresponding state of the dark port is described by the orthogonal state \(\left|\psi_{-}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\circlearrowright\right\rangle-i\left|\circlearrowleft\right\rangle\right)\). Each passage of the photon through the BS can be seen as a post-selection projection. Considering that the system state can only be projected onto \(\left|\psi_{+}\right\rangle\) or \(\left|\psi_{-}\right\rangle\), there are four possible projections:
\[\begin{split} M_{11}&=\left\langle\psi_{+}\right| \hat{U}_{PM}\hat{U}_{SBC}\Big{|}\psi_{+}\Big{\rangle}=\cos\left(kx-\phi/2 \right)\\ M_{12}&=\left\langle\psi_{-}\right|\hat{U}_{PM} \hat{U}_{SBC}\Big{|}\psi_{+}\Big{\rangle}=i\sin\left(kx-\phi/2\right)\\ M_{21}&=\left\langle\psi_{+}\right|\hat{U}_{PM} \hat{U}_{SBC}\Big{|}\psi_{-}\Big{\rangle}=i\sin\left(kx-\phi/2\right)\\ M_{22}&=\left\langle\psi_{-}\right|\hat{U}_{PM} \hat{U}_{SBC}\Big{|}\psi_{-}\Big{\rangle}=\cos\left(kx-\phi/2\right)\end{split} \tag{8}\]
The subscripts 1 and 2 denote \(\left|\psi_{+}\right\rangle\) and \(\left|\psi_{-}\right\rangle\), respectively; for instance, the subscript 12 indicates that the initial state \(\left|\psi_{+}\right\rangle\) is projected onto \(\left|\psi_{-}\right\rangle\) in one traversal. We therefore define a measurement matrix \(M\)
\[M=\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}=\begin{bmatrix}\cos\left(kx-\phi/2\right)&i\sin \left(kx-\phi/2\right)\\ i\sin\left(kx-\phi/2\right)&\cos\left(kx-\phi/2\right)\end{bmatrix} \tag{9}\]
Figure 1: (Color online) Schematic of the weak-value-based deflection measurement via the dual-recycling technique. (a) A continuous Gaussian-shaped wave laser enters an optical cavity formed by two partially transmitting mirrors (\(M_{p},M_{s}\)) and a Sagnac interferometer. The Soleil-Babinet compensator (SBC) together with a half wave plate (HWP) controls the phase difference \(\phi\) between the two counterpropagating arms. The small transverse momentum is induced by a piezo-driven mirror (PM) and finally measured by a split detector placed at the dark port. (b) Optical mode matching. The lenses \(L_{1}\), \(L_{2}\) and \(L_{3}\) with matched focal lengths are set at proper positions to ensure that the waist position and self-reproduction of the Gaussian beam are at \(M_{s}\) and \(M_{p}\). BS: 50/50 beamsplitter, DP: Dove prism.
Figure 2: (Color online) Comparison of the power improvement factors of dual recycling, \(G_{d}\), and power recycling, \(G_{p}\). (a) \(\gamma=0\); \(G_{d}\) (dash-dotted red line) and \(G_{p}\) (dashed blue line) vary with \(r\). (b) \(\gamma=0.1,\ 0.2\); \(G_{d}\) and \(G_{p}\) vary with \(r\). The red star, red triangle, solid blue line and blue dot correspond to \(G_{d}(\gamma=0.1)\), \(G_{d}(\gamma=0.2)\), \(G_{p}(\gamma=0.1)\) and \(G_{p}(\gamma=0.2)\), respectively. (c) \(G_{d}\) in 3D. (d) \(G_{p}\) in 3D. \(\gamma\): the system loss. \(r\): mirror reflection coefficient.
\(\left(M^{n}\right)_{12}\) is the change in the state amplitude from \(\left|\psi_{+}\right\rangle\) to \(\left|\psi_{-}\right\rangle\) after n traversals, corresponding to the physical process in which incident light starts from the bright port traveling through the cavity n times before reaching the dark port for detection. Therefore, the steady state amplitude detected by the meter is given by the sum of the amplitudes from all traversal numbers,
\[\left|\varphi_{d}\right\rangle=p\sqrt{1-\gamma}\sum_{n=0}^{\infty}\left(r\sqrt {1-\gamma}\right)^{n}\left(M^{n+1}\right)_{12}p\left|\left.\varphi_{0}\right\rangle. \tag{10}\]
This series converges because both \(\gamma\) and \(r\) are less than 1. Thus, there exists a maximum effective traversal number \(n_{max}\), for which \((r\sqrt{1-\gamma})^{n_{max}}\approx 0\). The formula above can then be simplified as
\[\left|\varphi_{d}\right\rangle=p\sqrt{1-\gamma}\sum_{n=0}^{n_{ max}}\left(r\sqrt{1-\gamma}\right)^{n}\left(M^{n+1}\right)_{12}p\left| \left.\varphi_{0}\right\rangle\right. \tag{11}\] \[=p^{2}\sqrt{1-\gamma}\left(\frac{M-M(r\sqrt{1-\gamma}M)^{n_{ max}}}{I-r\sqrt{1-\gamma}M}\right)_{12}\left|\left.\varphi_{0}\right\rangle\right.\] \[\approx p^{2}\sqrt{1-\gamma}\left(\frac{M}{I-r\sqrt{1-\gamma}M} \right)_{12}\left|\left.\varphi_{0}\right\rangle\right.\] \[=\frac{ip^{2}\sqrt{1-\gamma}\sin\left(kx-\phi/2\right)}{1+r^{2} \left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(kx-\phi/2\right)}\left| \left.\varphi_{0}\right\rangle\right.\]
where \(I\) is the identity matrix. We perform a Taylor expansion of the function \(f\left(x\right)=\sin\left(kx-\phi/2\right)/[1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(kx-\phi/2\right)]\) and make the approximation \(f\left(x\right)\approx f\left(0\right)+xf^{\prime}\left(0\right)\approx f\left(0\right)\exp\left[xf^{\prime}\left(0\right)/f\left(0\right)\right]\) under the condition \(2kx/\phi\ll 1\), so that the final amplitude of the detected state is
\[\left\langle x|\varphi_{d}\right\rangle\approx A\left(1/2\pi\sigma^{2}\right) ^{1/4}\sin\left(\phi/2\right)exp[-\left(x-x_{2}\right)^{2}/4\sigma^{2}], \tag{12}\]
where
\[A=\frac{ip^{2}\sqrt{1-\gamma}}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma }\cos\left(\phi/2\right)} \tag{13}\]
and
\[x_{2}=\frac{2k\sigma^{2}\left\{\cos\left(\phi/2\right)\left[1+r^{2}\left(1-\gamma\right)\right]-2r\sqrt{1-\gamma}\right\}}{\sin\left(\phi/2\right)\left[1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(\phi/2\right)\right]}. \tag{14}\]
The corresponding number of detected photons is
\[\begin{split} N_{det}&=N\int_{-\infty}^{\infty}dx \left|\left\langle x|\varphi_{d}\right\rangle\right|^{2}\\ &\approx N\left[\frac{p^{2}\sqrt{1-\gamma}\sin\left(\phi/2 \right)}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(\phi/2 \right)}\right]^{2}\end{split} \tag{15}\]
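The geometric resummation used in Eq. (11) (and again in Eq. (16) below) is easy to verify numerically. A minimal sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

k, x, phi, r, gamma = 1.0, 0.1, 0.2, 0.9, 0.1   # assumed sample point
c, s = np.cos(k*x - phi/2), np.sin(k*x - phi/2)
M = np.array([[c, 1j*s], [1j*s, c]])            # measurement matrix, Eq. (9)
a = r*np.sqrt(1.0 - gamma)                      # per-traversal factor, |a| < 1

series = np.zeros((2, 2), dtype=complex)
P = M.copy()                                    # holds M^(n+1)
for n in range(2000):                           # a**n is negligible long before this
    series += a**n * P
    P = P @ M

closed = M @ np.linalg.inv(np.eye(2) - a*M)     # resummed form entering Eq. (11)
print(np.allclose(series, closed))              # True
```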
To obtain the maximum \(N_{det}\), we need a matched \(r\) that makes the light reflected back to the laser satisfy \(\left|\varphi_{r}\right\rangle=0\). Similarly, \(\left(M^{n}\right)_{11}\) represents the change in the amplitude of the light reflected back to the laser after \(n\) traversals. The meter state amplitude returning to the laser is
\[\left|\varphi_{r}\right\rangle =\left[-r+p^{2}\sqrt{1-\gamma}\sum_{n=0}^{n_{max}}\left(r\sqrt{1-\gamma}\right)^{n}\left(M^{n+1}\right)_{11}\right]\left|\varphi_{0}\right\rangle \tag{16}\] \[=\left\{\frac{p^{2}\sqrt{1-\gamma}\left[\cos\left(kx-\phi/2\right)-r\sqrt{1-\gamma}\right]}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(kx-\phi/2\right)}-r\right\}\left|\varphi_{0}\right\rangle\]
where the impedance matching condition after neglecting the higher-order terms is
\[r\approx\frac{2-\gamma-2\sqrt{1-\gamma}\sin\left(\phi/2\right)}{2\sqrt{1- \gamma}\cos\left(\phi/2\right)}. \tag{17}\]
By substituting (17) into (15), we can obtain the number of detected photons
\[N_{det}\approx N\left(1-\frac{2\gamma}{\phi}\right), \tag{18}\]
that is, essentially all of the input photons are detected, up to losses. Therefore, the corresponding SNR of the detector is
\[\mathcal{R}_{d}\approx 4\sqrt{\frac{2}{\pi}}\sqrt{N}\frac{k\sigma}{\phi}\left(1- \frac{\gamma}{\phi}\right). \tag{19}\]
It can be seen from (18) that \(N_{det}\) of dual recycling reaches its maximum \(N\) when \(\gamma\ll\phi/2\), while \(N_{det}\) of power recycling, given by (6), reaches its maximum \(N\) only when \(\gamma\ll\phi^{2}/4\). Since \(\phi/2\gg\phi^{2}/4\) for a small post-selection angle, the dual recycling scheme has a wider optimal region of loss under the same weak coupling. The dual-recycling model maintains its advantages under practical experimental losses. For example, for the common setting \(\gamma=0.1\) and \(\phi/2=0.1\), dual recycling enables an approximately 40.5-fold power improvement with the matched reflection coefficient \(r\approx 0.90\), whereas power recycling enables only a 9.2-fold power improvement with a larger matched parameter \(r\approx 0.95\); the ideal maximum power improvement is \((2/\phi)^{2}=100\). A similar conclusion is obtained from the SNR analysis.
\(N_{det}\) _comparison._ Here we introduce two parameters, \(G\) and \(Q\), which correspond to the power and SNR improvement factors relative to the standard weak value
Figure 3: (Color online) The correction of the beam transverse shift due to the walk-off effect. (a) \(\gamma=0\); \(x_{2}/x_{1}\) varies with \(r\). (b) \(\gamma\in\left[0.1,0.3\right]\); \(x_{2}/x_{1}\) in 3D. \(x_{1}\) and \(x_{2}\) are the beam transverse shifts before and after considering the walk-off effect. \(\gamma\): the system loss. \(r\): mirror reflection coefficient.
setup, for further comparison.
\[G_{p}=\left[\frac{p}{1-r\sqrt{\left(1-\gamma\right)}\cos\left(\phi/2\right)} \right]^{2}, \tag{20}\]
\[G_{d}=\left[\frac{p^{2}}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos \left(\phi/2\right)}\right]^{2}. \tag{21}\]
where the subscripts \(d\) and \(p\) correspond to dual recycling and power recycling, respectively. In this paper, we set \(\phi/2=0.01\), a small angle, to highlight the large advantages of weak-value amplification and cavity gain. However, this small post-selection angle corresponds to a high-finesse matched cavity, making the experimental accuracy difficult to control; in a real experiment, a proper adjustment of the angle is necessary. Next, we plot: (1) \(G_{p}\) and \(G_{d}\) as functions of the mirror reflection coefficient \(r\) for \(\gamma=0\), the ideal lossless case; (2) \(G_{p}\) and \(G_{d}\) as functions of \(r\) for \(\gamma\in\left[0.1,0.3\right]\), i.e., common losses.
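Before turning to the plots, the example quoted earlier (\(\gamma=0.1\), \(\phi/2=0.1\): roughly a 40.5-fold versus a 9.2-fold power improvement) can be reproduced from Eqs. (20) and (21) by maximizing over \(r\). The sketch below again assumes the lossless-mirror relation \(p^{2}=1-r^{2}\), which is our assumption rather than an explicit statement in the text:

```python
import numpy as np

gamma, phi = 0.1, 0.2                  # the earlier example, i.e., phi/2 = 0.1
r = np.linspace(0.5, 0.9999, 200001)
p2 = 1.0 - r**2                        # assumed lossless mirror: r^2 + p^2 = 1
a = np.sqrt(1.0 - gamma)*np.cos(phi/2)

G_p = (np.sqrt(p2)/(1.0 - r*a))**2                        # Eq. (20)
G_d = (p2/(1.0 + r**2*(1.0 - gamma) - 2.0*r*a))**2        # Eq. (21)

for name, G in (("power", G_p), ("dual", G_d)):
    i = int(np.argmax(G))
    print(name, round(float(G[i]), 1), "at r =", round(float(r[i]), 3))
# -> power 9.2 at r ~ 0.944 and dual 40.5 at r ~ 0.893, matching the quoted
#    improvements (and the matched coefficients within rounding)
```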
As shown in Fig. 2(a), both \(G_{p}\) and \(G_{d}\) can reach the maximum value \(\left(2/\phi\right)^{2}=10000\), as expected. We denote the mirror reflection coefficients corresponding to the peak values of \(G_{p}\) and \(G_{d}\) by \(r_{pm}\) and \(r_{dm}\), respectively. The slope of \(G_{p}\) near its peak is very large, which means that \(r_{pm}\) is confined to a narrow range, outside of which the gain decreases sharply. \(G_{d}\) changes gently over a wider range of \(r\), making it more tolerant of unavoidable mirror errors. In addition, \(r_{dm}\) is approximately 0.99, which is easier to obtain than the 0.99995 required for \(r_{pm}\). From this perspective, the dual recycling scheme, with its higher tolerance for \(r\), is more advantageous. Figs. 2(b), (c) and (d) intuitively show the comparison of the signal amplification effects of the two schemes for \(\gamma\in\left[0.1,0.3\right]\). Clearly, \(G_{d}\) is larger than \(G_{p}\) at the same loss \(\gamma\), indicating that the dual recycling scheme is better at overcoming the weak detected signal in the experimental setup.
_The SNR correction._ Due to the walk-off effect, the weak value amplification entering the SNR is weakened: the Gaussian profile acquires an opposite transverse shift, which decreases the SNR. We can predict that the smaller \(\gamma\) is, the larger the walk-off effect, because photons travel more round trips at small losses, which increases the weight of the cyclic term \(\cos\left(kx-\phi/2\right)\). In addition, the walk-off effect in the dual-recycling scheme is larger than that of the power-recycled system. Thus, the SNR of (19), which is not sufficiently accurate when the walk-off effect is non-negligible, should be corrected. In the standard weak value setup, the intensity of the detected light is given by
\[I_{d}\approx\frac{N}{\sqrt{2\pi\sigma^{2}}}\left(1-\gamma\right)\sin^{2}{( \phi/2)}exp\left[-\frac{1}{2\sigma^{2}}\left(x-x_{1}\right)^{2}\right], \tag{22}\]
where \(x_{1}=-4k\sigma^{2}/\phi\) is the transverse shift of the Gaussian wave caused by the weak interaction, which can be observed by measuring the change of the waveform. With the dual recycling cavity, this observable shift changes to \(x_{2}\). The corresponding SNR is then corrected to:
\[\mathcal{R}_{c}=\frac{x_{2}}{\triangle x_{2}}\approx\sqrt{\frac{2}{\pi\sigma ^{2}}}\sqrt{N_{det}}\cdot x_{2}. \tag{23}\]
Similarly, we can obtain the SNR improvement factor in the power recycling scheme:
\[Q_{p}=\frac{p}{1-r\sqrt{\left(1-\gamma\right)}\cos\left(\phi/2\right)}. \tag{24}\]
The factors of the dual recycling scheme before and after considering the walk-off effect are noted as \(Q_{d1}\) and \(Q_{d2}\), which are respectively given by
\[Q_{d1}=\frac{p^{2}}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left( \phi/2\right)} \tag{25}\]
and
\[Q_{d2}=\frac{p^{2}}{1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left( \phi/2\right)}\frac{x_{2}}{x_{1}}, \tag{26}\]
where
\[\frac{x_{2}}{x_{1}}=\frac{\phi}{2}\frac{\cos\left(\phi/2\right)\left[1+r^{2} \left(1-\gamma\right)\right]-2r\sqrt{1-\gamma}}{\sin\left(\phi/2\right)\left[1 +r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\left(\phi/2\right)\right]}. \tag{27}\]
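A minimal numerical evaluation of Eq. (27) at the operating point used in the discussion below (\(\phi/2=0.01\), \(\gamma=0.2\), \(r=0.9\)) confirms that the walk-off correction is tiny there:

```python
import numpy as np

def x_ratio(r, gamma, phi):
    """x2/x1 of Eq. (27)."""
    sq = np.sqrt(1.0 - gamma)
    num = np.cos(phi/2)*(1.0 + r**2*(1.0 - gamma)) - 2.0*r*sq
    den = np.sin(phi/2)*(1.0 + r**2*(1.0 - gamma) - 2.0*r*sq*np.cos(phi/2))
    return (phi/2.0)*num/den

print(x_ratio(r=0.9, gamma=0.2, phi=0.02))  # ~0.996: less than a 0.5% SNR reduction
```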
Figure 4: (Color online) The SNR improvement factors of dual recycling (\(Q_{d1}\) and \(Q_{d2}\)) and power recycling (\(Q_{p}\)). (a) represents the SNR correction of dual recycling. \(Q_{d1}\)(dash-dotted line) and \(Q_{d2}\)(dashed line) vary with r when \(\gamma=0\). (b) shows the comparison of the corrected SNR improvement factors between dual recycling and power recycling. The red star, red triangle, solid blue line and blue dot correspond to \(Q_{d2}\)(\(\gamma=0.1\)), \(Q_{d2}\)(\(\gamma=0.2\)), \(Q_{p}\)(\(\gamma=0.1\)) and \(Q_{p}\)(\(\gamma=0.2\)) respectively. (c) \(Q_{d2}\) in 3D. (d) \(Q_{p}\) in 3D. \(Q_{d1}\) and \(Q_{d2}\) are the SNR improvement factors before and after considering the walk-off effect. \(\gamma\): the system loss. r: mirror reflection coefficient.
We plot \(x_{2}/x_{1}\) as a function of \(r\) for different losses: (1) \(\gamma=0\) in Fig. 3(a) and (2) \(\gamma\in[0.1,0.3]\) in Fig. 3(b). As expected, the opposite shift is smaller at larger loss. Under ideal lossless conditions, the opposite shift at matched \(r\) can be of the same magnitude as the transverse shift from the weak interaction, and can even reverse the direction of the detected shift; the average number of traversals \(n\) of the probe light can become large enough for the cyclic term to dominate. As shown in Fig. 4(a), we compare the SNR before and after the correction for \(\gamma=0\). The opposite shift results in almost no transverse shift of the detected light at the impedance matching condition (\(r_{dm}\)): the walk-off effect completely cancels the weak interaction, and the weak-value-amplification effect disappears. In this situation, the relative noise diverges and the corresponding SNR is 0. However, this case does not occur in practice because of the losses present in an actual system (often between 0.1 and 0.3), for which the opposite shift is very small. For example, as shown in Fig. 3(b) and Figs. 4(b), (c) and (d), for \(\gamma=0.2\) and \(r=0.9\) the SNR decreases by less than 0.5%, which hardly affects the considerable SNR improvement of the dual-recycling model.
_Loss analysis._ So far, we have only provided the basic calculations for an 'unstable' dual-recycling cavity. In general, the beam is a diffracting Gaussian beam with a beam waist \(w_{0}\) rather than the parallel beam treated above. Directly using flat mirrors for the cavity would excite multiple transverse modes, leading to mode loss. Therefore, we need to ensure that the waist size and waist position of the incident laser match those of the resonant cavity itself, forming a stable self-reproducing mode. The partially transmitting mirrors should be curved mirrors, or flat mirrors combined with matched lenses, instead of flat mirrors alone. In this paper, we choose the combination of flat mirrors and lenses, shown in Fig. 1(b), for a stable cavity configuration. Similar to the previous power-recycling schemes [16], a Dove prism inside the Sagnac interferometer provides a transverse parity flip to correct the momentum kick, and we instead put the momentum kick on the beam splitter for a sensitive response. The lenses \(L_{1}\), \(L_{2}\) and \(L_{3}\) with matched focal lengths are set at proper positions to ensure that the waist position and self-reproduction of the Gaussian beam are at \(M_{s}\) and \(M_{p}\), thus completing the mode matching [19].
In actual experiments, even if the experimental parameters have been set as required, the length of the resonant cavity is unstable and time-dependent due to optical platform jitter, temperature, pressure and so on. In Ref. [19], Wang et al. proposed using the Pound-Drever-Hall (PDH) technique for power-recycling cavity locking: an electro-optic modulator modulates the incident laser into one signal carrier and two sidebands of equal size and opposite phase. Exploiting the influence of the cavity length on the sideband symmetry, an error signal reflecting the change of the cavity length is generated and fed into a servo controller; the feedback currents then act on a piezoelectric ceramic to change the length of the cavity, ensuring stable cavity locking. This PDH approach also applies to our dual-recycling scheme, but with some differences. Since the cavity depends on two partially transmitting mirrors, \(M_{s}\) and \(M_{p}\), both paths from \(M_{s}\) and \(M_{p}\) to the beam splitter, denoted \(l_{s}\) and \(l_{p}\), are unstable. A possible approach is to adjust the post-selection angle and the parameters of \(M_{s}\) and \(M_{p}\) so that the light reflected back to the laser is approximately independent of \(M_{s}\), and to use this part of the light to lock the path \(l_{p}\) first; a second PDH system modulating the output light can then correct the path \(l_{s}\).
_Conclusion._ In summary, we have proposed a dual recycling scheme to further improve the precision of an interferometric weak-value-based beam deflection measurement relative to the power-recycling scheme. By adding a signal-recycling mirror to the power-recycling cavity, in principle all of the input light can similarly be collected for detection. The results show that the SNR can be greatly improved over a wider range of realistic system losses and matched mirror reflection coefficients. However, a small walk-off effect still occurs.
This dual recycling scheme can also be applied to other experimental realizations, because probe losses and weak output signals are ubiquitous problems in all types of weak value amplification experiments. However, due to the more complicated cavity configuration, the feedback requires multiple lengths to be adjusted, making the cavity locking difficult; this issue must be addressed. In addition, we have argued that although a filter can eliminate the walk-off effect, it leads to the failure of interferometric dual-recycling schemes, owing to the lack of a fixed ordering of the filtering and the weak coupling. A polarization-based scheme using a polarizing beam splitter may ensure that the light passes through the weak-interaction site in one direction, thus enabling a filter to refresh the photons. Finally, this technique can be combined with quantum resources such as entangled and squeezed light for further precision improvement in future work.
_Acknowledgments._ This work was supported by the National Natural Science Foundation of China (Grant No. 11734015).
|
2308.16908 | Quantized thermal and spin transports of dirty planar topological
superconductors | Nontrivial bulk topological invariants of quantum materials can leave their
signatures on charge, thermal and spin transports. In two dimensions, their
imprints can be experimentally measured from well-developed multiterminal Hall
bar arrangements. Here, we numerically compute the low temperature ($T$)
thermal ($\kappa_{xy}$) and zero temperature spin ($\sigma^{sp}_{xy}$) Hall
conductivities, and longitudinal thermal conductance ($G^{th}_{xx}$) of various
prominent two-dimensional fully gapped topological superconductors, belonging
to distinct Altland-Zirnbauer symmetry classes, namely $p+ip$ (class D), $d+id$
(class C) and $p \pm ip$ (class DIII) paired states, in mesoscopic six-terminal
Hall bar setups from the scattering matrix formalism using Kwant. In both clean
and weak disorder limits, the time-reversal symmetry breaking $p+ip$ and $d+id$
pairings show half-quantized and quantized $\kappa_{xy}$ [in units of
$\kappa_0=\pi^2 k^2_B T/(3h)$], respectively, while the latter one in addition
accommodates a quantized $\sigma^{sp}_{xy}$ [in units of
$\sigma^{sp}_0=\hbar/(8 \pi)$]. By contrast, the time-reversal invariant $p \pm
ip$ pairing only displays a quantized $G^{th}_{xx}$ at low $T$ up to a moderate
strength of disorder. In the strong disorder regime, all these topological
responses ($\kappa_{xy}$, $\sigma^{sp}_{xy}$, and $G^{th}_{xx}$) vanish.
Possible material platforms hosting such paired states and manifesting these
robust topological thermal and spin responses are discussed. | Sanjib Kumar Das, Bitan Roy | 2023-08-31T17:59:29Z | http://arxiv.org/abs/2308.16908v2 | # Quantized thermal and spin transports of dirty planar topological superconductors
###### Abstract
Nontrivial bulk topological invariants of quantum materials can leave their signatures on charge, thermal and spin transports. In two dimensions, their imprints can be experimentally measured from well-developed multi-terminal Hall bar arrangements. Here, we numerically compute the low temperature (\(T\)) thermal (\(\kappa_{xy}\)) and zero temperature spin (\(\sigma_{xy}^{sp}\)) Hall conductivities, and longitudinal thermal conductance (\(G_{xx}^{th}\)) of various paradigmatic two-dimensional fully gapped topological superconductors, belonging to distinct Altland-Zirnbauer symmetry classes, namely \(p+ip\) (class D), \(d+id\) (class C) and \(p\pm ip\) (class DIII) paired states, in mesoscopic six-terminal Hall bar setups from the scattering matrix formalism using Kwant. In both clean and weak disorder limits, the time-reversal symmetry breaking \(p+ip\) and \(d+id\) pairings show half-quantized and quantized \(\kappa_{xy}\) [in units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\)], respectively, while the latter one in addition accommodates a quantized \(\sigma_{xy}^{sp}\) [in units of \(\sigma_{0}^{sp}=\hbar/(8\pi)\)]. By contrast, the time-reversal invariant \(p\pm ip\) pairing only displays a quantized \(G_{xx}^{th}\) at low \(T\) up to a moderate strength of disorder. In the strong disorder regime, all these topological responses (\(\kappa_{xy}\), \(\sigma_{xy}^{sp}\) and \(G_{xx}^{th}\)) vanish. Possible material platforms hosting such paired states and manifesting these robust topological thermal and spin responses are highlighted.
## I Introduction
Classification of quantum materials according to the geometry and topology of the underlying fermionic wavefunctions, when combined with three non-spatial symmetries, gives rise to the ten-fold periodic table of topological phases of matter [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. The participating non-spatial symmetry operations are (a) the time-reversal symmetry (TRS), (b) the anti-unitary particle-hole symmetry (PHS), and (c) the unitary particle-hole or chiral or sublattice symmetry (SLS). Among several fascinating features of this topological periodic table of quantum matter, such as the Bott periodicity [10] and the dimensional reduction [15] to name a few, a remarkable one is the following: out of the ten possible Altland-Zirnbauer (AZ) symmetry classes, only five are accompanied by non-trivial bulk topological invariants in every dimension. These mathematical quantities (the bulk topological invariants) can nonetheless leave their fingerprints on experimentally measurable transport quantities and, importantly, are expected to be robust against symmetry-preserving weak perturbations, such as random impurities. Furthermore, it turns out that such a classification scheme, although tailored for noninteracting fermionic systems, is equally applicable to strongly coupled phases of matter, such as Kondo insulators [16; 17; 18] and superconductors [19; 20; 21; 22], albeit in terms of emergent weakly correlated Hartree-Fock quasiparticles. Specifically for superconductors, the band topology is computed for the weakly interacting emergent neutral Bogoliubov-de Gennes (BdG) quasiparticles inside the paired state.
A one-to-one correspondence between the bulk topological invariant and a quantized transport quantity is especially fascinating in two spatial dimensions, where it can be directly measured experimentally in multi-terminal Hall bar arrangements. In \(d=2\), the five topological AZ symmetry classes are A and AII, corresponding to quantum Hall and quantum spin Hall insulators, respectively, and classes D, C and DIII, each of which represents a superconductor. Their symmetry properties are summarized in Table 1. Here we exclusively focus on two-dimensional topological superconductors. The prominent examples are (a) the TRS breaking \(p+ip\) pairing among spinless or equal-spin fermions (class D), (b) the TRS breaking spin-singlet \(d+id\) pairing (class C), and (c) the TRS preserving triplet \(p\pm ip\) pairing (class DIII).
Violation of charge conservation in a superconducting ground state forbids any meaningful measurement of charge transport quantities. We therefore have to rely solely on thermal transport (always well defined due to energy conservation) and, in some cases, spin transport (when the spin rotational symmetry is maintained).
\begin{table}
\begin{tabular}{|c c c c c c|} \hline \multirow{2}{*}{System} & \multirow{2}{*}{Class} & \multicolumn{3}{c}{Symmetries} & \multirow{2}{*}{Examples} \\ \cline{3-5} & & TRS & PHS & SLS & \\ \hline \multirow{2}{*}{TIs} & A & 0 & 0 & 0 & Quantum Hall insulator \\ & AII & -1 & 0 & 0 & Quantum spin Hall insulator \\ \hline \multirow{3}{*}{TSCs} & D & 0 & +1 & 0 & \(p+ip\) superconductor \\ & C & 0 & -1 & 0 & \(d+id\) superconductor \\ & DIII & -1 & +1 & 1 & \(p\pm ip\) superconductor \\ \hline \end{tabular}
\end{table}
Table 1: Five nontrivial Altland-Zirnbauer symmetry classes in two spatial dimensions, encompassing topological insulators (TIs) and topological superconductors (TSCs) [3; 9; 12]. Symmetry transformations of the corresponding effective single-particle Hamiltonian under the time-reversal symmetry (TRS), particle-hole symmetry (PHS) and sublattice symmetry (SLS). Here, 0 (1) implies the absence (presence) of a specific symmetry, while \(\pm\) indicates whether the corresponding symmetry operator squares to \(\pm 1\). In the last column, we show one representative physical system from each Altland-Zirnbauer class.
Although the (half-)quantized thermal and spin transport quantities of some of the aforementioned topological paired states are well appreciated in the literature from the field-theoretic approach and the Kubo formalism [20; 23], here we compute these quantities in finite-size mesoscopic six-terminal Hall bar arrangements within the scattering matrix formalism using Kwant [24]. Our key findings are summarized below.
Two-dimensional \(p+ip\) and \(d+id\) paired states respectively support half-quantized [Fig. 1] and quantized [Fig. 2] thermal Hall conductivity \(\kappa_{xy}\) [in units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\)] at low temperature (\(T\)), intimately tied to the first Chern number of the associated effective single-particle BdG Hamiltonian. Here, \(h\) (\(k_{B}\)) is the Planck (Boltzmann) constant and \(T\) is the system temperature. The \(d+id\) paired state in addition features a quantized spin Hall conductivity \(\sigma_{xy}^{sp}\) [Fig. 3] in units of \(\sigma_{0}^{sp}=\hbar/(8\pi)\), where \(\hbar=h/(2\pi)\). Finally, we show that the topological invariant of a \(p\pm ip\) superconductor can only be revealed from the quantized (in units of \(\kappa_{0}\)) longitudinal thermal conductance \(G_{xx}^{th}\) [Fig. 4], as this paired state has a net zero first Chern number. All these (half-)quantized thermal and spin topological responses of planar topological superconductors are shown to be robust (due to a finite bulk gap) in the presence of weak random charge impurities, the dominant source of elastic scattering in any real material. Only in the strong disorder regime, when the disorder strength becomes comparable to or larger than the bulk topological gap, do \(\kappa_{xy}\), \(\sigma_{xy}^{sp}\) and \(G_{xx}^{th}\to 0\). See panels (d) and (e) of Figs. 1, 2 and 4, and panels (c) and (d) of Fig. 3.
### Organization
We now outline the organization of the rest of the paper. Sec. II is devoted to the discussion of the TRS breaking \(p+ip\) superconductor (class D) and its hallmark half-quantized thermal Hall conductivity in clean and dirty systems. Finite temperature thermal and zero temperature spin Hall conductivities of a class C spin-singlet \(d+id\) paired state (both clean and disordered) are discussed in Sec. III. Topological responses of the class DIII \(p\pm ip\) paired state in terms of the quantized longitudinal thermal conductance, in both clean and dirty systems, are presented in Sec. IV. Concluding remarks and the material pertinence of our study are presented in Sec. V.
## II Topological \(p+ip\) superconductor
We embark on the journey with a two-dimensional class D topological system, namely, the TRS breaking \(p+ip\) superconductor [20]. The effective single-particle BdG Hamiltonian for such a system is described by \(\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k})=\mathbf{\tau}\cdot\mathbf{d}(\mathbf{k})\), where [25]
\[\mathbf{d}(\mathbf{k})=\bigg{(}t\sin(k_{x}a),t\sin(k_{y}a),m_{0}-t_{0}\sum_{j=x,y}\cos (k_{j}a)\bigg{)}. \tag{1}\]
The two-dimensional Pauli matrices \(\mathbf{\tau}\) operate on the Nambu indices. Throughout, we set \(t=t_{0}=1\) and the lattice constant \(a=1\). The term proportional to \(\tau_{3}\) gives rise to Fermi surfaces near the \(\Gamma\) and M points of the Brillouin zone (BZ), respectively for \(0<m_{0}/t_{0}<2\) and \(-2<m_{0}/t_{0}<0\). The terms proportional to \(\tau_{1}\) and \(\tau_{2}\) capture the topological superconductivity in the system.
The nature of the superconducting pairing is determined by the relative orbital angular momentum (\(\ell\)) of the electrons forming the Cooper pairs: the pairing is spin-singlet for even \(\ell\) and spin-triplet for odd \(\ell\). For a system of spinless or spin-polarized fermions, the Pauli exclusion principle permits only odd-\(\ell\) paired states (odd-parity pairing). Here, in Eq. (1), we consider its simplest example, the TRS breaking \(p+ip\) pairing. From the phase diagram of the model, we notice that the topological superconductivity develops in the regime \(-2<m_{0}/t_{0}<2\), in the presence of an underlying Fermi surface. See Fig. 1(a). Outside this domain, the system describes a trivial thermal insulator. Within the topological regime, the system supports chiral Majorana edge modes propagating at the boundaries of the system. They manifest themselves in the transport calculations, which we discuss shortly. Since the formation of Cooper pairs violates charge conservation, we rely on the energy conservation principle and compute the thermal Hall conductivity (THC). The THC for this model shows robust half-quantization, \(\kappa_{xy}=\pm 0.5\) [in units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\)], when the system falls inside the topological phase, whereas \(\kappa_{xy}=0\) otherwise.
We note that the model Hamiltonian for the TRS breaking \(p+ip\) superconductor, \(\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k})\), possesses an anti-unitary PHS, generated by \(\Xi=\tau_{x}\mathcal{K}\), such that \(\{\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k}),\Xi\}=0\), where \(\mathcal{K}\) is the complex conjugation and \(\Xi^{2}=+1\). Additionally, we observe that all three Pauli matrices appear in \(\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k})\). Hence, there exists no unitary operator that anticommutes with \(\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k})\) (thus no SLS), thereby justifying the class D nature of the system within the AZ classification scheme [3; 9; 10; 12; 14]. Therefore, the THC response of this system tracks the value of the first Chern number (\(C\)), which is defined within the first BZ as
\[C=\int_{\rm BZ}\frac{d^{2}\mathbf{k}}{4\pi}\ \left[\partial_{k_{x}}\hat{\mathbf{d}}( \mathbf{k})\times\partial_{k_{y}}\hat{\mathbf{d}}(\mathbf{k})\right]\cdot\hat{\mathbf{d}}(\mathbf{k}), \tag{2}\]
where \(\hat{\mathbf{d}}(\mathbf{k})=\mathbf{d}(\mathbf{k})/|\mathbf{d}(\mathbf{k})|\). Typically, for a TRS breaking topological insulator, the first Chern number is related to the electrical Hall conductivity by the Kubo formula \(\sigma_{xy}=C\frac{e^{2}}{h}\) (with \(C\in\mathbb{Z}\) in two dimensions) [26]. Nonetheless, by taking into account the Nambu doubling, which results in a factor of \(1/2\) in the BdG Hamiltonian
\[H_{\rm BdG}^{p+ip}=\frac{1}{2}\sum_{\mathbf{k}}\left(C_{\mathbf{k}}^{\dagger}\ C_{-\mathbf{k}} \right)\mathcal{H}_{\rm BdG}^{p+ip}(\mathbf{k})\begin{pmatrix}C_{\mathbf{k}}\\ C_{-\mathbf{k}}^{\dagger}\end{pmatrix}, \tag{3}\]
the first Chern number can also be related to the half-quantized THC for a \(p+ip\) topological superconductor, which we discuss next.
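For concreteness, Eq. (2) can be evaluated by brute force on a discretized BZ grid. The Python sketch below is our own illustration (the grid size and sample \(m_{0}\) values are arbitrary choices); flipping the sign of the second component of \(\mathbf{d}(\mathbf{k})\) reverses the winding, a fact relevant for the \(p\pm ip\) pairing of Sec. IV.

```python
import numpy as np

def chern_number(m0, t=1.0, t0=1.0, N=400):
    """Direct discretization of Eq. (2) for the d-vector of Eq. (1)."""
    k = np.linspace(-np.pi, np.pi, N, endpoint=False)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    d = np.stack([t*np.sin(kx), t*np.sin(ky),
                  m0 - t0*(np.cos(kx) + np.cos(ky))], axis=-1)
    dhat = d / np.linalg.norm(d, axis=-1, keepdims=True)
    dk = k[1] - k[0]
    # periodic central differences across the Brillouin zone
    ddx = (np.roll(dhat, -1, axis=0) - np.roll(dhat, 1, axis=0)) / (2*dk)
    ddy = (np.roll(dhat, -1, axis=1) - np.roll(dhat, 1, axis=1)) / (2*dk)
    berry = np.einsum("xyi,xyi->xy", np.cross(ddx, ddy), dhat)
    return berry.sum() * dk**2 / (4*np.pi)

for m0 in (-3.0, -1.0, 1.0, 3.0):
    print(m0, round(float(chern_number(m0))))   # |C| = 1 for |m0/t0| < 2, else 0
```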
### Thermal Hall response: Clean \(p+ip\) superconductor
A superconducting system hosting Cooper pairs does not adhere to the principle of charge conservation, which in turn implies that the electrical Hall conductivity (\(\sigma_{xy}\)) is a moot quantity. However, we can resort to the energy conservation principle and compute the THC, which is a meaningful topological response. We consider a six-terminal Hall bar geometry for the calculation of the THC, as shown in Fig. 1(b). Six leads are attached to the rectangular scattering region, maintained at a fixed temperature \(T\). A longitudinal thermal current (\(I_{th}\)) then traverses the system when a temperature gradient is applied between lead 1 (at a temperature \(T_{1}=-\Delta T/2\)) and lead 4 (at a temperature \(T_{4}=\Delta T/2\)), generating transverse temperatures in the perpendicular leads (namely, lead 2, lead 3, lead 5 and lead 6), which serve as the temperature probes. With this setup, the current-temperature relation reads \(\mathbf{I}_{th}=\mathbf{A}\mathbf{T}\), where \(\mathbf{I}_{th}^{\top}=(I_{th},0,0,-I_{th},0,0)\) and \(\mathbf{T}^{\top}=(-\Delta T/2,T_{2},T_{3},\Delta T/2,T_{5},T_{6})\). The matrix elements of \(\mathbf{A}\) are calculated from [27; 28; 29]
\[A_{ij}=\int_{0}^{\infty}\frac{E^{2}}{T}\left(-\frac{\partial f(E,T)}{\partial E }\right)\left[\delta_{ij}\mu_{j}-\text{Tr}(t_{ij}^{\dagger}t_{ij})\right]dE. \tag{4}\]
Here, \(\mu_{j}\) represents the number of propagating channels in the \(j\)th lead, \(f(E,T)=1/(1+\exp\left[E/(k_{B}T)\right])\) denotes the Fermi-Dirac distribution function, \(E\) is the energy, \(t_{ij}\) is the transmission block of the scattering matrix between the lead \(i\) and lead \(j\), and the trace (Tr) is performed over all the transmission channels.
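As a small consistency check on Eq. (4): for an energy-independent transmission, the energy integral reduces to \(\pi^{2}k_{B}^{2}T/6\) per unit transmission, the scale that ultimately ties \(\mathbf{A}\) to the thermal conductance quantum \(\kappa_{0}\). A minimal numerical verification, assuming \(k_{B}=1\) as in the paper:

```python
import numpy as np
from scipy.integrate import quad

kB, T = 1.0, 1.0
# -df/dE for the Fermi-Dirac function f(E, T) entering Eq. (4)
mdf = lambda E: np.exp(E/(kB*T)) / (kB*T*(1.0 + np.exp(E/(kB*T)))**2)
val, _ = quad(lambda E: (E**2/T)*mdf(E), 0.0, 50.0)
print(val, np.pi**2 * kB**2 * T / 6.0)   # both ~1.6449
```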
Once we obtain the matrix \(\mathbf{A}\) using Kwant [24], the transverse thermal Hall resistance can be computed as \(R_{xy}^{th}=(T_{2}+T_{3}-T_{5}-T_{6})/(2I_{th})\), and its inverse determines the THC
\[\kappa_{xy}=\pi^{2}k_{B}^{2}T/(3h)\left(R_{xy}^{th}\right)^{-1}. \tag{5}\]
For all our numerical THC calculations, we set \(k_{B}=h=1\). From Fig. 1(c), we notice that the THC remains half-quantized at \(\kappa_{xy}=C/2\) in the topological regime, with \(C=\pm 1\), and vanishes otherwise.
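The linear algebra behind Eqs. (4) and (5) can be sketched as follows. This is our own minimal illustration: it assumes the \(6\times 6\) matrix \(\mathbf{A}\) has already been assembled (e.g., from Kwant transmissions), and the toy \(\mathbf{A}\) at the end, mimicking a single ideal chiral channel of the Nambu-doubled problem, is an assumption used only to exercise the solver; it reproduces the half-quantized value. The same algebra with magnetizations in place of temperatures yields \(R_{xy}^{sp}\) in Sec. III.

```python
import numpy as np

def hall_inverse_resistance(A, dT=1e-3):
    """Solve I_th = A T with T1 = -dT/2 and T4 = +dT/2 fixed, floating
    transverse probes, and current pattern I = (I_th,0,0,-I_th,0,0)^T;
    returns (R_xy^th)^(-1), to be multiplied by kappa_0 as in Eq. (5)."""
    c = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0])   # current pattern
    rows = [0, 1, 2, 4, 5]     # one equation is redundant by conservation
    cols = [1, 2, 4, 5]        # unknown probe temperatures T2, T3, T5, T6
    M = np.zeros((5, 5)); b = np.zeros(5)
    for m, i in enumerate(rows):
        M[m, 0] = -c[i]        # unknowns x = (I_th, T2, T3, T5, T6)
        M[m, 1:] = A[i, cols]
        b[m] = -A[i, 0]*(-dT/2) - A[i, 3]*(dT/2)
    I_th, T2, T3, T5, T6 = np.linalg.solve(M, b)
    return 2.0*I_th / (T2 + T3 - T5 - T6)   # 1 / R_xy^th

# toy A: one ideal chiral channel, Tr[t^dag t]_{i,i-1} = 1, so that
# A_ij = g*(delta_ij - delta_{j,i-1}) with g = pi^2 T/6 (k_B = h = 1)
T = 0.01
g = np.pi**2 * T / 6.0
A = g*(np.eye(6) - np.roll(np.eye(6), -1, axis=1))
print(hall_inverse_resistance(A) / (np.pi**2 * T / 3.0))   # ~0.5 = kappa_0/2
```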
### Thermal Hall response: Dirty \(p+ip\) pairing
Topology of insulating systems (electrical and thermal) is robust against weak perturbations that preserve the
Figure 1: (a) The phase diagram for the \(p+ip\) superconductor [Eq. (1)] as a function of \(m_{0}/t_{0}\) in terms of the first Chern number \(C\) [Eq. (2)]. (b) The six-terminal thermal Hall transport setup for the \(p+ip\) superconductor, also showing its chiral Majorana edge mode (red arrow). A longitudinal thermal current (\(I_{th}\)) flows between lead 1 and lead 4. The perpendicular leads serve as the thermal Hall probes which allow us to calculate the transverse thermal Hall conductivity (THC). (c) The THC (\(\kappa_{xy}\)), computed for a rectangular system of length \(L=200\) and width \(D=100\) at the temperature \(T=0.01\) as a function of \(m_{0}/t_{0}\), is half-quantized to \(\kappa_{xy}=C/2\) [in the units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\)] in the topological regime, whereas \(\kappa_{xy}=0\) in the trivial regime where \(C=0\). The dotted black lines are guides to the eye. (d) The disorder averaged THC (\(\langle\kappa_{xy}\rangle\)) as a function of the disorder strength (\(W\)). Each data point (in red), corresponding to a particular value of \(W\), is averaged over 70 independent disorder realizations. (e) The disorder averaged THC as a function of the number of independent disorder realizations (\(n\)) for various strengths of disorder (mentioned inside the plot), showing that they saturate after averaging over typically 70 independent disorder realizations.
necessary symmetries of the system, or at least when the symmetry is preserved on average [30; 31; 32; 33; 34; 35; 36; 37; 38]. On the other hand, disorder is unavoidably present in real materials, and can in principle be detrimental to the topological features of quantum materials (such as \(\kappa_{xy}\)). Therefore, we investigate the robustness of the half-quantized \(\kappa_{xy}\) of the \(p+ip\) paired state against symmetry preserving disorder. We add random charge impurities to each site of the two-dimensional lattice, the dominant source of elastic scattering in any real material, which in the Nambu-doubled basis enter as the on-site _mass_ disorder \(V(\mathbf{r})\tau_{3}\) for the BdG Dirac fermions featured by the \(p+ip\) pairing terms. The quantity \(V(\mathbf{r})\) is uniformly and randomly distributed within the range \([-W/2,W/2]\) for every site belonging to the rectangular scattering region, and \(W\) is the disorder strength. We compute the disorder averaged THC \(\langle\kappa_{xy}\rangle\) for the class D \(p+ip\) superconductor following the prescription of Sec. II.1. The results are shown in Fig. 1(d).
The convergence of \(\langle\kappa_{xy}\rangle\) is ensured after averaging over a large number of independent disorder configurations \(n\sim 70\) (typically), around which \(\langle\kappa_{xy}\rangle\) becomes insensitive to \(n\) within the numerical accuracy. See Fig. 1(e). The topological half-quantization of \(\langle\kappa_{xy}\rangle\) persists in the weak disorder regime (\(W\lesssim 4\)), and decays to \(\langle\kappa_{xy}\rangle=0\) for large disorder strength (\(W\gtrsim 10\)), while acquiring non-universal and non-quantized values for intermediate strength of disorder. See Fig. 1(d). The robustness of the THC in the weak and its disappearance in the strong disorder regimes can be qualitatively understood in the following way. We note that the topological response quantity \(\langle\kappa_{xy}\rangle\) is protected by the bulk gap of the BdG fermions. Despite disorder tending to diminish the bulk gap, it continues to protect half-quantized \(\langle\kappa_{xy}\rangle\) for weak disorder. However, in the presence of strong disorder (when \(W\gtrsim\) bulk gap), the system becomes a trivial thermal insulator, resulting in \(\langle\kappa_{xy}\rangle=0\).
## III Topological \(d+id\) superconductor
Next, we investigate the thermal and spin responses of a spin-singlet \(d+id\) topological superconductor [20; 39]. Notice that, similar to the \(p+ip\) superconductor, the \(d+id\) pairing also breaks the TRS and lacks the SLS, but retains the anti-unitary PHS (\(\Xi\)). However, for the \(d+id\) paired state \(\Xi^{2}=-1\). The normal state Hamiltonian for spinful fermions, required for their condensation in a spin-singlet channel, is described by
\[\mathcal{H}_{\text{nor}}(\mathbf{k})=[m_{0}-t_{0}\big{(}\cos(k_{x}a)+\cos(k_{y}a) \big{)}]\sigma_{0}\equiv d_{3}(\mathbf{k})\sigma_{0}, \tag{6}\]
where the Pauli matrices \(\mathbf{\sigma}\) encode the spin degrees of freedom. The corresponding two-component spinor reads \(\Psi^{\top}_{\mathbf{k}}=(c_{\mathbf{k}\uparrow},c_{\mathbf{k}\downarrow})\), where \(c_{\mathbf{k}\sigma}\) is the fermion annihilation operator with momentum \(\mathbf{k}\) and spin projection \(\sigma=\uparrow,\downarrow\). Incorporation of the superconductivity into this model amounts to the Nambu doubling of the theory, for which we now define a regular Nambu spinor as \(\Psi^{\top}_{\text{Nam}}=(\Psi_{\mathbf{k}},\Psi^{*}_{-\mathbf{k}})\). In this basis, the normal state Hamiltonian takes the following form
\[\mathcal{H}^{\text{Nam}}_{\text{nor}}(\mathbf{k})=d_{3}(\mathbf{k})\tau_{3}\sigma_{0}, \tag{7}\]
where the newly introduced Pauli matrices \(\mathbf{\tau}\) operate on the Nambu or particle-hole index.
In such a system, in this section, we focus on the even parity pairings with the pairing amplitude \(\Delta(\mathbf{k})\), such that \(\Delta(-\mathbf{k})=\Delta(\mathbf{k})\). Two prominent choices which satisfy the even parity criterion are the \(s\)-wave pairing with \(\Delta(\mathbf{k})=\Delta_{0}\) (constant) and \(\ell=0\), and the \(d\)-wave pairing with \(\Delta(\mathbf{k})=\Delta_{0}[\cos(k_{x}a)-\cos(k_{y}a)]\equiv d_{1}(\mathbf{k})\) and \(\Delta_{0}\sin(k_{x}a)\sin(k_{y}a)\equiv d_{2}(\mathbf{k})\), corresponding to \(\ell=2\). These two terms respectively stand for the lattice regularized version of the \(d_{x^{2}-y^{2}}\) and \(d_{xy}\) pairings. Here, we only consider the \(d\)-wave pairings, as the fully gapped uniform \(s\)-wave counterpart is topologically trivial. Note that the requirement of the Pauli exclusion principle demands that the spin-singlet pairing terms must appear with the \(\sigma_{2}\) matrix in the spin space, which by virtue of \(\sigma_{2}^{\top}=-\sigma_{2}\) ensures the antisymmetric nature of the pairing wave function. The effective single-particle BdG Hamiltonian for the \(d+id\) pairing then takes a compact form
\[\mathcal{H}^{d+id}_{\text{BdG}}=d_{1}(\mathbf{k})\tau_{1}\sigma_{2}+d_{2}(\mathbf{k}) \tau_{2}\sigma_{2}+d_{3}(\mathbf{k})\tau_{3}\sigma_{0}, \tag{8}\]
with the components of the \(\mathbf{d}\)-vector already announced in this section. The effective BdG Hamiltonian can be cast in a more elegant form by defining a slightly decorated Nambu-doubled basis \(\big{(}\Psi^{\text{dec}}_{\text{Nam}}\big{)}^{\top}=(\Psi_{\mathbf{k}},\sigma_{ 2}\Psi^{*}_{-\mathbf{k}})\) by absorbing the unitary part of the time-reversal operator (\(\sigma_{2}\)) in the hole sector of the spinor. In this basis
\[\mathcal{H}^{d+id}_{\text{BdG}}=d_{1}(\mathbf{k})\tau_{1}\sigma_{0}+d_{2}(\mathbf{k}) \tau_{2}\sigma_{0}+d_{3}(\mathbf{k})\tau_{3}\sigma_{0}. \tag{9}\]
The appearance of the Pauli matrix \(\sigma_{0}\) in the spin sector reflects the singlet nature of the \(d+id\) paired state. Thus, the spin degrees of freedom lead to a mere doubling of the BdG Hamiltonian, endowing \(\mathcal{H}^{d+id}_{\text{BdG}}\) with an SU(2) spin rotational symmetry, generated by \(\tau_{0}\mathbf{\sigma}\). Notice that \(\mathcal{H}^{d+id}_{\text{BdG}}\) enjoys an antiunitary PHS, generated by \(\Xi=\tau_{2}\sigma_{0}\mathcal{K}\), such that \(\{\mathcal{H}^{d+id}_{\text{BdG}},\Xi\}=0\) and \(\Xi^{2}=-1\). But there is no unitary operator that anticommutes with \(\mathcal{H}^{d+id}_{\text{BdG}}\), which is thus devoid of the sublattice or chiral symmetry. Hence, the \(d+id\) paired state belongs to the AZ class C.
### Thermal Hall responses of \(d+id\) pairing
Since after a suitable unitary rotation, each term in the effective BdG Hamiltonian \(\mathcal{H}^{d+id}_{\text{BdG}}\) is accompanied by the identity matrix \(\sigma_{0}\) in the spin sector, the first Chern number (\(C\)) associated with the \(d+id\) paired state can be directly computed from Eq. (2) in terms of the components of the appropriate \(\mathbf{d}\)-vector appearing in the \(2\times 2\)
BdG Hamiltonian involving only the \(\mathbf{\tau}\) matrices (Nambu degrees of freedom), and it is given by \(2C\). The extra factor of \(2\) arises from the mere doubling of the Hamiltonian in the spin part [see Eq. (9)]. Within the entire topological regime, namely \(-2<m_{0}/t_{0}<2\), the first Chern number associated with \(\mathcal{H}_{\text{BdG}}^{d+id}\) is thus \(2C=4\), while it is trivial (\(C=0\)) for \(|m_{0}/t_{0}|>2\). The resulting phase diagram is shown in Fig. 2(a).
The nontrivial Chern number leaves its signature on the transverse THC, which in this case reads as
\[\kappa_{xy}=C\times\frac{1}{2}\times 2\times\left(\frac{\pi^{2}k_{B}^{2}T}{3h} \right)\equiv C\kappa_{0}. \tag{10}\]
The factor of \(1/2\) compensates the Nambu doubling, and the factor of \(2\) arises due to the spin degeneracy. The THC of a clean \(d+id\) paired state can readily be computed in a six-terminal setup, as shown in Fig. 2(b), employing the same method we previously discussed for the \(p+ip\) superconductor in Sec. II.1. The results are shown in Fig. 2(c). Indeed, we find \(\kappa_{xy}=2\) (in units of \(\kappa_{0}\)) in the topological regime, otherwise \(\kappa_{xy}=0\).
We also test the robustness of the quantized nature of \(\kappa_{xy}\) in the \(d+id\) state against a symmetry preserving disorder. It is important to emphasize that the computation of \(\kappa_{xy}\) is performed separately on the individual \(2\times 2\) blocks involving the \(\mathbf{\tau}\) matrices only. We add an on-site disorder (random charge scatterer) term \(V(\mathbf{r})\tau_{3}\), which is drawn from a uniform box distribution in the range \([-W/2,W/2]\) for every site belonging to the real space lattice, and \(W\) is the disorder strength. Then we follow the identical steps highlighted in Sec. II.2 to compute the disorder averaged THC \(\langle\kappa_{xy}\rangle\). Once again we find that \(\langle\kappa_{xy}\rangle\) retains a quantized value of \(2\) (in units of \(\kappa_{0}\)) for small to moderate disorder strength (\(W\lesssim 5\)) and drops to \(\langle\kappa_{xy}\rangle=0\) eventually for large disorder values (\(W\gtrsim 8\)). See Fig. 2(d). Since \(\langle\kappa_{xy}\rangle\) falls rather rapidly in the intermediate disorder range, we consider a finer mesh in disorder values in this regime. Typically, \(\langle\kappa_{xy}\rangle\) becomes insensitive to the number of independent disorder realizations for \(n\sim 70-150\) (depending on \(W\)), as shown in Fig. 2(e).
### \(d+id\) pairing: Spin Hall Conductivity
So far, the spin degrees of freedom have only doubled the amplitude of \(\kappa_{xy}\) for the \(d+id\) pairing. The broken TRS and the spin-singlet nature of this paired state also manifest
Figure 2: (a) The phase diagram for the \(d+id\) superconductor [Eq. (9)] as a function of \(m_{0}/t_{0}\) in terms of the first Chern number \(2C\) [Eq. (2)] being equal to \(4\) in the topological regime (\(|m_{0}/t_{0}|<2\)) and \(0\) otherwise. The factor of \(2\) comes from the spin degeneracy of the effective BdG Hamiltonian [Eq. (9)]. (b) The six-terminal thermal Hall transport setup for the \(d+id\) superconductor, also showing unidirectional chiral Majorana edge modes for opposite spin projections. A longitudinal thermal current (\(I_{th}\)) flows between lead \(1\) and lead \(4\). The perpendicular leads serve as the thermal Hall probes which allow us to calculate the transverse thermal Hall conductivity (THC). (c) The THC (\(\kappa_{xy}\)) as a function of \(m_{0}/t_{0}\) is quantized to \(2\) (in units of \(\kappa_{0}\)) in the topological regime as two unidirectional edge spin channels contribute to the THC [see panel (b)], whereas \(\kappa_{xy}=0\) in the trivial regime. Here, we compute the THC with a rectangular scattering region of length \(L=200\) and width \(D=100\) at temperature \(T=0.01\) for \(t_{0}=\Delta_{0}=1\). The dotted black lines are guides to the eye. (d) The disorder averaged THC \(\langle\kappa_{xy}\rangle\) as a function of the disorder strength (\(W\)). For each \(W\) (red points), \(\kappa_{xy}\) is averaged over \(70-150\) independent disorder configurations (depending on \(W\)). (e) The variation of \(\langle\kappa_{xy}\rangle\) with the number of independent disorder realizations (\(n\)) for various strengths of \(W\), ensuring its numerical convergence for large \(n\).
a quantized spin Hall conductivity, realized at the cost of the spin SU(2) symmetry by applying a _weak_ external magnetic field. Then the external magnetic field (\(\mathbf{H}\)) couples only to the spin degrees of freedom (there is no orbital coupling due to the Meissner effect) via the Zeeman term, which reads \((\hbar/2)\mathbf{H}\cdot\mathbf{\sigma}\), where \(\hbar/2=h/(4\pi)\) plays the role of the magnetic charge (analogous to \(e\) being the electrical charge). In the decorated Nambu basis (\(\Psi^{\rm dec}_{\rm Nam}\)), the Zeeman term reads \((\hbar/2)\tau_{0}\mathbf{H}\cdot\mathbf{\sigma}\). Without any loss of generality, we choose the spin quantization \(z\) axis along the external magnetic field, yielding \(\mathbf{H}=(0,0,H)\), so that the Zeeman term takes the simpler form \((\hbar/2)H\tau_{0}\sigma_{3}\). Therefore, at zero temperature the spin Hall conductivity of the \(d+id\) paired state is given by
\[\sigma^{sp}_{xy}=\frac{(\hbar/2)^{2}}{h}\times\frac{1}{2}\times 2\times C=\frac{\hbar}{8\pi}C, \tag{11}\]
where the factor of \(1/2\) compensates the Nambu doubling and the factor of \(2\) accounts for the two spin projections, \(\sigma^{sp}_{0}=\hbar/(8\pi)\) is the quantum of the spin Hall conductance, and \(C\) is the first Chern number of the \(d+id\) paired state, computed from the \(2\times 2\) BdG Hamiltonian involving only the \(\mathbf{\tau}\) matrices [20; 23; 39]. Throughout, we compute the spin Hall conductivity in units of \(\hbar/(8\pi)\). Therefore, within the topological regime, namely \(|m_{0}/t_{0}|<2\), we expect \(\sigma^{sp}_{xy}=2\), while \(\sigma^{sp}_{xy}=0\) for \(|m_{0}/t_{0}|>2\), which we next confirm from a six-terminal setup.
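The unit bookkeeping in Eq. (11) can also be checked symbolically; a trivial sketch:

```python
from sympy import symbols, pi, simplify, Rational

h = symbols('h', positive=True)
hbar = h/(2*pi)
# (magnetic charge)^2/h, times 1/2 (Nambu) and 2 (spin projections)
lhs = (hbar/2)**2/h * Rational(1, 2) * 2
print(simplify(lhs - hbar/(8*pi)))   # 0, i.e. sigma_0^sp = hbar/(8*pi)
```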
The six-terminal transport geometry for the computation of the spin Hall conductivity (\(\sigma^{sp}_{xy}\)) is depicted in Fig. 3(a). A magnetic bias (\(-\Delta H/2,\Delta H/2\)) is applied between lead 1 and lead 4, due to which a longitudinal spin current (\(I_{sp}\)) flows across the system (the scattering region). Such a spin current generates different \(z\)-directional magnetizations (\(m_{j}\)) in the transverse leads, which can be obtained from the spin current-magnetization relation \(\mathbf{I}_{sp}=\mathbf{G}_{sp}\mathbf{M}\), where \(\mathbf{I}^{\top}_{sp}=(I_{sp},0,0,-I_{sp},0,0)\) and \(\mathbf{M}^{\top}=(-\Delta H/2,m_{2},m_{3},\Delta H/2,m_{5},m_{6})\). The spin conductance matrix \(\mathbf{G}_{sp}\) contains only the transmission blocks of the scattering matrix, which we extract using Kwant [24]. Subsequently, we compute the transverse spin Hall resistance \(R^{sp}_{xy}=(m_{2}+m_{3}-m_{5}-m_{6})/(2I_{sp})\), which leads to the spin Hall conductance \(\sigma^{sp}_{xy}=\big{(}R^{sp}_{xy}\big{)}^{-1}\) [in units of \(\hbar/(8\pi)\)]. The results are shown in Fig. 3(b), confirming \(\sigma^{sp}_{xy}=2\) and \(0\) in the topological and trivial regimes, respectively. Next, we scrutinize the robustness of the quantized \(\sigma^{sp}_{xy}\) in the presence of random charge impurities.
With the same motivation as for the earlier models, we now investigate the robustness of the quantized \(\sigma^{sp}_{xy}\) against a symmetry preserving disorder term, namely random charge impurities. The computation of the disorder averaged spin Hall conductance \(\langle\sigma^{sp}_{xy}\rangle\) now involves the on-site disorder term \(V(\mathbf{r})\tau_{3}\) for each spin projection. The quantity \(V(\mathbf{r})\) is uniformly and independently distributed in the range \([-W/2,W/2]\) for every site belonging to the lattice, and \(W\) is the disorder strength. The disorder averaged spin Hall conductance \(\langle\sigma^{sp}_{xy}\rangle\) showcases the robustness of this topological response against weak
Figure 3: (a) The six-terminal spin Hall transport setup for the \(d+id\) superconductor at zero temperature, also showing unidirectional spin degenerate chiral Majorana edge modes. A longitudinal spin current (\(I_{sp}\)) flows between lead \(1\) and lead \(4\). The perpendicular leads work as the spin Hall or the magnetization probes which allow us to calculate the transverse spin Hall conductivity (SHC). (b) The SHC (\(\sigma^{sp}_{xy}\)) as a function of \(m_{0}/t_{0}\) is quantized to \(2\) (in units of \(\sigma^{sp}_{0}=\hbar/(8\pi)\)) in the topological regime, and \(0\) in the trivial regime. Here, we compute \(\sigma^{sp}_{xy}\) in a rectangular system of length \(L=200\) and width \(D=100\), for \(t_{0}=\Delta_{0}=1\). The dotted black lines are guides to the eye. (c) The disorder averaged SHC \(\langle\sigma^{sp}_{xy}\rangle\) as a function of the disorder strength \(W\) shows the survival of its robust quantization up to a moderate disorder, while eventually decaying to \(\langle\sigma^{sp}_{xy}\rangle=0\) for large \(W\). (d) The same quantity \(\langle\sigma^{sp}_{xy}\rangle\) is plotted with the number of independent disorder realizations (\(n\)) for a few \(W\), showing that it saturates for large \(n\sim 70\), ensuring the numerical convergence.
and moderate disorder, which eventually decays to zero in the strong disorder regime. See Fig. 3(c). While computing \(\langle\sigma_{xy}^{sp}\rangle\), we typically average over 70 independent disorder realizations for each value of \(W\). As the number of disorder realizations (\(n\)) is increased, the values of \(\langle\sigma_{xy}^{sp}\rangle\) get saturated around \(n\sim 70\). See Fig. 3(d).
## IV Topological \(p\pm ip\) pairing
Finally, we turn to the situation in which a spin degenerate Fermi surface [Eq. (6)], discussed in the previous section, becomes susceptible to the nucleation of a TRS preserving spin-triplet \(p\pm ip\) paired state. As we show shortly, this system, besides the TRS, also preserves the anti-unitary particle-hole and the unitary sublattice or chiral symmetry, and belongs to class DIII in the AZ classification scheme. The effective single-particle BdG Hamiltonian for the \(p\pm ip\) paired state takes the form \(\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})=\mathbf{\Gamma}\cdot\mathbf{d}(\mathbf{k})\), with the \(\mathbf{d}\)-vector already given in Eq. (1). The mutually anticommuting \(4\times 4\) Hermitian Dirac \(\Gamma\) matrices, involving the spin and Nambu indices, take the explicit form \(\Gamma_{1}=\sigma_{0}\otimes\tau_{1}\), \(\Gamma_{2}=\sigma_{3}\otimes\tau_{2}\), and \(\Gamma_{3}=\sigma_{0}\otimes\tau_{3}\). The Pauli matrices \(\mathbf{\sigma}\) (\(\mathbf{\tau}\)) operate on the spin (Nambu) sector.
The TRS of \(\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})\) is generated by \(\mathcal{T}=(\sigma_{2}\otimes\tau_{3})\mathcal{K}\), such that \([\mathcal{T},\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})]=0\) and \(\mathcal{T}^{2}=-1\). Its anti-unitary PHS is generated by \(\Xi=(\sigma_{0}\otimes\tau_{1})\mathcal{K}\), such that \(\{\Xi,\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})\}=0\) and \(\Xi^{2}=+1\). Finally, there are two unitary operators, namely \(\Gamma_{4}=\sigma_{1}\otimes\tau_{2}\) and \(\Gamma_{5}=\sigma_{2}\otimes\tau_{2}\), such that \(\{\Gamma_{j},\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})\}=0\) and \(\Gamma_{j}^{2}=+1\) for \(j=4\) and \(5\). They generate the SLS of \(\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})\). Hence, the \(p\pm ip\) paired state belongs to class DIII.
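These algebraic claims are easy to verify numerically. The following sketch is our own check, not part of the original computation: it builds the stated \(\Gamma\) matrices from Kronecker products of Pauli matrices and confirms the Clifford algebra together with \(\mathcal{T}^{2}=-1\) and \(\Xi^{2}=+1\) (for an antiunitary \(U\mathcal{K}\), the square is \(UU^{*}\)).

```python
import numpy as np

s0, s1 = np.eye(2), np.array([[0, 1], [1, 0]])
s2, s3 = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])
kron = np.kron

# first factor = spin (sigma), second factor = Nambu (tau)
G1, G2, G3 = kron(s0, s1), kron(s3, s2), kron(s0, s3)
G4, G5 = kron(s1, s2), kron(s2, s2)
gammas = [G1, G2, G3, G4, G5]

# five mutually anticommuting Dirac matrices, each squaring to identity
for i, A in enumerate(gammas):
    assert np.allclose(A @ A, np.eye(4))
    for B in gammas[i + 1:]:
        assert np.allclose(A @ B + B @ A, 0)

# T = (s2 x t3) K  =>  T^2 = U_T U_T^* = -1 ;  Xi = (s0 x t1) K  =>  Xi^2 = +1
U_T, U_Xi = kron(s2, s3), kron(s0, s1)
assert np.allclose(U_T @ U_T.conj(), -np.eye(4))
assert np.allclose(U_Xi @ U_Xi.conj(), np.eye(4))
print("Clifford algebra and T^2 = -1, Xi^2 = +1 verified")
```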
Physically, the model Hamiltonian \(\mathcal{H}_{\text{BdG}}^{p\pm ip}(\mathbf{k})\) can be understood as the superposition of the \(p+ip\) pairing for the spin up component and the \(p-ip\) pairing for the spin down component. As the opposite spin projections are Kramers partners of each other, the TRS of the resulting \(p\pm ip\) pairing pins its first Chern number to zero, and due to the helical nature of the edge modes, the resultant THC is also zero. Nonetheless, we can define an invariant for this state, named the spin Chern number,
\[C_{sp}=|C_{\uparrow}-C_{\downarrow}|, \tag{12}\]
Figure 4: (a) The phase diagram of the \(p\pm ip\) superconductor [Sec. IV] as a function of \(m_{0}/t_{0}\), showing the topologically trivial and nontrivial regimes in terms of the spin Chern number \(C_{sp}\) [Eq. (12)]. (b) The six-terminal thermal transport setup for the \(p\pm ip\) superconductor, showing counter-propagating helical Majorana edge modes for opposite spin projections. A longitudinal thermal current (\(I_{th}\)) flows between lead 1 and lead 4. Here, the temperatures in the pair of horizontal probes (namely between lead 2 and lead 3, or lead 5 and lead 6) allow us to calculate the longitudinal thermal conductance (\(G_{xx}^{th}\)). (c) The \(G_{xx}^{th}\) as a function of \(m_{0}/t_{0}\) is quantized to 1 (in units of \(\kappa_{0}\)) in the topological regime, whereas it yields 0 in the trivial regime. The computation of \(G_{xx}^{th}\) is performed with a rectangular scattering region of length \(L=120\) and width \(D=60\), at a temperature \(T=0.01\) and for \(t=t_{0}=1\). The dotted lines are guides to the eye. (d) The disorder averaged longitudinal thermal conductance \(\langle G_{xx}^{th}\rangle\) with varying disorder strength \(W\). For each disorder strength, we average over \(n\sim 70\) independent disorder realizations, for which \(\langle G_{xx}^{th}\rangle\) becomes independent of \(n\). See panel (e).

where \(C_{\uparrow}\) (\(C_{\downarrow}\)) is the first Chern number associated with the \(\uparrow\) (\(\downarrow\)) spin component. Notice that \(C_{sp}\) is non-trivial, with \(C_{sp}=2\) in the entire topological regime (\(|m_{0}/t_{0}|<2\)) and \(C_{sp}=0\) for \(|m_{0}/t_{0}|>2\). See Fig. 4(a). However, we cannot apply an external magnetic field to probe the nontrivial \(C_{sp}\), as it breaks the TRS, which is a conserved symmetry for class DIII. Hence, the only meaningful experimental transport response of the \(p\pm ip\) pairing is the longitudinal thermal conductance \((G_{xx}^{th})\). A six-terminal setup is employed to compute \(G_{xx}^{th}\), as shown in Fig. 4(b). The computational procedure is identical to the one described in detail in Sec. II.1. From the longitudinal thermal resistance \(R_{xx}^{th}=(T_{3}-T_{2})/I_{th}\), we compute \(G_{xx}^{th}=\left(R_{xx}^{th}\right)^{-1}\) and find that
\[G_{xx}^{th}=\frac{C_{sp}}{2}\kappa_{0}, \tag{13}\]
where the factor of \(1/2\) compensates the Nambu doubling. The results are shown in Fig. 4(c).
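The spin Chern number of Eq. (12) can also be checked independently of the transport simulation with the standard Fukui-Hatsugai-Suzuki lattice method. In the sketch below, the explicit \(\mathbf{d}\)-vector is our assumption: a generic lattice regularization chosen only to be consistent with the quoted phase boundaries \(|m_{0}/t_{0}|=2\); the paper's Eq. (1) should be used instead where it differs.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]]); s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0])

def h_up(kx, ky, m0=1.0, t0=1.0, D0=1.0):
    """Assumed lattice regularization of the spin-up p+ip block."""
    return (D0 * np.sin(kx) * s1 + D0 * np.sin(ky) * s2
            + (m0 - t0 * (np.cos(kx) + np.cos(ky))) * s3)

def h_dn(kx, ky, **kw):
    return h_up(kx, -ky, **kw)     # opposite chirality: the p-ip block

def fhs_chern(h, N=60):
    """Lower-band first Chern number via the Fukui-Hatsugai-Suzuki
    lattice field-strength method on an N x N Brillouin-zone grid."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky))[1][:, 0]
    Ux = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, axis=0))
    Uy = np.einsum('ijk,ijk->ij', u.conj(), np.roll(u, -1, axis=1))
    F = np.angle(Ux * np.roll(Uy, -1, axis=0)
                 / (np.roll(Ux, -1, axis=1) * Uy))   # plaquette field strength
    return int(np.rint(F.sum() / (2 * np.pi)))

C_up, C_dn = fhs_chern(h_up), fhs_chern(h_dn)
print(C_up, C_dn, abs(C_up - C_dn))   # C_sp = |C_up - C_dn| = 2 for m0/t0 = 1
```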
To examine the robustness of the quantized longitudinal thermal conductance against random charge impurities, we compute its disorder averaged values \(\langle G_{xx}^{th}\rangle\) by adding a term \(V(\mathbf{r})\Gamma_{3}\) to each site of the scattering region. Here as well \(V(\mathbf{r})\) is uniformly and randomly distributed in the range \([-W/2,W/2]\) on every site of the scattering region, and \(W\) denotes the disorder strength. We observe that \(\langle G_{xx}^{th}\rangle\) retains its quantized value (in units of \(\kappa_{0}\)) up to a moderate disorder strength, as shown in Fig. 4(d), beyond which it acquires nonuniversal and nonquantized values before vanishing at sufficiently strong disorder. The values of \(\langle G_{xx}^{th}\rangle\) typically saturate after averaging over 70 independent disorder realizations, irrespective of its strength. See Fig. 4(e).
## V Summary & Discussions
To summarize, here we numerically compute the (half-)quantized thermal (\(\kappa_{xy}\)) and spin (\(\sigma_{xy}^{sp}\)) Hall responses, as well as the quantized longitudinal thermal conductance (\(G_{xx}^{th}\)), of prominent gapped two-dimensional topological paired states from different AZ symmetry classes [Table 1] by employing the scattering matrix formalism, using the Kwant software package on finite, lattice-regularized mesoscopic systems. The transverse thermal Hall and longitudinal thermal conductances are computed at sufficiently low temperatures (\(T=0.01\)), and their (half-)quantized values are quoted in units of \(\kappa_{0}=\pi^{2}k_{B}^{2}T/(3h)\). On the other hand, the zero temperature spin Hall conductance is reported in units of \(\sigma_{0}^{sp}=\hbar/(8\pi)\). In clean systems, these quantities are shown to be tied to the bulk topological invariants of the corresponding effective single-particle BdG Hamiltonian, and they continue to feature robustness in the presence of weak random charge impurities, manifesting the stability of the bulk topology in the weak disorder regime. In particular, weakly disordered class D \(p+ip\) and class DIII \(p\pm ip\) paired states respectively display \(\langle\kappa_{xy}\rangle=\pm\kappa_{0}/2\) [Fig. 1] and \(\langle G_{xx}^{th}\rangle=\kappa_{0}\) [Fig. 4]. Finally, the class C spin-singlet \(d+id\) state supports both quantized \(\langle\kappa_{xy}\rangle\) [Fig. 2] and \(\langle\sigma_{xy}^{sp}\rangle\) [Fig. 3], and their ratio defines the modified Lorentz number
\[L_{m}=\frac{\lim_{T\to 0}(\langle\kappa_{xy}\rangle/\kappa_{0})}{\langle\sigma_{xy}^{sp}\rangle/[\hbar/(8\pi)]}=1, \tag{14}\]
which remains pinned at this universal value of _unity_ in the weak disorder regime. By contrast, in the strong disorder regime all these topological responses disappear, indicating the onset of trivial thermal insulators. Altogether, here we develop concrete numerical methodologies for the computation of various quantized thermal and spin responses in clean and dirty topological superconductors, encompassing all three allowed AZ symmetry classes in two dimensions. Recent success in the experimental measurements of the thermal Hall conductivity in spin liquids, and in integer and fractional quantum Hall states in a six-terminal Hall bar geometry [40; 41; 42; 43; 44], should make this analysis pertinent to real materials for which the effective model Hamiltonian can be constructed from lattice-based symmetry constraints. To any such available model Hamiltonian, our methodology can be directly applied. In the future, it will be worthwhile to extend the notion of thermal and spin responses to _noncrystalline_ topological superconductors on fractals, amorphous materials and quasicrystals [45], for example.
We note that the disorder averaged low temperature thermal Hall conductivity (\(\kappa_{xy}\)) for the \(p+ip\) and \(d+id\) paired states, and the longitudinal thermal conductance (\(G_{xx}^{th}\)) of the \(p\pm ip\) superconductor, initially increase from their (half-)quantized values at moderate disorder strength before they decay to zero. Their quoted disorder averaged values are saturated with respect to the number of independent disorder realizations (\(n\)). See panels (d) and (e) of Figs. 1, 2 and 4. This feature is, however, absent for the zero temperature spin Hall conductivity of the \(d+id\) paired state. See Fig. 3(c) and (d). The microscopic origin of such a peculiar observation is presently not clear. We suspect that at moderate disorder the system fragments into multiple _thermally excited_ islands or droplets of topological and trivial paired states, with each interface between them within the scattering region giving rise to Majorana edge modes with weak hybridization among them, yielding enhanced, but non-quantized, values of \(\kappa_{xy}\) and \(G_{xx}^{th}\). In the strong disorder limit, the number of such islands possibly increases, causing a strong hybridization among the large number of Majorana edge modes living within the system, leading to vanishing \(\kappa_{xy}\) and \(G_{xx}^{th}\). Such a scenario can be tested by computing the local topological marker of disordered topological superconductors, which can in principle be extracted for any AZ symmetry class [46; 47]. We expect that the local topological marker can reveal such a droplet structure in disordered topological superconductors at finite temperature (if it exists at all). We leave this topic for a future investigation.
One of the practical challenges in the field of planar topological superconductivity involves the identification of real quantum materials that can harbor such an exotic quantum phase of matter at low temperatures [21; 22; 48], with, however, Sr\({}_{2}\)RuO\({}_{4}\) standing as one prominent candidate [49; 50; 51], for example. Nonetheless, over the past several years quantum crystals potentially fostering topological insulators have emerged as promising materials where on-site or local or momentum-independent paired states can nucleate at low temperature and represent topological superconductors, especially when these materials are doped or chemically substituted or intercalated to sustain an underlying Fermi surface, conducive to Cooper pairing. As such, local pairings in this family of materials inherit topology from the normal state band structure of charged fermions [52; 53; 54; 29; 55]. In addition, the proximity effect can be an efficient, realistic and experimentally viable route to induce topological superconductivity in various two-dimensional topological insulator materials.
In particular, doped or proximitized quantum anomalous Hall insulators with already broken TRS in the normal state (class A) can be an ideal place to harness a \(p+ip\) paired state [29]. Recently realized quantum anomalous Hall insulators in magnetically doped (by Cr or V or Fe, for example) thin layers of three-dimensional topological insulators, such as Bi\({}_{2}\)Se\({}_{3}\), Bi\({}_{2}\)Te\({}_{3}\) and Sb\({}_{2}\)Te\({}_{3}\)[57; 58; 59], are, therefore, promising in this respect. In the same spirit, doped or proximitized TRS preserving quantum spin Hall insulators (class AII), such as CdTe-HgTe [60; 61] and InAs-SbTe [62] quantum wells, constitute a suitable ground to realize a \(p\pm ip\) paired state [54]. Finally, high-\(T_{c}\) cuprate superconductors (in particular, Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8}\)) are promising for realizing a \(d+id\) paired state [63; 64; 65]. Although a clear signature of the \(d+id\) paired state in Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8}\) thus far remains elusive, a _twisted_ interface between bilayer Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+x}\) can host such an exotic paired state near a 45\({}^{\circ}\) twist angle [66].
Our direct computation of the quantized longitudinal thermal conductance \(G_{xx}^{th}\) for the class DIII \(p\pm ip\) paired state with net zero first Chern number (\(C\)) can have far-reaching consequences. For example, if inside a two-dimensional topological paired state the total first Chern number is zero, which may occur due to the presence of spin or other (such as orbital and/or sublattice) degrees of freedom or multiple Fermi pockets around which the net first Chern number cancels out, \(G_{xx}^{th}\) can still probe the total number of topologically robust Majorana edge modes in the superconducting ground state, each of which contributes \(\kappa_{0}/2\) to \(G_{xx}^{th}\). In the same spirit, \(G_{xx}^{th}\) can also probe _weak_ planar topological superconductors, devoid of any strong topological invariant [29]. Such a scenario may appear commonly in the superconducting ground state of doped crystalline topological materials [67; 68; 69], typically featuring band inversion at an even number of high-symmetry points in the BZ connected via crystal symmetries, which are nowadays routinely found in nature by employing topological quantum chemistry [70; 71; 72; 73; 74; 75].
###### Acknowledgements.
S.K.D. was supported by a Startup Grant of B.R. from Lehigh University. B.R. was supported by NSF CAREER Grant No. DMR-2238679.
|
2309.10088 | Analyzing the Endeavours of the Supreme Court of India to Transcribe and
Translate Court Arguments in Light of the Proposed EU AI Act | The Supreme Court of India has been a pioneer in using ICT in courts through
its e-Courts project in India. Yet another leap, its recent project, Design,
Development, and Implementation of Artificial Intelligence (AI) solution, tools
for transcribing arguments and Court proceedings at Supreme Court of India, has
potential to impact the way AI algorithms are designed in India, and not just
for this particular project. In this paper, we evaluate the endeavours of the
Supreme Court of India in light of the state of AI technology as well as the
attempts to regulate AI. We argue that since the project aims to transcribe and
translate the proceedings of the constitutional benches of the Supreme Court,
it has potential to impact rule of law in the country. Hence, we place this
application in High Risk AI as per the provisions to the proposed EU AI Act. We
provide some guidelines on the approach to transcribe and translate making the
maximum use of AI in the Supreme Court of India without running into the
dangers it may pose. | Kshitiz Verma | 2023-09-18T19:03:08Z | http://arxiv.org/abs/2309.10088v1 | Analyzing the Endeavours of the Supreme Court of India to Transcribe and Translate Court Arguments in Light of the Proposed EU AI Act
###### Abstract
The Supreme Court of India has been a pioneer in using ICT in courts through its e-Courts project in India. Yet another leap, its recent project, _Design, Development, and Implementation of Artificial Intelligence (AI) solution, tools for transcribing arguments and Court proceedings at Supreme Court of India_, expressed through bid number **Ref No. AI Solutions/2023/SCI** has potential to impact the way AI algorithms are designed in India, and not just for this particular project. In this paper, we evaluate the endeavours of the Supreme Court of India in light of the state of AI technology as well as the attempts to regulate AI. We argue that since the project aims to transcribe and translate the proceedings of the constitutional benches of the Supreme Court, it has potential to impact _rule of law_ in the country. Hence, we place this application in High Risk AI as per the provisions to the proposed EU AI Act. We provide some guidelines on the approach to transcribe and translate making the maximum use of AI in the Supreme Court of India without running into the dangers it may pose.
## 1 Introduction
Indian judiciary is burdened by a heavy backlog of cases, and it is in a desperate search for innovative methods to address the backlog. The use of technology has helped the Indian judiciary to battle the situation (Verma, 2018). Technology is advancing at a very rapid pace, and the judiciary should leverage new technologies like artificial intelligence. In a written reply in Rajya Sabha on 07 April 2022, the then Union Minister of Law and Justice, Shri Kiren Rijiju, had highlighted various use cases of AI in the judiciary in India (PIB-Delhi, 2022). The Supreme Court of India has decided to carry out a pilot project on the use of AI for transcribing constitutional court arguments in real time (SupremeCourtofIndia, 2023). When this paper was written, the Supreme Court had already asked for the participation of technology companies in a bid to use AI in the court proceedings.
### Our Focus on the Supreme Court Bid Document
As the Supreme Court mentions in its bid document, the goal is to leverage artificial intelligence, machine learning and deep learning for various purposes, including the automated transcription of the proceedings of the Supreme Court's constitutional benches (SupremeCourtofIndia, 2023). The tasks that are of interest to us, and are discussed in the paper, are enumerated and, for the sake of completeness, reproduced below.
1. Page 40, Section V(b)(i.) _Transcribing of arguments and court proceedings on real time basis and displaying the same on monitors in the courtroom._
2. Page 40, Section V(b)(iii.) _The transcription generated primarily must be in English language. However, the transcription generated must also be capable of being translated into the languages stated in the Eight Schedule of the Constitution of India, 1950._
3. Page 40, Section V(b)(iv.) _The AI tool to be developed and deployed must have an advance level of natural language processing, to understand legal terms, documents, petitions, judgments, etc. and to automatically classify them in the relevant specialization._
4. Page 40, Section V(b)(v.) _The AI tool to be developed and deployed must have software and machine learning capabilities, to build a sophisticated hierarchy of classification models to analyse the contents of documents transcribed contained in unstructured text, rich text, html, PDF documents, to have a prediction, intelligent processing, smart classification, content extraction and summarization._
AI is a constantly evolving scientific area right now. There is no consensus on any single definition of AI, or even on the threats that it may pose (Hinton, 2023; Bengio, 2023; LeCun, 2023; Ng, 2023). In such times, it may be said that it is very courageous of the Supreme Court to invite bids for using AI to improve access to justice in India. However, these endeavours have to be properly evaluated on a legal and technical basis so as to appropriately balance the need of the judiciary to use AI against the robustness of the technology and its fitness for the purpose.
### Elements of High Risk AI in the Supreme Court Project
The European Union's early and landmark step to frame a law for the use of artificial intelligence is commendable and has been lauded throughout the world. The proposed EU AI Act of the European-Parliament (2021) considers that AI systems that adversely impact the following may be classified as high-risk AI:
1. Health and safety of natural persons
2. Adverse impact on fundamental rights of natural persons
3. Democracy and rule of law
4. Environment
The intention of the Supreme Court is to use AI for constitutional bench matters, which are binding on courts all over India, including the Supreme Court benches of lesser strength than the said constitutional bench. Since the jurisdiction of Supreme Court constitutional benches is really supreme, it may directly impact the health and safety of citizens if a case before it concerns health or any other rights, including fundamental rights. It may impact the environment if the case is of such a nature. Even if all such impacts may look far-fetched, being used in constitutional benches, AI may certainly impact the rule of law. In any case, playing the devil's advocate and assuming a worst-case scenario will only help the Supreme Court to assess the gravity of applying AI in its proceedings. Such an application of AI, therefore, may be placed under the high-risk category as defined in the proposed EU AI Act.
Thus, the stakes are high if the Supreme Court wants to utilize the capabilities of AI for transcribing and translating constitutional benches proceedings as any mistake made by the AI may really lead to a blunder. For this reason, borrowing some procedures for conformity assessment from the proposed EU AI Act may be helpful.
### Anil Ambani's Case
Anil Ambani's case (EconomicTimes, 2019) is an excellent example of the sensitivity of the words used in Supreme Court judgments and of how AI may catastrophically change meanings. Transcribing speech may not be a big deal when a machine learning algorithm on YouTube tries to generate the transcript of a random video. However, when it is utilized by the Supreme Court of India, it no longer remains an ordinary application. It has the potential to touch millions, if not billions, of lives in India. We are all aware of the infamous case in which the word "NOT" was removed from an order; the removal of this one word resulted in two staff members being terminated from their services (EconomicTimes, 2019).
#### 1.3.1 Reasonably Foreseeable Misuse
A wrong transcription may have a tremendous impact on rule of law. Human beings may suffer from automation bias. This may mean that the burden of correctness may be placed on the technology. Thus, even if the error should have been caught by human beings, it may creep in due to automation bias. However, since the transcriptions will be displayed on screens in the court, someone should be able to figure out such mistakes.
#### 1.3.2 Malicious Manipulation
As above, a wrong transcription may still be acceptable, as it may be corrected in later versions or by a careful use of the technology. However, if the technology is weak in cybersecurity and it is possible to manipulate the AI models maliciously, then it might be easy to disguise a deliberate malicious manipulation as a wrong transcription by the model. This may lead to more cases like the above and may make the error more difficult to attribute to a human being.
## 2 Use of AI in Judiciary in Other Jurisdictions
Many researchers in the USA have found use cases for AI in the judicial system where courts are given wide discretionary power, for example, in granting bail and asylum. Humans are known to suffer from various biases, and such biases often surface in bail and asylum decisions. In such cases the use of algorithms may be better (Sunstein, 2021) (Jung et al., 2020) (Kleinberg et al., 2018). Many of these proposals are being considered for improving access to justice in such scenarios. After a yearlong trial, Charlotte's jail population is down almost 20 percent without increasing crime. At the same time, the use of AI in policing has been criticized by many researchers (Castelvecchi, 2020). In the UK, the Durham HART model, which was developed jointly by Durham Constabulary and the University of Cambridge, has been criticized for its adverse societal impacts and on legal grounds (Oswald et al., 2018).
Singapore courts have announced plans to use AI for automatic transcription of court proceedings, and a use case of legal area classification is discussed in Howe et al. (2019).
We do not aim to provide an exhaustive list here; a more comprehensive discussion of such related work may be found in Jauhar et al. (2021).
## 3 State of AI for various Applications
AI is not yet mature. It is still evolving, and a steady state has not been achieved that would allow one to say something definitive about its use. One recent example is the degradation in the performance of GPT-4 on some tasks (Chen et al., 2023). Applications of AI in the judiciary may suffer from bias, discrimination, lack of transparency, and loss of autonomy of judicial actors (Leslie et al., 2021).
The intentions of the Supreme Court may be right, but the technology is not yet at a stage where it can be seamlessly used in the Supreme Court for transcription or translation, more so when the courtroom environment may be noisy or when the goal is to translate documents into all the languages recognized by the Constitution of India. Hence, any project carried out in a period of 60 days is certainly going to fall short of the quality that the hon'ble court is looking for (SupremeCourtofIndia, 2023). The solution and the solution provider must undergo certain conformity assessments to ensure that the system works the way it is claimed to work and that all foreseeable risks have been recognized, acknowledged, mitigated and eliminated. The model should have some degree of explainability rather than being a black box, as most current AI models are. Hence, the Supreme Court should create a framework for the companies solving such complex problems. We are afraid that the current bid document of the Supreme Court of India is not specific enough to provide guidelines on the design of the AI to the solution providers.
The Supreme Court can be a torchbearer in the development of explainable AI in India. If the Supreme Court puts forth a requirement of explainability, it might inspire many other government agencies to do so, allowing a swift integration of AI into mainstream governance while mitigating the risks that AI may pose.
We now discuss the state of AI in the two applications that are mentioned in the Supreme Court bid.
### AI in Relation to Transcription of Proceedings
Speech recognition technology was relatively late to take off compared to vision and natural language processing. One of the revolutionary ideas in automatic speech recognition was implemented by Baevski et al. (2020). This advance has enabled scaling up to 1,000,000 hours of training data (Zhang et al., 2022). Even on YouTube, which has a vast amount of video to train on, accuracy is lower than desired (Lin, 2022). One of the state-of-the-art speech recognition models, Whisper, developed by OpenAI, was trained on 680,000 hours of data (Radford et al., 2023). It still has an average word error rate (WER) of 12% on various speech datasets. On a more curated Large Vocabulary Continuous Speech Recognition (LVCSR) dataset, the error rates are still around 6%. Automatic speech recognition (ASR), like other AI, is also known to exhibit bias (Martin and Wright, 2022) (Feng et al., 2021). Thus, we have enough mainstream research literature from automatic speech recognition to show that the technology is far from being 100% accurate. This calls for evaluation of the transcription done by any system, particularly if it is for judicial use.
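For reference, the word error rate quoted above is computed with the standard word-level Levenshtein dynamic program; a minimal sketch (the example sentences are hypothetical) is:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    via the classic edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. the missing-"NOT" scenario: a single deletion in a 9-word order
print(word_error_rate("we do not find any merit in this appeal",
                      "we do find any merit in this appeal"))  # ~0.11
```

Even a seemingly low WER of 11% here corresponds to exactly the kind of single-word omission that caused the blunder discussed in Section 1.3.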
Different accents add further complexity to the problem of automatic speech recognition (Viglino et al., 2019), (Prasad and Jyothi, 2020), (Hinsvark et al., 2021). The judges in the Supreme Court may come from different states and may have different accents. Moreover, the accent of Indians differs from that of speakers in English-speaking nations. This comparison is important because most of the success of automatic speech recognition comes from Western English datasets, and such datasets are biased against non-native speakers. Hence, the Supreme Court proceeding is a very particular scenario, much different from usual transcription scenarios. The datasets required should come from this very distribution. Hence, it is imperative for the Supreme Court to provide not just 5 videos but as many as are available, so that a good training dataset may be prepared, which may be further used by researchers and companies alike. AI models are important, but so is data (Sambasivan et al., 2021).
### AI in Relation to Translating Documents
Natural language processing and machine translation have seen amazing results for English, French, Spanish, Japanese, etc. However, it is not the same for all languages in the world. Joshi et al. (2020) study the disparity between languages from the point of view of research conducted and resources created for processing. This has created a big technological divide between the languages. They categorize languages into six classes, class 5 being the maximally resource-rich and class 0 the least. Table 1 provides the list of the languages in the Eighth Schedule of the Constitution of India along with the class they belong to as per Joshi et al. (2020). As per this classification of languages based on the available datasets and labels, the success of class 5 languages is unlikely to be replicated for the other classes. There is no Indian language in class 5. Hindi and other Indian languages fall in class 4 or lower. This means that datasets as well as labels for these languages are several orders of magnitude smaller compared to English. Thus, the same success stories are not likely to be sung for the translation of English documents into the languages recognized by the Constitution of India.
Hence, one has to be really careful when translating legal texts. Translating legal documents comes with its own pros and cons. While translations enable wider access to the legal provisions, the use of various words and their interpretations in other languages may create confusion too. However, the pros clearly outweigh the cons; it is worth doing, but again some guidelines and checks have to be put in place.
| Language | Class |
| --- | --- |
| Assamese | 1 |
| Bengali | 3 |
| Gujarati | 1 |
| Hindi | 4 |
| Kannada | 1 |
| Kashmiri | 1 |
| Konkani | 2 |
| Malayalam | 1 |
| Manipuri | 1 |
| Marathi | 2 |
| Nepali | 1 |
| Oriya | 1 |
| Punjabi | 2 |
| Sanskrit | 2 |
| Sindhi | 1 |
| Tamil | 3 |
| Telugu | 1 |
| Urdu | 3 |
| Bodo | 0 |
| Santhali | 1 |
| Maithili | 1 |
| Dogri | 0 |

Table 1: Class of the 22 languages recognized by the Eighth Schedule of the Constitution of India, as per Joshi et al. (2020). A higher class means a higher availability of data, which may enable high-quality processing.
### Other Complex Legal Tasks
The bid document mentions building systems that understand legal terms, documents, petitions, judgments, etc. and automatically classify them into the relevant specialization. The bid document also aims to build a sophisticated hierarchy of classification models to analyze the contents of transcribed documents contained in unstructured text, rich text, HTML and PDF documents, and to provide prediction, intelligent processing, smart classification, content extraction and summarization.
AI systems have no "understanding" as such. It has even sparked a philosophical debate on what understanding even means. While classification is certainly a doable task, preparing training datasets for it is time consuming and the exact problems that need to be solved also needs to be defined first. The current text in the bid document is not clear. So, we do not address these issues in the current paper and focus on transcription and translation only.
## 4 Guidelines for the Implementation Inspired by the Proposed EU AI Act
The stakes are high, as constitutional benches are going to use AI in their proceedings for transcription and translation. Any mistake, if it goes unnoticed, may lead to a blunder. Hence, we need to minimize errors in the output of the AI system. For this, we need to follow some rules. Our further analysis is inspired by the proposed EU AI Act (European-Parliament, 2021). Thus, the application of AI in the Supreme Court Constitutional Bench proceedings may be classified as high-risk AI that may impact the rule of law. For this reason, some procedures for the conformity of high-risk AI may be borrowed from the proposed EU AI Act. The core principles for AI systems as provided in the proposed EU AI Act are reproduced below:
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Social and environmental well-being
A typical machine learning application has an average cycle of around 6-12 months. Hence, our first suggestion is that instead of making it a two-month project for a solution provider, it should be a much longer project for the Supreme Court itself. In this paper, by adhering to the principles enshrined in the proposed EU AI Act that are specific to the requirements of the Supreme Court, our goal is to provide some guiding principles to the solution provider that is going to implement the AI project. We enumerate the required conformity assessments below, as provided in Articles 9 to 15 of the proposed EU AI Act.
### Risk Management by the Solution Provider
A risk management system is a continuous, iterative process that runs throughout the entire life cycle of a high-risk AI system. In our context, it means that until the system for automatic transcription of the Supreme Court arguments is fully in place, its output should be monitored. Once the application becomes so robust that it loses its high-risk status, vigilance may be reduced. The idea is to monitor for the identification, estimation and evaluation of the known and reasonably foreseeable risks that the high-risk AI system can pose. There is clearly a reasonably foreseeable risk associated with the applications of AI for transcription and translation. We have also seen before that omitting just one word may lead to catastrophes, and automatic speech recognition systems are known to err on words while creating transcriptions, i.e., their transcriptions are far from being 100% accurate. Hence, our suggestions are as follows.
1. There must be a continuous monitoring and evaluation of the AI system during development by the solution provider, as well as for the whole duration it is in use by the Supreme Court.
2. The specific risks that may be posed must be identified, estimated and evaluated. In this case, the system may transcribe words wrongly, or insert or delete words. Estimation of such errors needs to be done well, and the solution must be evaluated for improvement. This may create situations like the Anil Ambani case. In the case of translations, wrong translations may create legal discrepancies of interpretation, which may create more problems than they solve.
3. Look for effective ways to eliminate, mitigate or reduce the risks identified. Improve the model and the training data. One of the best ways is to put the output for the public inspection, which the Supreme Court is already doing by running the transcriptions on the screens in the court. For translations, it may upload unreliable copies and ask the relevant legal fraternity to check for the quality of translation and help improve the quality of the training dataset by providing better translations.
### Data and Data Governance Followed by the Solution Provider
There must be a check on the quality of data being used for training the transcription and translation AI systems. The Supreme Court must ensure that the data used for training, validation and testing are appropriate for the intended purpose of the system. The data used should not exceed what is required for training the desired system, nor should the solution provider be deprived of necessary data. There must be a reasonable attempt to remove all kinds of biases from the system. The biases may be due to region, accent, gender or any other basis.
1. The foremost thing for the solution provider is to prepare a dataset that is worthy of solving the problem reasonably well. For modern deep learning systems, the bigger the dataset, the better. However, quality also plays a crucial role. Transcription and translation are both supervised learning tasks, so we need quality labelled datasets for both. Given that these tasks may pose a high risk to the rule of law, the solution provider needs to pay special attention to the quality of the datasets prepared, and they should reflect the same environment as a typical Supreme Court Constitutional Bench proceeding.
2. The data should be clearly defined for the purpose at hand, which in this case is transcribing and translation.
3. Proceedings of various high courts are also live-streamed. Even that data may be used for preparing a training dataset for transcription.
4. For translations, the high courts may also be asked to prepare translations of the judgments in the respective regional language so that the issues of less data for parallel corpus of English with Indian languages may be addressed.
### Transparency of Machine Learning Operations
The said transcription and translation AI systems should be designed and developed in such a way to ensure that their operations are sufficiently transparent to enable the solution provider and the Supreme Court and its registry to reasonably understand the system's functioning in accordance with the intended purpose of the AI system.
The degree to which the AI system can provide an explanation for the decisions it takes should also be maximized. In the present case, the explanation for why a sound was transcribed as a particular word is important. Likewise, why a particular translation was chosen for a particular sentence is important. Thus, relevant information about the Supreme Court's actions that may influence performance, including the type or quality of input data, should be included. Any necessary maintenance and care measures to ensure the proper functioning of the AI system should also be deployed.
1. The Supreme Court is already planning to show the transcriptions so generated on the screens in the courtroom on the real-time basis. This is a really good step for enforcing the solution provider towards transparent machine learning operations, as the AI system's performance may be seen by the legal fraternity and any errors may be readily found.
2. It may be an even better idea to use colours to interpret the output of the system (a minimal sketch of such a scheme follows this list). For example, if the system has high confidence in a generated transcription, it may colour it black, output with intermediate confidence may be coloured yellow, and output with low confidence may be coloured red. It may highlight deletions by underscores. This will help readers to catch errors effortlessly. However, automation bias may creep in, as the text with high AI confidence may get less attention from human readers than it deserves. A similar colouring scheme may be used for translation too.
3. Evaluations should be transparent. The AI system needs to understand if its performance changes as a function of changes in judge, accent, gender, region, etc. This should be taken as input in designing the AI systems.
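A minimal sketch of the black/yellow/red colouring scheme proposed in point 2 above is given below; the confidence thresholds and the token format are our assumptions, and a production system would tie them to calibrated ASR confidences.

```python
def colour_tokens(tokens, low=0.5, high=0.85):
    """Render (word, confidence) pairs with ANSI colours: plain/black for
    high confidence, yellow for intermediate, red for low, and underscores
    where the model suspects a dropped word (token is None)."""
    YELLOW, RED, RESET = "\033[93m", "\033[91m", "\033[0m"
    out = []
    for word, conf in tokens:
        if word is None:                       # suspected deletion
            out.append("____")
        elif conf >= high:
            out.append(word)                   # plain: high confidence
        elif conf >= low:
            out.append(f"{YELLOW}{word}{RESET}")
        else:
            out.append(f"{RED}{word}{RESET}")
    return " ".join(out)

print(colour_tokens([("the", 0.97), ("appeal", 0.91), (None, 0.0),
                     ("dismissed", 0.62), ("forthwith", 0.35)]))
```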
### Record keeping by the AI System
The AI system should keep a record of the way it is working by automatic recording of events ('logs'). This is to ensure the traceability of the way AI system is functioning.
1. The application should be designed to provide extensive logs for useful and important events. For example, it is good to keep logs for instances in which the system's output is marked with an underscore, yellow or red (a minimal sketch follows this list). This makes it easier to explain how the algorithm is behaving.
2. In the translation process, each sentence must be assigned a low, intermediate or high confidence and assigned colours as for the transcription process. The exact way in which the confidence may be assigned will depend on the implementation of the AI model.
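A minimal sketch of such structured logging is given below; the field names and the event taxonomy are hypothetical placeholders that the solution provider would adapt to its own pipeline.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("transcription_audit")

def log_flagged_token(event, word, confidence, t_start, t_end, courtroom):
    """Write one structured audit entry for a flagged transcription event
    (an underscore/yellow/red output), keeping model behaviour traceable."""
    audit.warning(json.dumps({
        "ts": time.time(),
        "event": event,                  # e.g. "low_confidence", "deletion"
        "word": word,
        "confidence": confidence,
        "audio_span_s": [t_start, t_end],
        "courtroom": courtroom,
    }))

log_flagged_token("low_confidence", "forthwith", 0.35, 812.4, 813.1, "Court 1")
```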
### Human Oversight
As the risk posed by an AI system increases, an increased amount of human oversight is required, proportionate to the risks associated with the system. Natural persons in charge of ensuring human oversight need to have a sufficient level of AI literacy and the necessary support and authority to exercise that function and to allow for thorough investigation, if required. Human oversight shall aim to minimize the risks to health, safety, fundamental rights and the rule of law. Human oversight shall take into account the specific risks, the level of automation, and the context of the AI system. This is to avoid automation bias, so that we do not consider the output of the system as correct without questioning it until a damage occurs.
1. The Supreme Court should appoint persons with sufficient AI literacy for this purpose. Their task will be solely to evaluate the functioning of the AI systems.
2. The solution provider should explicitly assign the task of human oversight to its team implementing the solution.
### Cyber Security
Since it has an impact on the rule of law in India, the system is very sensitive from a cybersecurity perspective. Such systems are sensitive to attacks, as changing just one word may lead to catastrophes, and cybercriminals may try to change the transcriptions deliberately by attacking the AI model and its parameters.
1. Security should be built in by design and by default, and security standards equivalent to those of financial institutions like banks should be put in force.
2. Due diligence should be exercised in making the systems secure by keeping a dedicated team of cybersecurity experts from either industry or academia.
3. The system should be resilient to an unauthorized change of the model parameters or any faults or inconsistencies.
4. Once a system is compromised, it may be easy to disguise a deliberate act as a mistake made by the AI model. Hence, the solution should take steps to minimize malicious manipulation.
### Technical Documentation Accompanying the Solution
A technical documentation showing that all the provisions regarding the risk management, data governance, transparency of machine learning operations, record keeping, human oversight and cybersecurity are met should be a mandatory deliverable by the solution provider before putting the AI system in use by the Supreme Court. It should provide information on all the principles mentioned above. It should include a general description of the AI system including:
1. The nature of the data likely or intended to be processed by the system.
2. The description of the hardware on which the AI system is intended to run.
3. A detailed and easily intelligible description of the system's main optimization goal or goals.
4. A detailed and easily intelligible description of the system's expected output and expected output quality.
5. A detailed and easily intelligible instructions for interpreting the system's output. Report errors in the output of the AI in black/yellow/red transcriptions mentioned before.
6. Some examples of scenarios for which the system should not be used. An explicit declaration from the solution provider if no such scenario exists.
7. A detailed description of the elements of the AI system and the process for its development, including:
    1. methods and steps performed for the development of the AI system,
    2. a description of the architecture, design specifications, algorithms and the data structures, how they relate to one another, and how they provide the overall processing of the AI system,
    3. a description of the data obtained, labelling procedures, data cleaning methodologies, etc.,
    4. an assessment of human oversight, i.e., a detailed report on how human oversight helped in mitigating the errors that may otherwise creep in,
    5. validation and testing procedures used, including information about the validation and testing data used and their main characteristics, and the metrics used to measure accuracy, robustness, etc.,
    6. cybersecurity measures put in place; does deploying this AI expose the SCI or its website to cybercriminals in any manner?
8. A description of the appropriateness of the performance metrics for the specific AI system.
9. A detailed description of the risk management system.
10. A report should be made public for the inspection of academics and civil society.
## 5 Conclusion
This paper presents state-of-the-art principles to be applied to AI systems deployed in sensitive areas like the judiciary. We conclude that the pilot project taken up by the Supreme Court of India to implement AI for transcribing the Supreme Court constitutional bench proceedings is commendable but a bit early in time. The success of the project largely depends on the datasets prepared for the said task. We also discussed the use of AI to translate legal texts from English to other Indian languages and concluded that such translation is an even more difficult task. In our opinion, the Supreme Court of India may take up a long-term project for the creation of datasets, which may be made available to the public at large. This will foster research. It will certainly take more than the 60 days stipulated by the Supreme Court in the current bid document, but it will yield a much more robust solution. Finally, we conclude by suggesting that the company providing the solution must be asked to provide details, and that a technical report as mentioned in Section 4.7 should be submitted.
|
2306.17604 | Broken ray transform for twisted geodesics on surfaces with a reflecting
obstacle | We prove a uniqueness result for the broken ray transform acting on the sums
of functions and $1$-forms on surfaces in the presence of an external force and
a reflecting obstacle. We assume that the considered twisted geodesic flows
have nonpositive curvature. The broken rays are generated from the twisted
geodesic flows by the law of reflection on the boundary of a suitably convex
obstacle. Our work generalizes recent results for the broken geodesic ray
transform on surfaces to more general families of curves including the magnetic
flows and Gaussian thermostats. | Shubham R. Jathar, Manas Kar, Jesse Railo | 2023-06-30T12:26:09Z | http://arxiv.org/abs/2306.17604v2 | # Broken ray transform for twisted geodesics on surfaces with a reflecting obstacle
###### Abstract.
We prove a uniqueness result for the broken ray transform acting on the sums of functions and \(1\)-forms on surfaces in the presence of an external force and a reflecting obstacle. We assume that the considered twisted geodesic flows have nonpositive curvature. The broken rays are generated from the twisted geodesic flows by the law of reflection on the boundary of a suitably convex obstacle. Our work generalizes recent results for the broken geodesic ray transform on surfaces to more general families of curves including the magnetic flows and Gaussian thermostats.
**Keywords.** geodesic ray transform, magnetic flows, Gaussian thermostats, broken rays, inverse problems.
**Mathematics Subject Classification (2020)**: 44A12, 58C99, 37E35
###### Contents
* 1 Introduction
* 1.1 Main result
* 2 Preliminaries
* 2.1 Basic notation
* 2.2 Analysis on the sphere bundle
* 2.3 Twisted geodesic flows
* 2.4 Dual \(\lambda\)-geodesic flow
* 2.5 Lemmas for twisted geodesic flows
* 2.6 Even and odd decomposition with respect to the reflection map
* 3 Transport equation for the broken \(\lambda\)-geodesics
* 3.1 Broken ray transforms and the transport equation
* 3.2 Dual transport equation
* 3.3 Jacobi fields for broken \(\lambda\)-geodesics
* 3.4 Regularity of solutions to the transport equation
* 4 Uniqueness for scalar functions and \(1\)-forms
* 4.1 Revisiting the boundary terms in the Pestov identity
* 4.2 Proof of Theorem 1.1
* A Geometry of twisted geodesic flows on surfaces
* A.1 Proof of Proposition 2.1
* A.2 Jacobi fields for \(\lambda\)-geodesics
* A.3 Convexity, concavity and signed \(\lambda\)-curvature
* A.4 Curvature of the dual \(\lambda\)-geodesic flow
* A.5 Proof of Lemma 2.4
## 1. Introduction
This article studies generalizations of the geodesic ray transform to general families of curves. Our main focus will be on broken ray tomography, where the trajectories of particles may reflect from the boundary of a reflecting obstacle according to the law of reflection. Furthermore, we consider a situation where the trajectories are influenced by an external force such as a magnetic field. Our study is limited to the two-dimensional case. Our main result, stated later in Theorem 1.1, shows that a function is uniquely determined from the collection of all of its line integrals over the twisted broken rays. We also obtain an analogous result corresponding to vector field tomography with its natural gauge.
Let \((M,g)\) be a smooth Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). We say that a curve \(\gamma\) is a twisted geodesic if it satisfies the \(\lambda\)-geodesic equation
\[D_{t}\dot{\gamma}=\lambda(\gamma,\dot{\gamma})i\dot{\gamma}\]
where \(i\) is the rotation by \(90\) degrees counterclockwise. The term \(\lambda(\gamma,\dot{\gamma})i\dot{\gamma}\) represents an external force that pushes a particle away from its usual geodesic trajectory, along which a particle free of any external influence would continue its motion. The case when \(\lambda(x,v)\) does not depend on the vertical variable \(v\) is called magnetic, and the case where \(\lambda(x,v)\) is linear in \(v\) is called Gaussian thermostatic; both are widely studied in dynamics [15, 15, 16, 17, 18, 19].
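To make the dynamics concrete, consider the Euclidean plane, where \(D_{t}\dot{\gamma}=\ddot{\gamma}\) and the \(\lambda\)-geodesic equation becomes the ODE \(\ddot{\gamma}=\lambda(\gamma,\dot{\gamma})\,i\dot{\gamma}\). The following sketch is an illustration only (the step size and the RK4 scheme are our choices): it traces such a curve numerically, and a constant \(\lambda\) reproduces the magnetic case, whose unit-speed trajectories are circles of radius \(1/|\lambda|\).

```python
import numpy as np

def rot90(v):
    # i: counterclockwise rotation by 90 degrees in the Euclidean plane
    return np.array([-v[1], v[0]])

def lambda_geodesic(x0, v0, lam, dt=1e-3, n_steps=6000):
    """Integrate x'' = lam(x, x') * i x' with a fixed-step RK4 on (x, v)."""
    def rhs(y):
        x, v = y[:2], y[2:]
        return np.concatenate([v, lam(x, v) * rot90(v)])
    y = np.concatenate([np.asarray(x0, float), np.asarray(v0, float)])
    traj = [y[:2].copy()]
    for _ in range(n_steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y[:2].copy())
    return np.array(traj)

# constant lam = 1 (magnetic case): a unit-speed circle of radius 1
traj = lambda_geodesic([0.0, 0.0], [1.0, 0.0], lam=lambda x, v: 1.0)
print(np.ptp(traj[:, 1]))   # diameter of the traced circle, approximately 2
```

Note that the exact flow preserves the speed \(|\dot{\gamma}|\), since the force is orthogonal to the velocity; the numerical scheme preserves it only approximately.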
The geodesic ray transform and closely related Radon transforms are studied by many authors and the area has a long history starting from the early twentieth century [14, 17, 18]. More recent advances include the solenoidal injectivity of tensor tomography in two dimensions [19, 18, 17]. In higher dimensions, the geodesic ray transforms are fairly well-understood in negative curvature [13, 14, 15], and when a manifold has a strictly convex foliation [1, 18, 19, 20]. The geodesic ray transform is closely related to the boundary rigidity problem [18, 20] and the spectral rigidity of closed Riemannian manifolds [10, 11, 17]. Other recent considerations include generalizations of many existing results to some classes of open Riemannian manifolds [1, 11, 12, 13] and to the matrix weighted ray transforms [16, 17] as well as their statistical analysis [20, 21].
The ray transforms for twisted geodesics and general families of curves have been studied recently in [1, 14] and in the appendix of [19] by Hanming Zhou. It is shown in [1] that the twisted geodesic ray transform is injective on the simple Finslerian surfaces. For the most recent other studies, we refer to [14]. We give a more detailed account of the works that study inverse problems for the magnetic or Gaussian thermostat flows in Sections 2.3.1 and 2.3.2, respectively.
The broken ray transform has been studied extensively. In the case of a strictly convex obstacle, the uniqueness result for the broken ray transform of scalar functions on Riemannian surfaces of nonpositive curvature were obtained in [10]. This result was later generalized to higher dimensions and tensor fields of any order in [16]. In the case of rotational (or spherical) symmetry, one may sometimes solve these and related problems using local results and data avoiding the obstacle when the manifold satisfies the Herglotz condition [1, 15, 16]. Broken lens rigidity was studied recently in [11], and a broken non-Abelian ray transform in Minkowski space in [1]. Other geometric results include boundary determination from a broken ray transform [15] and a reflection approach using strong symmetry assumptions [15], for example letting to solve the broken ray transforms on flat boxes over closed billiard trajectories [15, 16]. Numerical reconstruction algorithms and stability for the mentioned problem on the flat boxes would follow directly from [13, 14]. Artifacts appearing in the inversion of
a broken ray transform was studied recently in the flat geometry [11]. We refer to the work [10] regarding a possibility to have more than just one reflecting obstacle. Finally, some related results without proofs are stated in the setting of curve families in the Euclidean disk [12].
### Main result
We briefly recall the setting of our work; for further details, we point to the later sections. Let \((M,g)\) be a compact oriented smooth Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint subsets. Let \(\nu\) denote the inward unit normal. We define the reflection map \(\rho:\partial SM\to\partial SM\) by
\[\rho(x,v)=(x,v-2\langle v,\nu\rangle_{g}\nu)\,.\]
A curve \(\gamma\) on \(M\) is called a _broken \(\lambda\)-ray_ if \(\gamma\) is a \(\lambda\)-geodesic in \(\operatorname{int}(M)\) and reflects on \(\mathcal{R}\) according to the law of reflection
\[\rho(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))=(\gamma(t_{0}+),\dot{\gamma}(t_{0}+))\]
whenever there is a reflection at \(\gamma(t_{0})\in\mathcal{R}\).
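In coordinates, the reflection law is a plain vector identity, and broken \(\lambda\)-rays can be traced by alternating flow steps with reflections. The sketch below is a rough illustration in a Euclidean annulus with a circular obstacle; the naive event detection, the explicit Euler stepping and the chosen radii are our simplifications. Line integrals of a function along such trajectories are precisely the data of the transform studied below.

```python
import numpy as np

def reflect(v, nu):
    """Law of reflection rho: v -> v - 2 <v, nu> nu for a unit normal nu."""
    return v - 2.0 * np.dot(v, nu) * nu

def broken_ray(x0, v0, lam, r_obstacle=0.4, r_outer=1.0, dt=1e-3, n_max=20000):
    """Trace a broken lambda-ray in the annulus r_obstacle <= |x| <= r_outer:
    Euler steps for x'' = lam(x, x') i x' in the interior, specular
    reflection on the inner circle, and stop on exit through the outer one."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    traj = [x.copy()]
    for _ in range(n_max):
        a = lam(x, v) * np.array([-v[1], v[0]])   # i v: rotate 90 degrees
        x_new, v_new = x + dt * v, v + dt * a
        if np.linalg.norm(x_new) <= r_obstacle:   # hit the obstacle R
            nu = x / np.linalg.norm(x)            # unit normal at the wall
            v_new, x_new = reflect(v, nu), x      # reflect; stay at the wall
        x, v = x_new, v_new
        traj.append(x.copy())
        if np.linalg.norm(x) >= r_outer:          # reached the emitter E
            break
    return np.array(traj)

# a magnetic broken ray: constant lam, launched inward from near the emitter
path = broken_ray([0.0, 0.99], [0.6, -0.8], lam=lambda x, v: 0.5)
```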
We call a dynamical system \((M,g,\lambda,\mathcal{E})\)_admissible_ (cf. Definition 3.4 with more details) if
1. the emitter \(\mathcal{E}\) is strictly \(\lambda\)-convex;
2. the obstacle \(\mathcal{R}\) has admissible signed \(\lambda\)-curvature;
3. the Gaussian \(\lambda\)-curvature of \((M,g,\lambda)\) is nonpositive;
4. the broken \(\lambda\)-geodesic flow is nontrapping;
5. there exists \(a>0\) such that every broken \(\lambda\)-ray \(\gamma\) has at most one reflection at \(\mathcal{R}\) with \(|\langle\nu,\dot{\gamma}\rangle|<a\).
The _broken \(\lambda\)-ray transform_ of \(f\in C^{2}(SM)\) is defined by
\[If(x,v)=\int_{0}^{\tau_{x,v}}f(\phi_{t}(x,v))dt,\quad(x,v)\in\pi^{-1}\mathcal{ E},\]
where \(\phi_{t}(x,v)=(\gamma_{x,v}(t),\dot{\gamma}_{x,v}(t))\) is the broken \(\lambda\)-geodesic flow, \(\tau_{x,v}\) is the travel time of \(\phi_{t}(x,v)\) and \(\pi:SM\to M\) is the projection \(\pi(x,v)=x\). Our main theorem is the following uniqueness result for the sums of functions and \(1\)-forms.
**Theorem 1.1**.: _Let \((M,g,\lambda,\mathcal{E})\) be an admissible dynamical system with \(\lambda\in C^{\infty}(SM)\). Let \(f(x,\xi)=f_{0}(x)+\alpha_{j}(x)\xi^{j}\) where \(f_{0}\in C^{2}(M)\) is a function and \(\alpha\) is a \(1\)-form with coefficients in \(C^{2}(M)\). If \(If=0\), then \(f_{0}=0\) and \(\alpha=dh\) where \(h\in C^{3}(M)\) is a function such that \(h|_{\mathcal{E}}=0\)._
Our proof is based on the ideas introduced in [13, 14] and the Pestov identity for the twisted geodesic flows in [1, 1]. The proof could be split into three main parts, each of them having their own technical challenges, resolved in our work for a general \(\lambda\in C^{\infty}(SM)\).
1. One has to analyze a generalized Pestov identity with boundary terms and decompose the boundary terms into the even and odd parts with respect to the reflection;
2. One has to reduce the problem into a related transport problem for the broken \(\lambda\)-geodesics and observe if the Pestov identity and the analysis of the first step applies to the solutions of the transport equation, then the problem can be solved;
3. One has to show sufficient regularity for the solutions of the broken transport equation. This can be done by analyzing carefully the behaviour of the broken \(\lambda\)-rays,
broken \(\lambda\)-Jacobi fields at the reflection points, and utilizing a time-reversibility property for the pair of flows with respect to \(\lambda(x,v)\) and \(-\lambda(x,-v)\).
### Acknowledgments
M.K. would like to thank Mikko Salo for suggesting research on the magnetic broken ray transforms and helpful discussions. J.R. thanks Gabriel P. Paternain for many helpful discussions related to this work. S.R.J. and J.R. would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, for support and hospitality during _Rich and Nonlinear Tomography - a multidisciplinary approach_ in 2023 where part of this work was done (supported by EPSRC Grant Number EP/R014604/1). S.R.J. acknowledges the Prime Minister's Research Fellowship (PMRF) from the Government of India for his PhD work. M.K. was supported by MATRICS grant (MTR/2019/001349) of SERB. J.R. was supported by the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy of Science and Letters.
## 2. Preliminaries
In this section, we introduce our notation and concisely present many primary definitions employed throughout the article. We closely follow the notation used in the recently published book by Paternain, Salo and Uhlmann [4]. We also refer to [11] for the basics of Riemannian geometry.
### Basic notation
Throughout the article, we denote by \((M,g)\) a complete oriented smooth Riemannian manifold with or without boundary. We always assume that \(M\) is a surface, i.e. \(\dim(M)=2\). We denote the Levi-Civita connection or covariant derivative of \(g\) by \(\nabla\), and the determinant of \(g\) by \(|g|\). When the covariant derivative is restricted to a smooth curve \(\gamma\), we simply write \(D_{t}=\nabla_{\dot{\gamma}}\). We sometimes emphasize the base point \(x\in M\) in the notation of the metric as \(g_{x}\) and other operators but this is often omitted. We denote the volume form of \((M,g)\) by \(dV^{2}:=\left|g\right|^{1/2}dx_{1}\wedge dx_{2}\), expressed in any local positively oriented coordinates. For any vector \(v\in T_{x}M\), let \(v^{\perp}\in T_{x}M\) denote the unique vector obtained by rotating \(v\) counterclockwise by \(90^{\circ}\). This vector satisfies:
\[\left|v^{\perp}\right|_{g}=|v|_{g},\quad\left\langle v,v^{\perp}\right\rangle=0\]
and forms a positively oriented basis of \(T_{x}M\) with \(v\), when \(v\neq 0\). Note that we often also write \(v^{\perp}\) as \(iv\). Given a vector \(v\in T_{x}M\) and a positive orientation on \(M\), we denote by \(v_{\perp}=-v^{\perp}\) the clockwise rotation. We denote by \(K\) the Gaussian curvature of \((M,g)\).
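For orientation, here is the flat sanity check (our computation, not part of the original text): in \(\mathbb{R}^{2}\) with the standard metric and orientation, if \(v=(v_{1},v_{2})\), then
\[v^{\perp}=iv=(-v_{2},v_{1}),\qquad v_{\perp}=-v^{\perp}=(v_{2},-v_{1}).\]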
The signed curvature of the boundary \(\partial M\) is defined as
\[\kappa:=\langle D_{t}\dot{\delta}(t),\nu\rangle_{g}\]
where \(\delta(t)\) represents an oriented unit-speed curve that parametrizes the boundary \(\partial M\) and \(\nu\) is the inward unit normal along \(\delta(t)\). For a comprehensive explanation, please refer to [11, Chapter 9, p. 273]. Furthermore, we define the second fundamental form of the boundary \(\partial M\) as follows:
\[\mathrm{I\!I}_{x}(v,w):=-\left\langle\nabla_{v}\nu,w\right\rangle_{g}\]
where \(x\in\partial M\) and \(v,w\in T_{x}\partial M\) (cf. [4, p. 56]). We say that \(\partial M\) is strictly convex at \(x\in\partial M\) if \(\mathrm{I\!I}_{x}\) is positive definite, i.e. \(\mathrm{I\!I}_{x}(v,v)>0\) for any \(v\in T_{x}\partial M\setminus\{0\}\). We say that \(M\) has a strictly convex boundary if \(\partial M\) is strictly convex for any \(x\in\partial M\). We say that \(\partial M\) is strictly concave at \(x\in\partial M\) if \(\mathrm{I\!I}_{x}\) is negative definite, i.e. \(\mathrm{I\!I}_{x}(v,v)<0\) for any \(v\in T_{x}\partial M\setminus\{0\}\). The relation between the signed curvature and the second fundamental form is given by \(\kappa=\mathrm{I\!I}(\dot{\delta},\dot{\delta})\) (cf. [11, Chapters 8 and 9]). If the boundary is strictly
convex, then the signed curvature is positive, whereas if it is strictly concave, the signed curvature is negative.
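As a model computation (ours, for concreteness): for the Euclidean unit disk, the inward unit normal along the boundary circle is \(\nu(x)=-x\), so \(\nabla_{v}\nu=-v\) for \(v\in T_{x}\partial M\) and
\[\mathrm{I\!I}_{x}(v,v)=-\left\langle\nabla_{v}\nu,v\right\rangle_{g}=|v|_{g}^{2}>0,\]
i.e. the boundary is strictly convex with \(\kappa=1\). For the annulus \(\{\,r\leq|x|\leq R\,\}\), the inner circle has inward normal \(\nu(x)=x/r\), giving \(\mathrm{I\!I}_{x}(v,v)=-|v|_{g}^{2}/r<0\), a strictly concave boundary component with \(\kappa=-1/r\); this is precisely the shape of the obstacle problems appearing in Section 3.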
### Analysis on the sphere bundle
In this article, we often assume that \(M\) is a Riemannian manifold with smooth boundary \(\partial M\). We denote by \(SM\) the unit sphere bundle of \(M\), i.e.
\[SM:=\{\,(x,v)\,;\,v\in T_{x}M,\left|v\right|_{g}=1\,\}.\]
The boundary of \(SM\) is given by
\[\partial SM:=\{\,(x,v)\in SM\,;\,x\in\partial M\,\}.\]
We define the influx and outflux boundaries of \(SM\) as the following sets
\[\partial_{\pm}SM:=\{\,(x,v)\in\partial SM\,;\,\pm\left\langle v,\nu(x)\right\rangle _{g}\geq 0\,\}.\]
The glancing region is defined as \(\partial_{0}SM:=\partial_{+}SM\cap\partial_{-}SM=S(\partial M)\). We denote by \(dS_{x}\) the volume form of \((S_{x}M,g_{x})\) for any \(x\in M\). The sphere bundle \(SM\) is naturally associated with the measure
\[d\Sigma^{3}:=dV^{2}\wedge dS_{x}\]
called the Liouville form.
Let us denote by \(X\) the geodesic vector field, \(V\) the vertical vector field and the orthogonal vector field \(X_{\perp}:=[X,V]\) (cf. [4, Chapter 3.5] for more details). The following structure equations hold
\[[X,V]=X_{\perp},\quad[X_{\perp},V]=-X,\quad[X,X_{\perp}]=-KV \tag{2.1}\]
where \(K\) is the Gaussian curvature of \((M,g)\)[4, Lemma 3.5.5]. The sphere bundle \(SM\) is equipped with the unique Riemannian metric \(G\) such that \(\{X,-X_{\perp},V\}\) forms a positively oriented orthonormal frame. The metric \(G\) is called the Sasaki metric, and it holds that \(dV_{G}=d\Sigma^{3}\)[4, Lemma 3.5.11]. We denote by \(dV^{1}\) the volume form of \((\partial M,g)\). This leads to the definition of the volume form
\[d\Sigma^{2}:=dV^{1}\wedge dS_{x}\]
on \(\partial SM\). We note that \(d\Sigma^{2}=dV_{\partial SM}\) where on the right hand side the volume form is induced by the Sasaki metric.
We define the following \(L^{2}\) inner products
\[(u,w)_{SM}:=\int_{SM}u\overline{w}d\Sigma^{3},\quad(f,g)_{\partial SM}:=\int_ {\partial SM}f\overline{g}d\Sigma^{2}.\]
We next recall simple integration by parts formulas. For any \(u,w\in C^{1}(SM)\), the following formulas hold [4, Proposition 3.5.12]:
\[\begin{split}(Xu,w)_{SM}&=-(u,Xw)_{SM}-(\left\langle v,\nu\right\rangle u,w)_{\partial SM}\\ (X_{\perp}u,w)_{SM}&=-\left(u,X_{\perp}w\right)_{SM} -\left(\left\langle v_{\perp},\nu\right\rangle u,w\right)_{\partial SM}\\ (Vu,w)_{SM}&=-(u,Vw)_{SM}.\end{split} \tag{2.2}\]
Finally, we recall the vertical Fourier decomposition. We define the following spaces of eigenvectors of \(V\)
\[H_{k}:=\{\,u\in L^{2}(SM)\,;\,-iVu=ku\,\},\quad\Omega_{k}:=\{\,u\in C^{\infty} (SM)\,;\,-iVu=ku\,\} \tag{2.3}\]
for any integer \(k\in\mathbb{Z}\). It holds that any \(u\in L^{2}(SM)\) has a unique \(L^{2}\)-orthogonal decomposition
\[u=\sum_{k=-\infty}^{\infty}u_{k},\quad\|u\|^{2}=\sum_{k=-\infty}^{\infty}\|u_{k} \|^{2},\quad u_{k}\in H_{k}. \tag{2.4}\]
If \(u\in C^{\infty}(SM)\), then \(u_{k}\in C^{\infty}(SM)\) and the series converges in \(C^{\infty}(SM)\).
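Concretely, if \(\theta\) denotes the angle coordinate on the fibers \(S_{x}M\) (so that \(V=\partial_{\theta}\)), then the spaces (2.3) consist of the fiberwise Fourier modes:
\[u_{k}(x,\theta)=\tilde{u}_{k}(x)e^{ik\theta},\qquad-iVu_{k}=ku_{k},\]
and (2.4) is simply the Fourier series of \(u\) in \(\theta\).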
We next discuss some important boundary operators following [4, Lemma 4.5.4]. We define the tangential vector field \(T\) on \(\partial SM\) by setting that
\[T:=(V\mu)X+\mu X_{\perp}|_{\partial SM}\]
where \(\mu(x,v):=\left\langle\nu(x),v\right\rangle_{g}\). The Pestov identity with boundary terms is due to Ilmavirta and Salo [13, Lemma 8]; see also [4, Proposition 4.5.5] and the higher dimensional generalization in [11, Lemma 8]. For any \(u\in C^{2}(SM)\) it holds that
\[\|VXu\|_{SM}^{2}=\|XVu\|_{SM}^{2}-(KVu,Vu)_{SM}+\|Xu\|_{SM}^{2}+(Tu,Vu)_{ \partial SM} \tag{2.5}\]
whenever \((M,g)\) is a compact Riemannian surface with smooth boundary. We generalize (2.5) in Proposition 2.1 to the case where \(X\) is replaced by the generator of a twisted geodesic flow.
### Twisted geodesic flows
We first recall the concept of \(\lambda\)-geodesic flows from the lectures of Merry and Paternain [12, Chapter 7]. Let \((M,g)\) be a complete oriented Riemannian surface (with or without boundary) and \(\lambda\in C^{\infty}(SM)\) be a smooth real valued function. We say that a curve \(\gamma:[a,b]\to M\) is a _\(\lambda\)-geodesic_ if it satisfies the _\(\lambda\)-geodesic equation_
\[D_{t}\dot{\gamma}=\lambda(\gamma,\dot{\gamma})i\dot{\gamma}. \tag{2.6}\]
When \(\lambda\equiv 0\), the \(\lambda\)-geodesics are the usual geodesics of \((M,g)\). One may think of the function \(\lambda\) as twisting the usual geodesics in order to model trajectories of particles moving in the presence of external forces. When \(\lambda\) is a smooth function on \(M\) or a \(1\)-form, the twisted geodesics correspond to the magnetic and thermostatic geodesics, respectively. For other advances in the context of inverse problems for \(\lambda\)-geodesics, we refer to [1, 10]. In particular, the class of \(\lambda\)-geodesics is large and can be characterized by only three natural properties of a curve family; for details see [1, Theorem 1.4].
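For a concrete picture (our worked example), let \((M,g)\) be the Euclidean plane and \(\lambda\equiv c\neq 0\) constant. Writing \(\dot{\gamma}(t)=(\cos\varphi(t),\sin\varphi(t))\) for a unit speed curve gives \(D_{t}\dot{\gamma}=\dot{\varphi}\,i\dot{\gamma}\), so (2.6) reduces to \(\dot{\varphi}=c\) and
\[\gamma(t)=\gamma(0)+\frac{1}{c}\left(\sin(ct+\varphi_{0})-\sin\varphi_{0},\;\cos\varphi_{0}-\cos(ct+\varphi_{0})\right),\]
i.e. the \(\lambda\)-geodesics traverse circles of radius \(1/|c|\), the Larmor orbits of a constant magnetic field.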
Let \(\gamma_{x,v}\) be the unique \(\lambda\)-geodesic solving (2.6) with the initial condition \((\gamma_{x,v}(0),\dot{\gamma}_{x,v}(0))=(x,v)\). As in [12, Exercise 7.2], one may define the _\(\lambda\)-geodesic flow_ by setting that
\[\phi_{t}:SM\to SM,\quad\phi_{t}(x,v)=(\gamma_{x,v}(t),\dot{\gamma}_{x,v}(t)).\]
The _infinitesimal generator of the \(\lambda\)-geodesic flow_ \(F\) is given by
\[F=X+\lambda V.\]
Using (2.1), one may derive the following commutator formulas [11, p. 537]:
\[[V,F]=-X_{\perp}+V(\lambda)V,\quad[V,X_{\perp}]=F-\lambda V,\quad[F,X_{\perp} ]=\lambda F-(K+X_{\perp}(\lambda)+\lambda^{2})V. \tag{2.7}\]
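For instance, the first commutator in (2.7) follows in one line from the structure equations (2.1) and the Leibniz rule:
\[[V,F]=[V,X]+[V,\lambda V]=-X_{\perp}+V(\lambda)V.\]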
We use the following notations
\[P:=VF,\quad\widetilde{P}:=FV.\]
Notice that the formal \(L^{2}\) adjoint of the operator \(P\) is \(P^{*}=(F+V(\lambda))V\neq\widetilde{P}\). Let us define the _\(\lambda\)-curvature_ of \((M,g,\lambda)\) as a map in \(C^{\infty}(SM)\) by the formula
\[K_{\lambda}:=K+X_{\perp}(\lambda)+\lambda^{2}+F(V(\lambda)).\]
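As a quick check of the definition: for \(\lambda\equiv 0\) one recovers \(K_{\lambda}=K\), while for a constant magnetic strength \(\lambda\equiv c\) all three derivative terms vanish and
\[K_{\lambda}=K+c^{2}.\]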
We can now recall the following generalized Pestov identity by Dairbekov and Paternain for closed surfaces \(M\)[14, Theorem 3.3]:
\[\|Pu\|_{SM}^{2}=\|\widetilde{P}u\|_{SM}^{2}-(K_{\lambda}Vu,Vu)_{SM}+\|Fu\|_{SM}^{2} \tag{2.8}\]
for any \(u\in C^{\infty}(SM)\). This also holds on compact surfaces with boundary if one additionally assumes that \(u|_{\partial SM}=0\). We generalize (2.5) and (2.8) to surfaces with smooth boundary without making the simplifying assumption that \(u|_{\partial SM}=0\) (cf. Proposition 2.1). We also remark that similar Pestov identities, but in a slightly less explicit form, were derived by Asylbekov and Dairbekov for the \(\lambda\)-geodesic flows on Finslerian surfaces [1, Theorem 2.3]. In turn, the generalized Pestov identity with boundary terms is used to study generalized broken ray transforms.
We define the _signed \(\lambda\)-curvature_ by
\[\kappa_{\lambda}(x,v):=\kappa+\left\langle v_{\perp},\nu\right\rangle\lambda(x,v)=\kappa-\left\langle\lambda(x,v)iv,\nu(x)\right\rangle=\kappa-\left\langle\nu(x),\lambda(x,v)iv\right\rangle \tag{2.9}\]
and a related term by
\[\eta_{\lambda}(x,v):=\left\langle V(\lambda)(x,v)v,\nu\right\rangle, \tag{2.10}\]
appearing later in the Pestov identity (2.13). We remark that \(\kappa_{\lambda}\) and \(\eta_{\lambda}\) depend only on the values of \(\lambda\) on \(\partial SM\).
#### 2.3.1. Magnetic flows
We refer to the articles of Arnold [1] and Anosov-Sinai [1] as the first mathematical studies of magnetic flows. We mainly follow the notation used by Ainsworth in the series of works [1, 1, 2] on the integral geometry of magnetic flows. We further note the following works on different inverse problems for magnetic flows [1, 1, 2, 3], including the boundary, lens and scattering rigidity problems.
Let \(\Omega\) be a closed 2-form on \(M\) modeling a magnetic field. The Lorentz force \(Y:TM\to TM\) associated with the magnetic field \(\Omega\) is the unique bundle map such that
\[\Omega_{x}(\xi,\eta)=\left\langle Y_{x}(\xi),\eta\right\rangle_{g},\quad \forall x\in M,\ \xi,\eta\in T_{x}M.\]
We say that \(\gamma\) is a _magnetic geodesic_ if it satisfies the magnetic geodesic equation
\[D_{t}\dot{\gamma}=Y(\dot{\gamma}). \tag{2.11}\]
Notice now that since \(M\) is orientable, there exists a unique function \(\tilde{\lambda}:M\to\mathbb{R}\) such that \(\Omega=\tilde{\lambda}dV^{2}\). We may define \(\lambda=\tilde{\lambda}\circ\pi\). Now it holds that \(\gamma\) solves (2.11) if and only if it solves (2.6). We may define the _magnetic flow_ simply as the corresponding \(\lambda\)-geodesic flow with the fixed energy level \(1/2\) corresponding to the unit speed curves.
One may also view the magnetic flow as the Hamiltonian flow of \(H(x,v)=\frac{1}{2}\left|v\right|_{g}^{2}\) under the symplectic form
\[\omega:=\omega_{0}+\pi^{*}\Omega\]
where \(\omega_{0}\) is the symplectic structure of \(TM\) generated by the metric pullback of the canonical symplectic form on \(T^{*}M\). The magnetic geodesics are known to have constant speed, and different energy levels lead to different curves. We also remark that the magnetic flow is time-reversible if and only if \(\Omega\equiv 0\); hence, when \(\Omega\not\equiv 0\), the reversed curve \(\gamma_{x,v}(-t)\) is in general not a magnetic geodesic of \((M,g,\Omega)\). However, the magnetic field with flipped sign \(-\Omega\) reverses the orientation of geodesics, i.e. \(\gamma_{x,-v}^{-\Omega}(t)=\gamma_{x,v}^{\Omega}(-t).\) One may check that \(-\Omega\) is the dual of \(\Omega\) in the sense of Section 2.4.
#### 2.3.2. Thermostatic flows
The concept of Gaussian thermostats was proposed by Hoover for the analysis of dynamical systems in mechanics [14], though it appears earlier, for example, in the work of Smale [13]. The inverse problem for Gaussian thermostats has been more recently explored in the works of Dairbekov and Paternain [12, 13]. Other contributions have also been made by Assylbekov and Zhou [1], Assylbekov and Dairbekov [2], and Assylbekov and Rea [1]. In addition, the dynamical and geometrical properties of Gaussian thermostats have been extensively studied, as demonstrated in the contributions by Wojtkowski [15, 16], Paternain [17], Assylbekov and Dairbekov [2], and Mettler and Paternain [12, 13]. Gaussian thermostats also arise in Weyl geometry, see for instance [14].
Consider a smooth vector field \(E\) on \(M\), representing an external field. A _thermostatic geodesic_ satisfies the equation
\[D_{t}\dot{\gamma}=E(\gamma)-\frac{\langle E(\gamma),\dot{\gamma}\rangle}{| \dot{\gamma}|^{2}}\dot{\gamma}. \tag{2.12}\]
The flow \(\phi_{t}=(\gamma(t),\dot{\gamma}(t))\) is called the thermostatic flow. It is noteworthy that the thermostatic geodesics are time-reversible, which means that \(\phi_{t}(x,-v)=\phi_{-t}(x,v)\). When \(E=0\), the thermostatic geodesics are the usual geodesics. Given the \(1\)-form \(\lambda\) defined by \(\lambda(x,v):=\langle E(x),iv\rangle\), the equation (2.12) can be rewritten for unit speed curves as the corresponding \(\lambda\)-geodesic equation
\[D_{t}\dot{\gamma}=\lambda(\gamma,\dot{\gamma})i\dot{\gamma}.\]
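The last rewriting is a one-line check (ours): for a unit speed curve, \(\{\dot{\gamma},i\dot{\gamma}\}\) is an orthonormal basis of \(T_{\gamma}M\), so
\[E(\gamma)-\langle E(\gamma),\dot{\gamma}\rangle\dot{\gamma}=\langle E(\gamma),i\dot{\gamma}\rangle i\dot{\gamma}=\lambda(\gamma,\dot{\gamma})i\dot{\gamma}.\]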
### Dual \(\lambda\)-geodesic flow
It is well known that the usual geodesics are time-reversible. The magnetic geodesics are not time-reversible unless \(\Omega\equiv 0\) (cf. [12, p. 537]). In [1], it is mentioned that the magnetic geodesics corresponding to \(\lambda\) and \(-\lambda\) are in one-to-one correspondence through time reversal. This means that a curve \(t\mapsto\gamma_{x,v}(t)\) is a magnetic \(\lambda\)-geodesic if and only if the curve \(t\mapsto\gamma_{x,v}(-t)\) is a magnetic \((-\lambda)\)-geodesic. The thermostatic \(\lambda\)-geodesic flow, on the other hand, is time-reversible (see for instance [17, p. 88]). Therefore, the \(\lambda\)-geodesic flow for a general \(\lambda\in C^{\infty}(SM)\) is not time-reversible.
Next, we define the dual \(\lambda\)-geodesic flow as the time-reversed dynamical system related to \(\lambda\). To this end, we set
\[\lambda^{-}(x,v):=-\lambda(x,-v).\]
It now follows that \(\gamma_{x,-v}^{-}(t)=\gamma_{x,v}(-t)\) where \(\gamma_{x,-v}^{-}\) is the unique \(\lambda^{-}\)-geodesic with initial data \((x,-v)\). We call \(\lambda^{-}\) the _dual_ of \(\lambda\). This time-reversibility property can be checked by substituting \(\gamma_{x,-v}^{-}(t):=\gamma_{x,v}(-t)\) into the \(\lambda^{-}\)-geodesic equation and using the fact that \(\gamma_{x,v}\) solves the \(\lambda\)-geodesic equation. In fact,
\[\nabla_{\dot{\gamma}^{-}}\dot{\gamma}^{-}|_{t=s} =\lambda^{-}(\gamma(-s),\frac{d}{dt}(\gamma(-t))|_{t=s})i\left[ \frac{d}{dt}(\gamma(-t))|_{t=s}\right]\] \[=-\lambda(\gamma(-s),-(-\dot{\gamma}(-s)))i(-\dot{\gamma}(-s))\] \[=\lambda(\gamma(-s),\dot{\gamma}(-s))i(\dot{\gamma}(-s))\]
and in local coordinates
\[\nabla_{\dot{\gamma}}\dot{\gamma}|_{t=-s} =\ddot{\gamma}^{l}(-s)+\Gamma^{l}_{jk}(\gamma(-s))\dot{\gamma}^{j} (-s)\dot{\gamma}^{k}(-s)\] \[=\nabla_{\dot{\gamma}^{-}}\dot{\gamma}^{-}|_{t=s}\]
where we write simply \(\gamma=\gamma_{x,v}\) and \(\gamma^{-}=\gamma_{x,-v}^{-}\). Since \(\nabla_{\dot{\gamma}}\dot{\gamma}=\lambda(\gamma,\dot{\gamma})i\dot{\gamma}\) holds for all times \(s\) in the maximal domain of the \(\lambda\)-geodesic \(\gamma_{x,v}\), we can conclude that \(\gamma^{-}\) is a \(\lambda^{-}\)-geodesic. On the other hand, \(\dot{\gamma}^{-}(0)=-v\) and \(\gamma^{-}(0)=x\), which justifies writing \(\gamma^{-}=\gamma_{x,-v}^{-}\).
The generator of the dual \(\lambda\)-geodesic flow \(\phi_{t}^{-}\) is simply given by \(F^{-}:=X+\lambda^{-}V\). Additionally, it is worth noting that \((\lambda^{-})^{-}=\lambda\). We will use the dual flow to establish regularity results for the solutions of a broken transport equation in Sections 3.2 and 3.4. For the sake of completeness, we will discuss in Section A.4 the curvature and signed curvature of \(\lambda\) and \(\lambda^{-}\).
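Two special cases illustrate the dual (our computation from the definition): for a magnetic \(\lambda(x,v)=\tilde{\lambda}(x)\) one gets \(\lambda^{-}(x,v)=-\tilde{\lambda}(x)\), recovering the sign flip \(-\Omega\) of Section 2.3.1, while for a thermostatic \(\lambda(x,v)=\langle E(x),iv\rangle\),
\[\lambda^{-}(x,v)=-\lambda(x,-v)=-\langle E(x),-iv\rangle=\langle E(x),iv\rangle=\lambda(x,v),\]
which is consistent with the time-reversibility of thermostatic flows noted above.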
### Lemmas for twisted geodesic flows
In the following proposition, we provide a generalized version of the Pestov identity for the generators of twisted geodesic flows. Similar identities were proved earlier in [14, Theorem 3.3] for closed Riemannian surfaces, in [13, Lemma 8] for surfaces with boundary under the condition \(\lambda\equiv 0\), and for Finslerian surfaces in terms of Lie derivatives on the boundary [1, Theorem 2.3]. A detailed proof is given in Appendix A.1.
**Proposition 2.1** (Generalized Pestov identity).: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). If \(u\in C^{2}(SM)\), then we have_
\[\begin{split}\|Pu\|_{SM}^{2}&=\|\widetilde{P}u\|_{SM}^{2}-(K_{\lambda}Vu,Vu)_{SM}+\|Fu\|_{SM}^{2}\\ &\quad-(\langle v_{\perp},\nu\rangle Fu,Vu)_{\partial SM}-(\langle v,\nu\rangle(V(\lambda))Vu,Vu)_{\partial SM}\\ &\quad+(\langle v,\nu\rangle X_{\perp}u,Vu)_{\partial SM}.\end{split} \tag{2.13}\]
We say that a vector field \(J\) along a \(\lambda\)-geodesic \(\gamma\) is a \(\lambda\)-Jacobi field if it is a variation field of \(\gamma\) through \(\lambda\)-geodesics, i.e. \(J(t)=\partial_{s}\gamma_{s}(t)|_{s=0}\) where \(\gamma_{s}(t)\) is a smooth one-parameter family of \(\lambda\)-geodesics with \(\gamma=\gamma_{0}\) (for further details we refer to Section A.2). We will next state a useful estimate on the growth rate of \(\lambda\)-Jacobi fields. On compact Riemannian surfaces the norm of a Jacobi field and its covariant derivative grow at most exponentially (see e.g. [13, Lemma 10]). Such inequalities are useful in studying the stability of geodesics and their relation to the curvature of the manifold. In the following lemma, we establish a similar result for the Jacobi equation associated with \(\lambda\)-geodesics. A detailed proof is given at the end of Section A.2.
**Lemma 2.2**.: _Let \((M,g)\) be a compact Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). Let \(J\) be a \(\lambda\)-Jacobi field defined along a unit speed \(\lambda\)-geodesic \(\gamma:[a,b]\to M\). Then the following growth estimate holds for all \(t\in[a,b]\)_
\[|J(t)|^{2}+|D_{t}J(t)|^{2}\leq e^{Ct}\left(|J(a)|^{2}+|D_{t}J(a)|^{2}\right), \tag{2.14}\]
_where \(C\) is a uniform constant depending only on \(M,g\) and \(\lambda\)._
### Even and odd decomposition with respect to the reflection map
Let \((M,g)\) be a Riemannian surface with smooth boundary. We define the reflection map \(\rho:\partial SM\to\partial SM\) by
\[\rho(x,v)=(x,v-2\langle v,\nu\rangle_{g}\nu)\,.\]
We denote by \(\rho^{*}\) the pull back of \(\rho.\) The even and odd parts of \(u:SM\to\mathbb{R}\) with respect to the reflection map \(\rho\) are defined by the formula
\[u_{e}=\frac{1}{2}(u+u\circ\rho),\qquad u_{o}=\frac{1}{2}(u-u\circ\rho).\]
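For later use we record the immediate consequences of \(\rho\circ\rho=\mathrm{Id}\):
\[u=u_{e}+u_{o},\qquad u_{e}\circ\rho=u_{e},\qquad u_{o}\circ\rho=-u_{o}.\]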
We will next state a simple lemma related to the reflection and rotation maps; we omit the proof.
**Lemma 2.3**.: _Let \((M,g)\) be a Riemannian surface with smooth boundary. The reflection and the rotation maps satisfy_
1. \(i^{-1}=-i\)_;_
2. \(\rho^{-1}=\rho\)_;_
3. \(i\rho i=\rho\)_._
The boundary operators \(\kappa_{\lambda}\) and \(\eta_{\lambda}\), as defined in (2.9) and (2.10), satisfy the following simple identities. These formulas will later allow us to simplify the boundary terms in (2.13).
**Lemma 2.4**.: _Let \((M,g)\) be a Riemannian surface with smooth boundary. Then \(\kappa_{\lambda}\) and \(\eta_{\lambda}\) satisfy the following properties_
1. \(\kappa_{\lambda\circ\rho}=\kappa_{\lambda}\circ\rho\)_;_
2. \(\eta_{\lambda\circ\rho}=\eta_{\lambda}\circ\rho\)_;_
3. \(\kappa_{\lambda_{e}}=\left(\kappa_{\lambda}\right)_{e}\) _and_ \(\eta_{\lambda_{e}}=\left(\eta_{\lambda}\right)_{e}\)_;_
4. \(\rho^{*}\left(\kappa_{\lambda_{e}}\right)=\kappa_{\lambda_{e}}\) _and_ \(\rho^{*}\left(\eta_{\lambda_{e}}\right)=\eta_{\lambda_{e}}\)_._
The proof of Lemma 2.4 is given in Appendix A.5.
## 3. Transport equation for the broken \(\lambda\)-geodesics
### Broken ray transforms and the transport equation
Let \((M,g)\) be an orientable, compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). We assume that \(\partial M\) can be decomposed into the union of two relatively open disjoint subsets \(\mathcal{E}\) and \(\mathcal{R}\). In particular, one could think of \(M\) as \(M=\hat{M}\setminus O\), where \(\hat{M}\) is a larger manifold containing \(M\) and \(O\) is an open obstacle. In this case, \(\mathcal{E}=\partial\hat{M}\) and \(\mathcal{R}=\partial O\) are the outer and inner boundaries of \(M\). We call \(\mathcal{E}\) the _emitter_ and \(\mathcal{R}\) the _reflector_ of \(M.\) In Section 2.4, we denoted by \(\phi_{t}\) the usual \(\lambda\)-geodesic flow and by \(\phi_{t}^{-}\) its dual flow. By abuse of notation, we now continue to write the same notation \(\phi_{t}\) for the broken \(\lambda\)-geodesic flow and \(\phi_{t}^{-}\) for its dual flow. For any \((x,v)\in SM\), we define the _forward_ and _dual travel times_ by
\[\tau_{x,v}:=\inf\{\,t\geq 0\,;\,\phi_{t}(x,v)\in\mathcal{E}\,\},\quad\tau_{x,v}^ {-}:=\inf\{\,t\geq 0\,;\,\phi_{t}^{-}(x,v)\in\mathcal{E}\,\}.\]
We note that for a typical \(\lambda\in C^{\infty}(SM)\) it actually holds that \(\tau_{x,v}\neq\tau_{x,v}^{-}\) for most \((x,v)\in SM\) since the twisted geodesic flows are not reversible. On the other hand, the maximal domain of \(\gamma_{x,v}\) is \([-\tau_{x,-v}^{-},\tau_{x,v}]\), and that of \(\gamma_{x,-v}^{-}\) is \([-\tau_{x,v},\tau_{x,-v}^{-}]\). Let \(\pi:SM\to M\) be the projection map, \(\pi(x,v)=x\). We define
\[\pi^{-1}\mathcal{E}:=\{(x,v):x\in\mathcal{E},\ v\in S_{x}M\},\quad\pi^{-1} \mathcal{R}:=\{(x,v):x\in\mathcal{R},\ v\in S_{x}M\}.\]
Let us denote by \(\rho:\pi^{-1}\mathcal{R}\to\pi^{-1}\mathcal{R}\) the _reflection map_ and define by the law of reflection
\[\rho(x,v):=\left(x,v-2\left\langle v,\nu\right\rangle_{g}\nu(x)\right).\]
Note that \(\rho\) is an involution in the sense that \(\rho\circ\rho=\mathrm{Id}\). Here and subsequently, \(\gamma(t_{0}-)\) stands for the left-hand limit of \(\gamma\) at some point \(t_{0}\) and \(\gamma(t_{0}+)\) denotes the right-hand limit of \(\gamma\) at \(t_{0}\), which are defined by \(\gamma(t_{0}-)=\lim_{t\to t_{0}^{-}}\gamma(t)\) and \(\gamma(t_{0}+)=\lim_{t\to t_{0}^{+}}\gamma(t)\). Similarly, \(\dot{\gamma}(t_{0}-)=\lim_{t\to t_{0}^{-}}\dot{\gamma}(t)\) and \(\dot{\gamma}(t_{0}+)=\lim_{t\to t_{0}^{+}}\dot{\gamma}(t)\).
**Definition 3.1**.: _Let \((M,g)\) be a Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint
subsets. A curve \(\gamma\) on \(M\) is called a _broken \(\lambda\)-ray_ if \(\gamma\) is a \(\lambda\)-geodesic in \(\operatorname{int}(M)\) and reflects on \(\mathcal{R}\) according to the law of reflection_
\[\rho(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))=(\gamma(t_{0}+),\dot{\gamma}(t_{0}+))\]
_whenever there is a reflection at \(\gamma(t_{0})\in\mathcal{R}\)._
The broken \(\lambda\)-ray transform of \(f\in C^{2}(SM)\) is defined by
\[If(x,v)=\int_{0}^{\tau_{x,v}}f(\phi_{t}(x,v))dt,\quad(x,v)\in\pi^{-1}\mathcal{ E}, \tag{3.1}\]
where \(\phi_{t}(x,v)=(\gamma_{x,v}(t),\dot{\gamma}_{x,v}(t))\) is the broken \(\lambda\)-geodesic flow. We next move towards studying the injectivity of the broken \(\lambda\)-ray transform, that is: is it possible to determine an unknown \(f\) from the knowledge of its integrals (3.1) over maximal broken \(\lambda\)-rays? To this end, we first reduce the integral equation (3.1) to a certain partial differential equation. Given any \(f\in C^{2}(SM)\), define
\[u(x,v):=\int_{0}^{\tau_{x,v}}f(\phi_{t}(x,v))dt,\quad(x,v)\in SM. \tag{3.2}\]
Notice that the exit time \(\tau_{x,v}\) is smooth near \((x,v)\in SM\) whenever the broken ray \(\gamma_{x,v}\) reflects and exits transversely. By a simple application of the fundamental theorem of calculus, together with the regularity properties of the exit time, we deduce from (3.2) that \(u\) satisfies the transport equation
\[\begin{cases}(X+\lambda V)u=-f,&\text{ in }\operatorname{int}(SM),\\ u|_{\pi^{-1}\mathcal{E}}=If,&u|_{\pi^{-1}\mathcal{R}}=u\circ\rho|_{\pi^{-1} \mathcal{R}}.\end{cases} \tag{3.3}\]
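For the reader's convenience, let us spell out the first equation in (3.3) (a standard argument): wherever \(\tau\) is differentiable along the flow, one has \(\tau_{\phi_{s}(x,v)}=\tau_{x,v}-s\) for small \(s\geq 0\), hence
\[u(\phi_{s}(x,v))=\int_{0}^{\tau_{x,v}-s}f(\phi_{t+s}(x,v))dt=\int_{s}^{\tau_{x,v}}f(\phi_{t}(x,v))dt,\]
and differentiating at \(s=0\) gives \((X+\lambda V)u=Fu=-f\).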
We need to make some assumptions on the geometry of \(M\) and its reflecting boundary parts to make the injectivity problem more approachable.
**Remark 3.2**.: _If \(\lambda\circ\rho=\lambda\) on \(\mathcal{R}\), then we have_
\[\left(\kappa_{\lambda}\right)_{o}+\left(\eta_{\lambda}\right)_{o}=0,\]
_where \(\left(\kappa_{\lambda}\right)_{o}\) and \(\left(\eta_{\lambda}\right)_{o}\) are odd parts of the functions \(\kappa_{\lambda}\) and \(\eta_{\lambda}\) respectively. This assumption holds, for example, when \(\lambda\) is independent of the direction \(v\) on \(\pi^{-1}\mathcal{R}\)._
**Definition 3.3**.: _Let \((M,g)\) be a Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). We say that a relatively open subset \(\mathcal{R}\subset\partial M\) of the boundary has an admissible \(\lambda\)-curvature if the following inequality holds:_
\[(\kappa_{\lambda})_{e}(x,v)+(\eta_{\lambda})_{e}(x,v)\leq 0\quad\text{ for all }(x,v)\in\pi^{-1}\mathcal{R}.\]
If \(V(\lambda)\) vanishes on \(\pi^{-1}\mathcal{R}\), i.e. \(\lambda\) is only a function of the base point on \(\mathcal{R}\), and \(\mathcal{R}\) is strictly \(\lambda\)-concave at any \(x\in\mathcal{R}\), then \(\mathcal{R}\) has an admissible \(\lambda\)-curvature.
From Corollary A.14 and Remark A.15, we have
\[(\kappa_{\lambda^{-}})_{e}(x,v) +(\eta_{\lambda^{-}})_{e}(x,v)=\kappa_{\lambda_{e}^{-}}(x,v)+ \eta_{\lambda_{e}^{-}}(x,v)\] \[=\kappa_{\lambda_{e}}(x,-v)+\eta_{\lambda_{e}}(x,-v)=(\kappa_{ \lambda})_{e}(x,-v)+(\eta_{\lambda})_{e}(x,-v),\]
i.e., an obstacle \(\mathcal{R}\) has admissible \(\lambda\)-curvature if and only if \(\mathcal{R}\) has admissible \(\lambda^{-}\)-curvature. If \(V(\lambda_{e})|_{\mathcal{R}}=0\), then the condition that the obstacle \(\mathcal{R}\) has admissible \(\lambda\)-curvature is equivalent to the strict \(\lambda\)-concavity of \(\mathcal{R}\).
In [16, Theorem 1], it was proved that one can recover an unknown function \(f\) from the knowledge of its geodesic broken ray transform \(If\). There it is assumed that the surface is nontrapping and has nonpositive Gaussian curvature, the reflecting part of the boundary is strictly concave, and the broken rays allow at most one reflection with \(|\langle\dot{\gamma},\nu\rangle|<a\). See also
[12, Definition 1] for similar assumptions used to study the broken ray transforms in three and higher dimensions. We now define a similar class of admissible Riemannian surfaces with broken \(\lambda\)-geodesic flows.
**Definition 3.4**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint subsets. We call a dynamical system \((M,g,\lambda,\mathcal{E})\) admissible if_
1. _the emitter_ \(\mathcal{E}\) _is strictly_ \(\lambda\)_-convex in the sense of Definition_ A.11_;_
2. _the obstacle_ \(\mathcal{R}\) _has admissible_ \(\lambda\)_-curvature in the sense of Definition_ 3.3_;_
3. _the_ \(\lambda\)_-curvature_ \(K_{\lambda}\) _of_ \((M,g,\lambda)\) _is nonpositive;_
4. _the broken_ \(\lambda\)_-geodesic flow is nontrapping: there exists_ \(L>0\) _such that_ \(\tau_{x,v},\tau_{x,v}^{-}<L\) _for any_ \((x,v)\in SM\)_;_
5. _there exists_ \(a>0\) _such that every broken_ \(\lambda\)_-ray_ \(\gamma\) _has at most one reflection at_ \(\mathcal{R}\) _with_ \(|\langle\nu,\dot{\gamma}\rangle|<a\)_._
### Dual transport equation
Let us define the _dual transport equation_ of (3.3) as
\[\begin{cases}(X+\lambda^{-}V)u=-\tilde{f},&\text{ in }\operatorname{int}(SM), \\ u|_{\pi^{-1}\mathcal{E}}=I^{-}\tilde{f},&u|_{\pi^{-1}\mathcal{R}}=u\circ\rho|_{ \pi^{-1}\mathcal{R}},\end{cases} \tag{3.4}\]
where \(\tilde{f}(x,v):=f(x,-v)\) and \(I^{-}\) denotes the broken ray transform related to the \(\lambda^{-}\)-geodesic flow. For nontrapping broken \(\lambda\)-geodesic flows, we define the _scattering relation_\(\alpha:\partial SM\to\partial SM\) by
\[\alpha(x,v):=\phi_{\tau_{x,v}}(x,v).\]
**Lemma 3.5**.: _Let \((M,g)\) be a compact nontrapping Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). If \(f\in C^{2}(SM)\) and \(If=0\), then_
\[u(x,v)=-u^{-}(x,-v)\]
_where \(u\) is the unique solution of the transport equation (3.3) and \(u^{-}\) is the unique solution of the dual transport equation (3.4)._
Proof.: Let us consider \((x,v)\in SM\). Now the union of the curves \(\gamma_{x,v}\) and \(\gamma_{x,-v}^{-}\) forms a maximal broken \(\lambda\)-geodesic (cf. Section 2.4) with its endpoints on \(\partial M\). Since \(If=0\) by assumption, we can deduce that
\[0=\int_{0}^{\tau_{x,v}}f(\gamma_{x,v}(t),\dot{\gamma}_{x,v}(t))dt+\int_{0}^{ \tau_{x,-v}^{-}}f(\gamma_{x,-v}^{-}(t),-\dot{\gamma}_{x,-v}^{-}(t))dt. \tag{3.5}\]
By the definition \(u^{-}\) is the unique solution of (3.4). Note that
\[If(x,v)=I^{-}\tilde{f}((r\circ\alpha)(x,v))\]
where \(\alpha\) is the scattering relation and \(r(x,v)=(x,-v)\) is the reversion map. This implies that \(u^{-}|_{\pi^{-1}\mathcal{E}}=I^{-}\tilde{f}=0\). Since \(X+\lambda^{-}V\) is the generator of \(\lambda^{-}\)-geodesic flow \(\phi^{-}\), we get that
\[u^{-}(x,v)=\int_{0}^{\tau_{x,v}^{-}}\tilde{f}(\gamma_{x,v}^{-}(t),\dot{\gamma }_{x,v}^{-}(t))dt=\int_{0}^{\tau_{x,v}^{-}}f(\gamma_{x,v}^{-}(t),-\dot{\gamma }_{x,v}^{-}(t))dt. \tag{3.6}\]
The formulas (3.5) and (3.6) imply that
\[u(x,v)+u^{-}(x,-v)=0,\]
which completes the proof.
**Remark 3.6**.: _Lemma 3.5 also clearly holds in the setting of admissible dynamical systems \((M,g,\lambda,\mathcal{E})\)._
### Jacobi fields for broken \(\lambda\)-geodesics
In this subsection, we give a brief exposition of Jacobi fields along broken \(\lambda\)-geodesic flows. In Lemma A.8, we show that Jacobi fields along a \(\lambda\)-geodesic flow can be realised as the variation field of a unit speed \(\lambda\)-geodesic variation of \(\gamma\). Here, we will generalize similar properties of Jacobi fields to the case of broken \(\lambda\)-geodesic flows. The crucial point is to understand how the Jacobi fields behave at reflection points. The Jacobi fields along broken rays have been studied in the case of \(\lambda=0\) in [16, Section 5] and we follow some of the techniques from this article. Let \(x_{0}\in\partial M\) and \(\nu\) be the inward unit normal to it. We define a map \(\Phi_{\zeta}:T_{x_{0}}M\to T_{x_{0}}M\) by setting
\[\Phi_{\zeta}\xi=2\left(\left\langle\nabla_{\varphi_{\zeta}\xi}\nu,\zeta \right\rangle\nu+\left\langle\nu,\zeta\right\rangle\nabla_{\varphi_{\zeta}\xi }\nu\right),\]
for any vector \(\zeta\in T_{x_{0}}M\) that is not orthogonal to \(\nu\), where the map \(\varphi_{\zeta}:T_{x_{0}}M\to T_{x_{0}}M\) is defined by
\[\varphi_{\zeta}\xi=\xi-\frac{\left\langle\xi,\nu\right\rangle}{\left\langle \zeta,\nu\right\rangle}\zeta.\]
Since \(\varphi_{\zeta}\xi\perp\nu\), the derivative \(\nabla_{\varphi_{\zeta}\xi}\nu\) is well defined. To analyze Jacobi fields at reflection points, we first make an assumption on \(\lambda\) on the reflecting part of the boundary. In particular, we require that \(\lambda\circ\rho=\beta\lambda\) on \(\pi^{-1}\mathcal{R}\), where \(\beta\in L^{\infty}(\pi^{-1}\mathcal{R})\). Replacing \((x,v)\) by \(\rho(x,v)\) we have
\[\lambda\circ\rho(\rho(x,v))=\beta(\rho(x,v))\lambda(\rho(x,v))\]
and
\[\lambda(x,v)=\beta(\rho(x,v))\lambda(\rho(x,v)),\]
which gives the condition on \(\beta\) that \(1=\beta(x,v)\beta(\rho(x,v))\). For example, if we consider \(Z=\left\{(x,v):\lambda(x,v)=0\right\}=\left\{(x,v):\lambda\circ\rho(x,v)=0\right\}\), then we have a \(\beta\) given as follows
\[\beta(x,v)=\begin{cases}\frac{\lambda\circ\rho(x,v)}{\lambda(x,v)}&\text{ in }\pi^{-1}\mathcal{R}\setminus Z\\ 1&\text{ in }Z.\end{cases}\]
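One verifies directly (our check) that this \(\beta\) has the required properties: on \(\pi^{-1}\mathcal{R}\setminus Z\), using \(\rho\circ\rho=\mathrm{Id}\),
\[\beta(x,v)\beta(\rho(x,v))=\frac{\lambda(\rho(x,v))}{\lambda(x,v)}\cdot\frac{\lambda(x,v)}{\lambda(\rho(x,v))}=1,\]
while \(\rho(Z)=Z\), so both factors equal \(1\) on \(Z\); moreover \(\lambda\circ\rho=\beta\lambda\) holds trivially on \(Z\) since both sides vanish there.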
However, at the end of this section, we are able to establish suitable growth estimates for \(\lambda\)-Jacobi fields without this additional \(\beta\)-reflection condition. The benefit of the \(\beta\)-reflection condition is that it allows us to write down a clean reflection condition for the broken Jacobi fields.
**Definition 3.7**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint subsets and \(\lambda\circ\rho|_{\pi^{-1}\mathcal{R}}=\beta\lambda|_{\pi^{-1}\mathcal{R}}\) for some \(\beta\in L^{\infty}(\pi^{-1}\mathcal{R})\). Let \(\gamma\) be a broken \(\lambda\)-ray without tangential reflections. Then a vector field \(J\) along \(\gamma\) is a Jacobi field along \(\gamma\) if_
1. \(J\) _is a_ \(\lambda\)_-Jacobi field along the segments of_ \(\lambda\)_-geodesic_ \(\gamma\) _in_ \(\operatorname{int}(SM)\) _in the usual sense (cf. Section A.2);_
2. _if_ \(\gamma\) _has a reflection at_ \(\gamma\left(t_{0}\right)\in\mathcal{R}\)_, then the left and right limits of_ \(J\) _at_ \(t_{0}\) _are related via_ (3.7) \[\begin{split} J\left(t_{0}+\right)&=\rho J\left(t_{0} -\right),\text{ and }\\ D_{t}J\left(t_{0}+\right)&=\rho D_{t}J(t_{0}-)- \Phi_{\dot{\gamma}\left(t_{0}-\right)}J(t_{0}-)\\ &\hskip 142.26378pt-\left(\beta\left(\gamma\left(t_{0}-\right), \dot{\gamma}\left(t_{0}-\right)\right)+1\right)\frac{\left\langle J\left(t_{0}- \right),\nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle} \rho D_{t}I(t_{0}-),\end{split}\] _where_ \(I(t)=\dot{\gamma}(t)\)_._
It is clear that if \(\left(\lambda\circ\rho\right)=\beta\lambda\) on \(\pi^{-1}\mathcal{R}\), then we have \(\left(\lambda^{-}\circ\rho\right)=\frac{1}{\beta}\lambda^{-}\) on \(\pi^{-1}\mathcal{R}\) and hence the identity (3.7) is equivalent to
\[J\left(t_{0}-\right)=\rho J\left(t_{0}+\right),\text{ and }\] \[D_{t}J\left(t_{0}-\right)=\rho D_{t}J(t_{0}+)-\Phi_{\dot{\gamma} \left(t_{0}+\right)}J(t_{0}+)\] \[\qquad\qquad-\left(\frac{1}{\beta\left(\gamma\left(t_{0}+\right),\dot{\gamma}\left(t_{0}+\right)\right)}+1\right)\frac{\left\langle J\left(t_ {0}+\right),\nu\right\rangle}{\left\langle I\left(t_{0}+\right),\nu\right\rangle }\rho D_{t}I(t_{0}+).\]
**Remark 3.8**.: _In the case of usual geodesics it holds that \(D_{t}I=D_{t}\dot{\gamma}=0\)._
In [16, Lemma 12], it has been pointed out that if none of the broken geodesic rays \(\gamma_{s}\) have tangential reflections, then \(J(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}\) is a Jacobi field along \(\gamma_{0}\), where \(\gamma_{s}\) are the variations of \(\gamma_{0}\). Conversely, any Jacobi field can be understood as a variation of the broken geodesic \(\gamma_{0}.\) We can now state an analogue of [16, Lemma 12] in the case of broken \(\lambda\)-rays, where \(\lambda\in C^{\infty}(SM)\).
**Lemma 3.9**.: _Let \(\left(M,g\right)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint subsets. Let \(\gamma_{s}:\left[0,L\right]\to M\) be the broken \(\lambda\)-ray starting at \(\left(x_{s},v_{s}\right)\) where the parametrization \(\left(-\varepsilon,\varepsilon\right)\ni s\mapsto\left(x_{s},v_{s}\right)\in \operatorname{int}SM\) is a smooth map. If the broken \(\lambda\)-rays \(\gamma_{s}\) do not have tangential reflections and \(\left(\lambda\circ\rho\right)=\beta\lambda\) on \(\pi^{-1}\mathcal{R}\) where \(\beta,\frac{1}{\beta}\in L^{\infty}(\pi^{-1}\mathcal{R})\), then_
\[J(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}\]
_is a Jacobi field along the broken \(\lambda\)-ray \(\gamma_{0}\)._
Proof.: By Lemma A.1, it follows that \(J\) satisfies the \(\lambda\)-Jacobi equation on each \(\lambda\)-geodesic segment between the reflection points. Therefore, it is sufficient to prove that \(J\) satisfies (3.7) at the reflection points. Let \(\gamma:=\gamma_{0}\) be a broken \(\lambda\)-ray that does not have a tangential reflection at the reflection point \(t=t_{0}\). After possibly shrinking the domain of definition, we may assume that \(t=t_{0}\) is the only reflection point of \(\gamma_{0}\) and that each of the broken rays \(\gamma_{s}\) has at most one reflection.
We begin by proving the lemma for a special family of curves corresponding to the tangential Jacobi fields. Let us consider a family of broken \(\lambda\)-geodesics \(\gamma_{s}(t)=\gamma(t+s)\) with the starting point and velocity given by \(\left(\gamma_{s}(0),\dot{\gamma}_{s}(0)\right)=\left(x_{s},v_{s}\right)\). We denote by \(I\) the variation field corresponding to \(\gamma_{s}\). Notice that
\[I(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}=\dot{\gamma}(t)\]
is a \(\lambda\)-Jacobi field except at the reflection point, see Lemma A.1.
We now analyze the behaviour of \(I\) at the reflection point \(t_{0}\). By the definition of the broken \(\lambda\)-ray, we have \(I\left(t_{0}+\right)=\rho I\left(t_{0}-\right)\). Since \(\gamma_{s}\) satisfies the \(\lambda\)-geodesic equation, we have \(D_{t}I(t)=\lambda(\gamma(t),\dot{\gamma}(t))i\dot{\gamma}(t)\) outside the reflection point, and this leads to
\[D_{t}I\left(t_{0}+\right) =\lambda(\gamma(t_{0}+),\dot{\gamma}(t_{0}+))i\dot{\gamma}\left(t _{0}+\right)\] \[=\lambda\circ\rho(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))i\rho\dot{ \gamma}\left(t_{0}-\right)\] \[=\beta(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))\lambda(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))i\rho\dot{\gamma}\left(t_{0}-\right).\]
Applying Lemma 2.3 to the above identity, we obtain
\[-i\rho iD_{t}I\left(t_{0}+\right) =\beta(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))\lambda(\gamma(t_{0}- ),\dot{\gamma}(t_{0}-))i\dot{\gamma}\left(t_{0}-\right)\] \[=\beta(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))D_{t}I(t_{0}-)\]
which implies
\[D_{t}I\left(t_{0}+\right) =-i\rho i\beta(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))D_{t}I(t_{0}-)\] \[=-\beta(\gamma(t_{0}-),\dot{\gamma}(t_{0}-))\rho D_{t}I(t_{0}-).\]
Hence the vector field \(I\) satisfies (3.7).
Now it remains to prove the lemma in the case of general variations of broken \(\lambda\)-rays. Let \(J\) denote the vector field along \(\gamma\), as given in the statement of the lemma. In view of the proof of [16, Lemma 12], we define another vector field \(\tilde{J}\) along \(\gamma\) by setting
\[\tilde{J}(t)=J(t)-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{ \left\langle I\left(t_{0}-\right),\nu\right\rangle}I(t).\]
Since \(\gamma(t)\) does not have a tangential reflection, we see that \(\left\langle I\left(t_{0}-\right),\nu\right\rangle=\left\langle\dot{\gamma} \left(t_{0}-\right),\nu\right\rangle\neq 0\).
Note that \(I(t)\) and \(J(t)\) satisfy the \(\lambda\)-Jacobi equation except at the point of reflection, see Lemma A.1. Therefore, by the linearity of the \(\lambda\)-Jacobi equation, the vector field \(\tilde{J}\) must satisfy the \(\lambda\)-Jacobi equation except at the reflection points. Similar to Lemma A.8, there exists a corresponding family of broken \(\lambda\)-geodesic variations associated with \(\tilde{J}\), say \(\tilde{J}(t)=\partial_{s}\gamma_{s}(t)|_{s=0}=\partial_{s}\gamma_{x_{s},v_{s}}(t)|_{s=0}\). One can make a change of order \(s^{2}\) to \((x_{s},v_{s})\) without changing \(\tilde{J}\). Since \(\tilde{J}(t_{0}-)\perp\nu\) and \(\gamma_{0}\) arrives at \(\mathcal{R}\) transversely at \(t_{0}\), we can introduce a second order change to \((x_{s},v_{s})\) such that \(\gamma_{s}(t_{0})\in\mathcal{R}\) for all \(s\in(-\epsilon^{\prime},\epsilon^{\prime})\) after choosing a sufficiently small \(\epsilon^{\prime}\in(0,\epsilon)\) (cf. [16, p. 396]). This variation after the second order change is explicitly given by \(s\mapsto\phi_{\tau_{s}}(x_{s},v_{s})\) where \(\tau_{s}\) is the unique element in
\[\left\{t\in\left[t_{0}-\delta,t_{0}+\delta\right];\,\phi_{t_{0}+t}(x_{s},v_{s} )\in\pi^{-1}\mathcal{R}\,\right\}\]
for a sufficiently small \(\delta>0\) depending upon the choice of \(\epsilon^{\prime}\). One can check that \(\tau_{0}=0\) and \(\partial_{s}\tau_{s}|_{s=0}=0\) using that \(\gamma_{0}(t_{0})\in\mathcal{R}\) and \(\tilde{J}(t_{0}-)\perp\nu\). Since the family of curves \(s\mapsto\gamma_{s}(t_{0}-\delta^{\prime})\), \(\delta^{\prime}>0\) sufficiently small, arrives in the limit \(\delta^{\prime}\to 0\) tangentially to \(\mathcal{R}\) at \(s=0\), it follows that \(\partial_{s}\tau_{s}|_{s=0}=0\).
We may assume without loss of generality that all \(\gamma_{s}\) have their unique reflection at \(t_{0}\). This implies that \(\gamma_{s}(t_{0})=\gamma_{x_{s},v_{s}}(t_{0})\) is smooth at \(s=0\) and
\[\tilde{J}\left(t_{0}+\right)=\tilde{J}\left(t_{0}-\right)=\rho\tilde{J}\left( t_{0}-\right).\]
By (A.6), we have
\[D_{t}\tilde{J}(t_{0}-)=D_{t}\partial_{s}\gamma_{s}(t_{0})\big{|}_{s=0,t=t_{0}- }=\left.D_{s}\dot{\gamma}_{s}(t_{0}-)\right|_{s=0} \tag{3.8}\]
and
\[D_{t}\tilde{J}(t_{0}+)=D_{t}\partial_{s}\gamma_{s}(t)\big{|}_{s=0,t=t_{0}+}= \left.D_{s}\dot{\gamma}_{s}(t_{0}+)\right|_{s=0}=\left.D_{s}\rho_{s}\dot{ \gamma}_{s}(t_{0}-)\right|_{s=0}. \tag{3.9}\]
Let us denote by \(y_{s}=\gamma_{s}(t_{0})\), \(u_{s}=\dot{\gamma}_{s}(t_{0}-)\) and \(\nu(y_{s})=\nu_{s}\). Thus we have
\[D_{s}\left(\rho_{s}u_{s}\right)= D_{s}\left(u_{s}-2\left\langle u_{s},\nu_{s}\right\rangle\nu_{s}\right)\] \[= \left[D_{s}u_{s}-2\left\langle D_{s}u_{s},\nu_{s}\right\rangle \nu_{s}\right]-2\left(\left\langle u_{s},D_{s}\nu_{s}\right\rangle\nu_{s}+ \left\langle u_{s},\nu_{s}\right\rangle D_{s}\nu_{s}\right) \tag{3.10}\] \[= \left[\rho_{s}\left(D_{s}u_{s}\right)\right]-2\left(\left\langle u _{s},\nabla_{\partial_{s}y_{s}}\nu_{s}\right\rangle\nu_{s}+\left\langle u_{s}, \nu_{s}\right\rangle\nabla_{\partial_{s}y_{s}}\nu_{s}\right).\]
Using the fact that \(\tilde{J}(t_{0}-)\perp\nu\), we obtain
\[\varphi_{\dot{\gamma}(t_{0}-)}\tilde{J}\left(t_{0}-\right)=\tilde{J}\left(t_{0} -\right)-\frac{\left\langle\tilde{J}\left(t_{0}-\right),\nu\right\rangle}{ \left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\dot{\gamma}\left(t _{0}-\right)=\tilde{J}\left(t_{0}-\right).\]
We also have
\[2\left(\left\langle u_{s},\nabla_{\partial_{s}y_{s}}\nu_{s}\right\rangle \nu_{s}+\left\langle u_{s},\nu_{s}\right\rangle\nabla_{\partial_{s}y_{s}}\nu_{s }\right)|_{s=0}\] \[\qquad\qquad=2\left(\left\langle\dot{\gamma}\left(t_{0}-\right), \nabla_{\tilde{J}(t_{0}-)}\nu\right\rangle\nu+\left\langle\dot{\gamma}\left(t_ {0}-\right),\nu\right\rangle\nabla_{\tilde{J}(t_{0}-)}\nu\right)\] \[\qquad\qquad=\Phi_{\dot{\gamma}(t_{0}-)}\tilde{J}(t_{0}-). \tag{3.11}\]
Taking \(s=0\) in (3.10), and combining (3.8) with (3.9) and (3.11), we get
\[D_{t}\tilde{J}(t_{0}+)=\rho D_{t}\tilde{J}(t_{0}-)-\Phi_{\dot{\gamma}(t_{0}-)} \tilde{J}(t_{0}-) \tag{3.12}\]
Finally, we have
\[J(t_{0}+) =\tilde{J}(t_{0}+)+\frac{\left\langle J\left(t_{0}-\right),\nu \right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}I(t_{0}+)\] \[=\rho\tilde{J}(t_{0}-)+\frac{\left\langle J\left(t_{0}-\right), \nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\rho I(t _{0}-)=\rho J(t_{0}-).\]
Since
\[\varphi_{\dot{\gamma}(t_{0}-)}\frac{\left\langle J\left(t_{0}- \right),\nu\right\rangle}{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu \right\rangle}\dot{\gamma}(t_{0}-) =\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{ \left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\dot{\gamma}(t_ {0}-)-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle \dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\frac{\left\langle\dot{\gamma }(t_{0}-),\nu\right\rangle}{\left\langle\dot{\gamma}(t_{0}-),\nu\right\rangle} \dot{\gamma}(t_{0}-)\] \[=0,\]
and \(\varphi_{\zeta}\zeta=0\), so that \(\nabla_{\varphi_{\zeta}\zeta}\nu=0\), it turns out that
\[\Phi_{\dot{\gamma}(t_{0}-)}\frac{\left\langle J\left(t_{0}-\right),\nu \right\rangle}{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle} \dot{\gamma}(t_{0}-)=\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle }{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\Phi_{\dot{ \gamma}(t_{0}-)}\dot{\gamma}(t_{0}-)=0. \tag{3.13}\]
Therefore
\[D_{t}J(t_{0}+) =D_{t}\tilde{J}(t_{0}+)+\frac{\left\langle J\left(t_{0}-\right), \nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}D_{t}I(t_ {0}+)\] \[=\rho D_{t}\tilde{J}(t_{0}-)-\Phi_{\dot{\gamma}(t_{0}-)}\tilde{J} (t_{0}-)-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle I \left(t_{0}-\right),\nu\right\rangle}\beta\left(\gamma\left(t_{0}-\right), \dot{\gamma}\left(t_{0}-\right)\right)\rho D_{t}I(t_{0}-)\] \[=\rho D_{t}J(t_{0}-)-\Phi_{\dot{\gamma}(t_{0}-)}\tilde{J}(t_{0}-)\] \[\qquad-\left(\beta\left(\gamma\left(t_{0}-\right),\dot{\gamma} \left(t_{0}-\right)\right)+1\right)\frac{\left\langle J\left(t_{0}-\right), \nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\rho D_{t }I(t_{0}-).\]
From (3.13), the above identity becomes
\[D_{t}J(t_{0}+) =\rho D_{t}J(t_{0}-)-\Phi_{\dot{\gamma}(t_{0}-)}J(t_{0}-)\] \[\qquad-\left(\beta\left(\gamma\left(t_{0}-\right),\dot{\gamma} \left(t_{0}-\right)\right)+1\right)\frac{\left\langle J\left(t_{0}-\right), \nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\rho D_{t }I(t_{0}-),\]
which is our desired conclusion.
Recall that
\[\Phi_{\zeta}\xi =2\left(\left\langle\nabla_{\varphi_{\zeta}\xi}\nu,\zeta \right\rangle\nu+\left\langle\nu,\zeta\right\rangle\nabla_{\varphi_{\zeta}\xi} \nu\right),\] \[\varphi_{\zeta}\xi =\xi-\frac{\left\langle\xi,\nu\right\rangle}{\left\langle\zeta, \nu\right\rangle}\zeta.\]
Since \(\varphi_{\zeta}\xi\perp\nu\), it follows that \(\nabla_{\varphi_{\zeta}\xi}\nu\) is well defined. We also have \(\nabla_{\varphi_{\zeta}\xi}\nu\perp\nu\). We now simplify the map \(\Phi_{\dot{\gamma}(t_{0}-)}J\left(t_{0}-\right)\). To do so, we first compute
\[\varphi_{\dot{\gamma}(t_{0}-)}J\left(t_{0}-\right)=J\left(t_{0}-\right)-\frac{ \left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle\dot{\gamma} \left(t_{0}-\right),\nu\right\rangle}\dot{\gamma}\left(t_{0}-\right).\]
By properties of covariant derivative along curves (cf. [18, Theorem 4.24]), we have
\[\nabla_{\varphi_{\dot{\gamma}(t_{0}-)}J(t_{0}-)}\nu=\nabla_{J(t_{0}-)-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\dot{\gamma}(t_{0}-)}\nu=\nabla_{J(t_{0}-)}\nu-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\nabla_{\dot{\gamma}(t_{0}-)}\nu.\]
Since
\[\left\langle\nabla_{\varphi_{\dot{\gamma}(t_{0}-)}J(t_{0}-)}\nu,\dot{\gamma} \left(t_{0}-\right)\right\rangle=\left\langle\nabla_{J(t_{0}-)}\nu,\dot{ \gamma}\left(t_{0}-\right)\right\rangle-\frac{\left\langle J\left(t_{0}- \right),\nu\right\rangle}{\left\langle\dot{\gamma}\left(t_{0}-\right),\nu \right\rangle}\langle\nabla_{\dot{\gamma}(t_{0}-)}\nu,\dot{\gamma}\left(t_{0 }-\right)\rangle,\]
we see that
\[\Phi_{\dot{\gamma}(t_{0}-)}J\left(t_{0}-\right) =2\langle\nabla_{J(t_{0}-)}\nu,\dot{\gamma}\left(t_{0}-\right) \rangle\nu-2\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left \langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\langle\nabla_{\dot{ \gamma}(t_{0}-)}\nu,\dot{\gamma}\left(t_{0}-\right)\rangle\nu\] \[\qquad+2\langle\nu,\dot{\gamma}\left(t_{0}-\right)\rangle\nabla_{ J(t_{0}-)}\nu-2\langle\nu,\dot{\gamma}\left(t_{0}-\right)\rangle\frac{ \left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle\dot{\gamma} \left(t_{0}-\right),\nu\right\rangle}\nabla_{\dot{\gamma}(t_{0}-)}\nu\] \[=2\langle\nabla_{J(t_{0}-)}\nu,\dot{\gamma}\left(t_{0}-\right) \rangle\nu-2\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left \langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle}\langle\nabla_{\dot{ \gamma}(t_{0}-)}\nu,\dot{\gamma}\left(t_{0}-\right)\rangle\nu\] \[\qquad+2\langle\nu,\dot{\gamma}\left(t_{0}-\right)\rangle s(J \left(t_{0}-\right))-2\left\langle J\left(t_{0}-\right),\nu\right\rangle s( \dot{\gamma}\left(t_{0}-\right)),\]
where \(s:T(\partial M)\to T(\partial M)\) is the shape operator of \(\partial M\subset M\) defined by setting \(s(X)=\nabla_{X}\nu\). The map \(\Phi_{\dot{\gamma}(t_{0}-)}J\left(t_{0}-\right)\) is linear in \(J\left(t_{0}-\right)\) and the shape operator is uniformly bounded on \(\pi^{-1}\mathcal{R}\). Thus we have
\[\Phi_{\dot{\gamma}(t_{0}-)}J\left(t_{0}-\right)=\left\langle\dot{\gamma}\left( t_{0}-\right),\nu\right\rangle^{-1}AJ\left(t_{0}-\right) \tag{3.14}\]
where the linear map \(A\) is uniformly bounded over set \(\pi^{-1}\mathcal{R}\) (cf. [19, p. 397]). Hence the reflection condition on a Jacobi field along a broken \(\lambda\)-ray is prescribed by
\[\begin{split}J\left(t_{0}+\right)&=\rho J\left(t_{0}-\right),\text{ and }\\ D_{t}J\left(t_{0}+\right)&=\rho D_{t}J\left(t_{0}-\right)-\left(\beta\left(\gamma\left(t_{0}-\right),\dot{\gamma}\left(t_{0}-\right)\right)+1\right)\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\rho D_{t}I\left(t_{0}-\right)\\ &\quad+\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle^{-1}A\tilde{J}\left(t_{0}-\right)\end{split} \tag{3.15}\]
where the field of linear maps \(A\) is uniformly bounded on \(\pi^{-1}\mathcal{R}\).
**Lemma 3.10**.: _Let \(\left(M,g\right)\) be a compact Riemannian surface with smooth boundary. Under the assumptions of Lemma 3.9, a Jacobi field \(J\) along a broken \(\lambda\)-ray satisfies_
\[\left|J\left(t_{0}+\right)\right|^{2}+\left|D_{t}J\left(t_{0}+\right)\right|^{2}\leq\frac{C}{\left|\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle\right|^{2}}\left(\left|J\left(t_{0}-\right)\right|^{2}+\left|D_{t}J\left(t_{0}-\right)\right|^{2}+\left|\dot{\gamma}(t_{0}-)\right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{0}-\right)\right|^{2}\right),\]
_at every reflection point \(t_{0}\), where \(C\) is a constant depending on \(M\), \(g\) and \(\lambda\)._
Proof.: From (3.15) and (3.14), we have
\[\left|J\left(t_{0}+\right)\right|=\left|\rho J\left(t_{0}-\right)\right|=\left|J \left(t_{0}-\right)\right|,\]
and
\[\left|D_{t}J\left(t_{0}+\right)\right|\]
\[\begin{split}&\leq\left|\rho D_{t}J(t_{0}-)\right|+\left(\left\|\beta\right\|_{L^{\infty}(\pi^{-1}\mathcal{R})}+1\right)\left|\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\rho D_{t}I(t_{0}-)\right|+\left|\Phi_{\dot{\gamma}\left(t_{0}-\right)}J(t_{0}-)\right|\\ &\leq\left|D_{t}J\left(t_{0}-\right)\right|+\left(\left\|\beta\right\|_{L^{\infty}(\pi^{-1}\mathcal{R})}+1\right)\left|\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle\right|^{-1}\left|\left\langle J\left(t_{0}-\right),\nu\right\rangle\right|\left|D_{t}I(t_{0}-)\right|\\ &\quad+\left|\left\langle\dot{\gamma}\left(t_{0}-\right),\nu\right\rangle\right|^{-1}\left|AJ\left(t_{0}-\right)\right|,\end{split}\]
which implies
\[\left|J\left(t_{0}+\right)\right|^{2}+\left|D_{t}J\left(t_{0}+ \right)\right|^{2}\] \[\qquad\leq\frac{C}{\left|\left\langle\dot{\gamma}\left(t_{0}- \right),\nu\right\rangle\right|^{2}}\left(\left|J\left(t_{0}-\right)\right|^{2 }+\left|D_{t}J\left(t_{0}-\right)\right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{ 0}-\right)\right|^{2}\right)\] \[\qquad\leq\frac{C}{\left|\left\langle\dot{\gamma}\left(t_{0}- \right),\nu\right\rangle\right|^{2}}\left(\left|J\left(t_{0}-\right)\right|^{2 }+\left|D_{t}J\left(t_{0}-\right)\right|^{2}+\left|\dot{\gamma}\left(t_{0}- \right)\right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{0}-\right)\right|^{2} \right).\qed\]
**Remark 3.11**.: _Consider a family of broken \(\lambda\)-rays on \(M\) satisfying \(\left|\left\langle\dot{\gamma},\nu\right\rangle\right|\geq a\) at each reflection point. Due to the compactness of \(M\) and the transversality requirement \(\left|\left\langle\dot{\gamma},\nu\right\rangle\right|\geq a\), we can assert the existence of a positive real number \(l\) that bounds from below the distance between any two consecutive reflection points for any broken \(\lambda\)-ray in the family. This provides us with an upper bound on the number of reflections. If we denote by \(N(t)\) the number of reflections of \(\gamma\) in the time interval \((0,t)\), then from the preceding discussion it follows that \((N(t)-1)l\leq t\), implying \(N(t)\leq 1+\frac{t}{l}\)._
**Corollary 3.12**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Assume that \(\partial M=\mathcal{E}\cup\mathcal{R}\) where \(\mathcal{E}\) and \(\mathcal{R}\) are two relatively open disjoint subsets. Let us fix a number \(a\in(0,1]\) and also assume \(\lambda\circ\rho=\beta\lambda\) on \(\pi^{-1}\mathcal{R}\) with \(\beta\in L^{\infty}(\pi^{-1}\mathcal{R})\). If the broken \(\lambda\)-ray \(\gamma\) satisfies the condition \(\left|\left\langle\dot{\gamma},\nu\right\rangle\right|\geq a\) at every reflection point, then for any Jacobi field \(J\) along such a broken ray, we have_
\[\left|J(t)\right|^{2}+\left|D_{t}J(t)\right|^{2}\leq Ae^{Bt}\left(\left|J(0) \right|^{2}+\left|D_{t}J(0)\right|^{2}+\left|\dot{\gamma}(0)\right|^{2}+\left| D_{t}\dot{\gamma}(0)\right|^{2}\right)\]
_for all \(t\geq 0\), where \(A\) and \(B\) are constants that depend on \(M,g,\lambda\) and \(a\)._
Proof.: Let us assume that there is a reflection at each \(t=t_{k}\), where \(k\in\{0,\cdots,N\}\). Then for any \(t\in(0,t_{0})\), using Lemma 2.2, we have
\[\left|J(t)\right|^{2}+\left|D_{t}J(t)\right|^{2}\leq e^{Ct}\left(\left|J(0) \right|^{2}+\left|D_{t}J(0)\right|^{2}\right)\]
and also by considering Jacobi field \(I(t)=\dot{\gamma}(t)\), we have
\[\left|\dot{\gamma}(t)\right|^{2}+\left|D_{t}\dot{\gamma}(t)\right|^{2}\leq e^{ Ct}\left(\left|\dot{\gamma}(0)\right|^{2}+\left|D_{t}\dot{\gamma}(0)\right|^{2} \right).\]
This proves the case \(N=0\). We assume that for \(t\in(0,t_{k-1})\), i.e. before the \(k\)-th reflection, we have
\[\left|J(t)\right|^{2}+\left|D_{t}J(t)\right|^{2}\leq\frac{e^{kCt}C_{1}^{k-1}} {a^{k-1}}\left(\left|J(0)\right|^{2}+\left|D_{t}J(0)\right|^{2}+\left(k-1 \right)\left|\dot{\gamma}(0)\right|^{2}+\left(k-1\right)\left|D_{t}\dot{\gamma }(0)\right|^{2}\right). \tag{3.16}\]
Also
\[\left|\dot{\gamma}(t)\right|^{2}+\left|D_{t}\dot{\gamma}(t)\right|^{2}\leq e^{ kCt}\frac{C_{1}^{k-1}}{a^{k-1}}\left(\left|\dot{\gamma}(0)\right|^{2}+\left|D_{t} \dot{\gamma}(0)\right|^{2}\right). \tag{3.17}\]
We now prove the estimate for any \(t\in(0,t_{k})\), that is before the \((k+1)\)-th reflection. Using the assumption that \(\left|\left\langle\dot{\gamma},\nu\right\rangle\right|\geq a\) where \(a\in(0,1]\) and Lemma 3.10, we can deduce
that at each reflection point, we have
\[\left|J\left(t_{k}+\right)\right|^{2}+\left|D_{t}J\left(t_{k}+\right)\right|^{2} \leq\frac{C_{1}}{a}\left(\left|J\left(t_{k}-\right)\right|^{2}+\left|D_{t}J \left(t_{k}-\right)\right|^{2}+\left|\dot{\gamma}\left(t_{k}-\right)\right|^{2} +\left|D_{t}\dot{\gamma}\left(t_{k}-\right)\right|^{2}\right)\]
and
\[\left|\dot{\gamma}\left(t_{k}+\right)\right|^{2}+\left|D_{t}\dot{ \gamma}\left(t_{k}+\right)\right|^{2} \leq\left(1+\|\beta\|_{L^{\infty}(\pi^{-1}\mathcal{R})}\right) \left(\left|\dot{\gamma}\left(t_{k}-\right)\right|^{2}+\left|D_{t}\dot{\gamma} \left(t_{k}-\right)\right|^{2}\right)\] \[\leq\frac{C_{1}}{a}\left(\left|\dot{\gamma}\left(t_{k}-\right) \right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{k}-\right)\right|^{2}\right).\]
Combining (3.16) with (3.17), we have
\[\left|J\left(t\right)\right|^{2}+\left|D_{t}J\left(t\right)\right| ^{2}\] \[\leq e^{Ct}\left(\left|J(t_{k-1}+)\right|^{2}+\left|D_{t}J(t_{k-1 }+)\right|^{2}\right)\] \[\leq e^{Ct}\frac{C_{1}}{a}\left(\left|J\left(t_{k-1}-\right) \right|^{2}+\left|D_{t}J\left(t_{k-1}-\right)\right|^{2}+\left|\dot{\gamma} \left(t_{k-1}-\right)\right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{k-1}-\right) \right|^{2}\right)\] \[\leq e^{(k+1)Ct}\frac{C_{1}^{k}}{a^{k}}\left(\left|J(0)\right|^{ 2}+\left|D_{t}J(0)\right|^{2}+(k-1)\left|\dot{\gamma}(0)\right|^{2}+(k-1) \left|D_{t}\dot{\gamma}(0)\right|^{2}\right)\] \[\qquad+e^{(k+1)Ct}\frac{C_{1}^{k}}{a^{k}}\left(\left|\dot{\gamma }(0)\right|^{2}+\left|D_{t}\dot{\gamma}(0)\right|^{2}\right)\] \[=e^{(k+1)Ct}\frac{C_{1}^{k}}{a^{k}}\left(\left|J(0)\right|^{2}+ \left|D_{t}J(0)\right|^{2}+k|\dot{\gamma}(0)|^{2}+k\left|D_{t}\dot{\gamma}(0) \right|^{2}\right).\]
and
\[\left|\dot{\gamma}(t)\right|^{2}+\left|D_{t}\dot{\gamma}(t)\right| ^{2} \leq e^{Ct}\left(\left|\dot{\gamma}\left(t_{k-1}+\right)\right|^{ 2}+\left|D_{t}\dot{\gamma}\left(t_{k-1}+\right)\right|^{2}\right)\] \[\leq e^{Ct}\frac{C_{1}}{a}\left(\left|\dot{\gamma}\left(t_{k-1}- \right)\right|^{2}+\left|D_{t}\dot{\gamma}\left(t_{k-1}-\right)\right|^{2}\right)\] \[\leq e^{(k+1)Ct}\frac{C_{1}^{k}}{a^{k}}\left(\left|\dot{\gamma} \left(0\right)\right|^{2}+\left|D_{t}\dot{\gamma}\left(0\right)\right|^{2} \right).\]
Note that
\[\frac{C_{1}^{k}}{a^{k}}\left(\left|J(0)\right|^{2}+\left|D_{t}J (0)\right|^{2}+k|\dot{\gamma}(0)|^{2}+k\left|D_{t}\dot{\gamma}(0)\right|^{2}\right)\] \[\leq\frac{(2C_{1})^{k}}{a^{k}}\left(\left|J(0)\right|^{2}+\left| D_{t}J(0)\right|^{2}+\left|\dot{\gamma}(0)\right|^{2}+\left|D_{t}\dot{\gamma}(0) \right|^{2}\right).\]
From this analysis one can see that there is a constant \(A=\frac{2C_{1}}{a}\) such that at each reflection point \(\left|J(t)\right|^{2}+\left|D_{t}J(t)\right|^{2}\) increases by at most the factor \(A\). Now consider the interval \((0,t)\), where any broken ray has at most \(1+t/l\) reflections by Remark 3.11. We may conclude the estimate
\[\left|J(t)\right|^{2}+\left|D_{t}J(t)\right|^{2} \leq A^{N(t)}e^{Ct}\left(\left|J(0)\right|^{2}+\left|D_{t}J(0) \right|^{2}+\left|\dot{\gamma}(0)\right|^{2}+\left|D_{t}\dot{\gamma}(0)\right| ^{2}\right)\] \[\leq e^{\left(1+\frac{t}{l}\right)\log A+Ct}\left(\left|J(0)\right|^ {2}+\left|D_{t}J(0)\right|^{2}+\left|\dot{\gamma}(0)\right|^{2}+\left|D_{t} \dot{\gamma}(0)\right|^{2}\right)\] \[=Ae^{Bt}\left(\left|J(0)\right|^{2}+\left|D_{t}J(0)\right|^{2}+ \left|\dot{\gamma}(0)\right|^{2}+\left|D_{t}\dot{\gamma}(0)\right|^{2}\right),\]
where \(B=\frac{\log A}{l}+C\).
**Remark 3.13**.: _Consider a unit-speed \(C^{1}\) curve on the manifold \(SM\), given by the mapping \(s\in(-\varepsilon,\varepsilon)\mapsto(x_{s},v_{s})\). Let \(\gamma_{x_{s},v_{s}}(t)\) denote the \(\lambda\)-geodesic with initial conditions \((\gamma_{x_{s},v_{s}}(0),\dot{\gamma}_{x_{s},v_{s}}(0))=(x_{s},v_{s})\), and set \(\gamma:=\gamma_{x_{0},v_{0}}\). Now, let us examine the Jacobi field \(J_{s}(t)=\partial_{s}\gamma_{x_{s},v_{s}}(t)\). In this context, we have_
\[|J_{s}(0)|^{2}+|D_{t}J_{s}(0)|^{2}=|\partial_{s}x_{s}|^{2}+|D_{s}v_{s}|^{2}=1.\]
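_Here the last equality is simply the unit-speed condition for the curve \(\sigma(s)=(x_{s},v_{s})\) measured in the Sasaki metric \(G\) on \(SM\) (cf. Remark A.3),_

\[|\dot{\sigma}(s)|_{G}^{2}=|\partial_{s}x_{s}|_{g}^{2}+|D_{s}v_{s}|_{g}^{2}=1,\]

_while the first equality uses the symmetry \(D_{t}\partial_{s}\gamma=D_{s}\partial_{t}\gamma\) of (A.6) at \(t=0\)._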
_Additionally, consider another Jacobi field \(I(t)=\dot{\gamma}_{x,v}(t)\). In this case, we obtain_
\[|I(0)|^{2}+|D_{t}I(0)|^{2}=|v|^{2}+|\lambda(x,v)iv|^{2}\leq 1+\|\lambda\|_{L^{ \infty}(SM)}^{2}.\]
Without any assumption \(\lambda\circ\rho=\lambda\beta\) on \(\pi^{-1}\mathcal{R}\), we can still control \(|J(t)|^{2}+|D_{t}J(t)|^{2}\) as follows:
**Lemma 3.14**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\), and fix a number \(a\in(0,1]\). Let \(\gamma:[0,\tau]\to M\) be a broken \(\lambda\)-ray on \(M\) such that \(|\langle\dot{\gamma},\nu\rangle|\geq a\) at every reflection point. Then for any variation field \(J\) along \(\gamma\), we have_
\[|J(t)|^{2}+|D_{t}J(t)|^{2}\leq Ae^{Bt}\left(|J(0)|^{2}+|D_{t}J(0)|^{2}+1+\| \lambda\|_{L^{\infty}(SM)}^{2}\right)\]
_for all \(t\in[0,\tau]\), where \(A\) and \(B\) are constants that depend only on \(M,g,\lambda\) and \(a\)._
Proof.: As in the proof of Lemma 3.9, we set \(I(t):=\dot{\gamma}(t)\) and choose \(\tilde{J}\) such that
\[J(t)=\tilde{J}(t)+\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{ \left\langle I\left(t_{0}-\right),\nu\right\rangle}I(t).\]
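By construction, the correction term is chosen so that \(\tilde{J}\) is tangent to the boundary at the reflection time; indeed, evaluating the identity above at \(t_{0}-\) gives

\[\left\langle\tilde{J}(t_{0}-),\nu\right\rangle=\left\langle J(t_{0}-),\nu\right\rangle-\frac{\left\langle J\left(t_{0}-\right),\nu\right\rangle}{\left\langle I\left(t_{0}-\right),\nu\right\rangle}\left\langle I(t_{0}-),\nu\right\rangle=0.\]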
We have \(|I\left(t_{0}+\right)|=|I\left(t_{0}-\right)|\) and
\[|D_{t}I\left(t_{0}+\right)| =|\lambda\left(\gamma\left(t_{0}+\right),\dot{\gamma}\left(t_{0} +\right)\right)i\dot{\gamma}\left(t_{0}+\right)|\] \[=|\lambda\circ\rho\left(\gamma\left(t_{0}-\right),\dot{\gamma} \left(t_{0}-\right)\right)||i\rho\dot{\gamma}\left(t_{0}-\right)|\] \[\leq\|\lambda\|_{L^{\infty}(\pi^{-1}\mathcal{R})}|I(t_{0}-)|.\]
Also we have \(|\tilde{J}(t_{0}+)|=|\tilde{J}(t_{0}-)|\) and from (3.12), we obtain
\[|D_{t}J(t_{0}+)| =|\rho D_{t}\tilde{J}\left(t_{0}-\right)-\Phi_{\dot{\gamma}(t_{0 }-)}\tilde{J}\left(t_{0}-\right)|\] \[\leq|D_{t}\tilde{J}\left(t_{0}-\right)|+\frac{C}{\left|\left\langle \dot{\gamma}\left(t_{0}+\right),\nu\right\rangle\right|}|\tilde{J}\left(t_{0}- \right)|.\]
Similar to the proof of Corollary 3.12, it follows that
\[|J(t)|^{2}+|D_{t}J(t)|^{2}\leq Ae^{Bt}\left(|J(0)|^{2}+|D_{t}J(0)|^{2}+|\dot{ \gamma}(0)|^{2}+|D_{t}\dot{\gamma}(0)|^{2}\right),\]
and by the definition of unit speed broken \(\lambda\)-ray \(\gamma\) (cf. Remark 3.13), we deduce that
\[|J(t)|^{2}+|D_{t}J(t)|^{2}\leq Ae^{Bt}\left(|J(0)|^{2}+|D_{t}J(0)|^{2}+1+\| \lambda\|_{L^{\infty}(SM)}^{2}\right),\]
where \(A\) and \(B\) are constants that depend only on \(M,g,\lambda\) and \(a\).
### Regularity of solutions to the transport equation
In [14, Lemma 3], a regularity result is proven for the primitive function corresponding to \(f\in C^{2}(SM)\), which is simply the solution to the transport equation. This result shows that the primitive function is twice continuously differentiable in the interior of \(SM\) and Lipschitz continuous in \(SM\). In this subsection, we extend this result to the solution of the transport equation associated with broken \(\lambda\)-rays. We first aim at establishing the regularity result for both the forward and backward exit times. In [14, Lemma 3.2.3], the regularity of the exit time has been demonstrated in the context of usual geodesics. Following a similar approach, we extend this result to the case of \(\lambda\)-geodesics.
**Lemma 3.15**.: _Let \((M,g)\) be a compact nontrapping Riemannian surface with strictly \(\lambda\)-convex boundary where \(\lambda\in C^{\infty}(SM)\). Then \(\tau\) and \(\tau^{-}\) are continuous on \(SM\) and smooth on \(SM\setminus\partial_{0}SM\)._
Proof.: We establish the result for the forward exit time only, as the proof is identical for \(\tau^{-}\). Let \((N,g)\) be a closed extension of \((M,g)\) (cf. [13, Lemma 3.1.8]) and let \(\rho\) be a boundary defining function on \(M\). Consider a \(\lambda\)-geodesic \(\gamma\) with initial conditions \(\gamma(0)=x\) and \(\dot{\gamma}(0)=v\). We now analyze the function \(\rho(\gamma_{x,v}(t))\) for \((x,v)\in SM\setminus\partial_{0}SM\). Similar to the proof of [13, Lemma 3.2.3], we define \(h:SN\times\mathbb{R}\to\mathbb{R}\) by \(h(x,v,t):=\rho\left(\gamma_{x,v}(t)\right)\), so that
\[\frac{\partial h}{\partial t}(x,v,t)=\frac{\partial}{\partial t}\rho\left( \gamma_{x,v}(t)\right)=\left\langle\nabla\rho\left(\gamma_{x,v}(t)\right), \dot{\gamma}_{x,v}(t)\right\rangle.\]
Note that \(\gamma_{x,v}(\tau_{x,v})\in\partial M\). This implies that the tangent vector \(\dot{\gamma}_{x,v}(\tau_{x,v})\) must lie in \(\partial_{-}SM\), the set of outward-pointing tangent vectors at the boundary of the unit tangent bundle \(SM\); otherwise the \(\lambda\)-geodesic \(\gamma_{x,v}\) could be extended beyond the point \(\gamma_{x,v}(\tau_{x,v})\), contradicting the fact that \(\gamma_{x,v}(\tau_{x,v})\) is the final point on \(M\). By strict \(\lambda\)-convexity, one must have \(\dot{\gamma}_{x,v}(\tau_{x,v})\notin\partial_{0}SM\). Since \(\dot{\gamma}_{x,v}(\tau_{x,v})\in\partial SM\setminus\partial_{0}SM\), i.e. \(\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle<0\), and
\[\frac{\partial h}{\partial t}(x,v,\tau_{x,v})=\left.\left\langle\nabla\rho( \gamma_{x,v}(t)),\dot{\gamma}_{x,v}(t)\right\rangle\right|_{t=\tau_{x,v}}= \left\langle\nu,\dot{\gamma}_{x,v}(\tau_{x,v})\right\rangle<0,\]
it follows from the definition of a boundary defining function that \(h(x,v,\tau_{x,v})=0\) and that \(h\) is smooth with a nonvanishing \(t\)-derivative at \(t=\tau_{x,v}\). Finally, by the implicit function theorem, we conclude that \(\tau\) is smooth in \(SM\setminus\partial_{0}SM\).
**Remark 3.16**.: _Similar to the case of broken rays (cf. [14, p. 399]), using Lemma 3.15, \(\tau\) and \(\tau^{-}\) are smooth near any point \((x,v)\) such that the broken \(\lambda_{x,v}\)-ray reflects and exits transversely._
**Remark 3.17**.: _Let \(\sigma:(-\epsilon,\epsilon)\to SM\) be a smooth curve such that \(\sigma(s)=(x_{s},v_{s})\). Note that the function \(h\) (same as defined in Lemma 3.15), satisfies the property that \(h(x_{s},v_{s},\tau(x_{s},v_{s}))=0\) and \(\frac{\partial h}{\partial t}\left(x_{s},v_{s},\tau_{x_{s},v_{s}}\right)\neq 0\). By the implicit function theorem, we have_
\[\begin{split}\partial_{s}\tau_{x_{s},v_{s}}&=-\left[ \frac{d}{dt}h(x_{s},v_{s},\tau_{x_{s},v_{s}})\right]^{-1}\left[\partial_{s}h(x _{s},v_{s},\tau_{x_{s},v_{s}})\right]\\ &=-\frac{\left.\left\langle\nabla\rho\left(\gamma_{x_{s},v_{s}}(t) \right),\partial_{s}\gamma_{x_{s},v_{s}}(t)\right\rangle\right|_{t=\tau_{x_{s},v_{s}}}}{\left.\left\langle\nabla\rho\left(\gamma_{x_{s},v_{s}}(t)\right), \dot{\gamma}_{x_{s},v_{s}}(t)\right\rangle\right|_{t=\tau_{x_{s},v_{s}}}}\\ &=-\frac{\left.\left\langle\nu,\partial_{s}\gamma_{x_{s},v_{s}}(t) \right\rangle\right|_{t=\tau_{x_{s},v_{s}}}}{\left.\left\langle\nu,\dot{\gamma}_{x_{s},v_{s}}(t)\right\rangle\right|_{t=\tau_{x_{s},v_{s}}}}.\end{split} \tag{3.18}\]
**Definition 3.18**.: _Let \((M,g)\) be a Riemmanian surface and \(\lambda\in C^{\infty}(SM)\). Let us denote the interior \(\lambda\)-scattering relation by \(\tilde{\alpha}:SM\to\partial SM\). Given any point and direction \((x,v)\in SM\), we map it via \(\tilde{\alpha}\) to the first intersection point and direction of the \(\lambda\)-geodesic \(\gamma_{x,v}\) with the boundary \(\partial M\) (i.e. either \(\mathcal{E}\) or \(\mathcal{R}\))._
**Remark 3.19**.: _Note that if a \(\lambda\)-geodesic \(\gamma_{x,v}\) hits the boundary nontangentially, then the map \(\tilde{\alpha}\) is smooth in a neighborhood of \((x,v)\)._
**Remark 3.20**.: _When a broken \(\lambda\)-ray \(\gamma_{x,v}\) hits \(\mathcal{R}\) nontangentially (possibly multiple times) and reaches a point on \(\mathcal{E}\) transversally (by strict convexity), the end point on \(\mathcal{E}\) of the broken \(\lambda\)-ray depends smoothly on its initial data \((x,v)\). This follows from the smoothness of the maps \(\tilde{\alpha}\) and \(\rho\)._
**Lemma 3.21**.: _Let \((M,g,\lambda,\mathcal{E})\) be an admissible dynamical system. If \(f\in C^{2}(SM)\) satisfies \(If=0\), then the primitive function \(u\) solving (3.3) has the regularity \(u\in C^{2}(\operatorname{int}(SM))\cap\operatorname{Lip}(SM)\)._
It is clear that \(u\) solves (3.3). Hence, we split the proof of the regularity of \(u\) into two parts.
Proof of \(u\in C^{2}(\operatorname{int}SM)\).: Let \((x,v)\in\operatorname{int}(SM)\). From the admissibility condition, \(\gamma_{x,v}(t)\) or \(\gamma_{x,-v}^{-}(t)\) has no tangential reflections. From Lemma 3.5, we have
\[u(x,v)=-u^{-}(x,-v)\]
where \(u^{-}\) is the solution to the dual transport equation. Hence it suffices to show that either \(u\) or \(u^{-}\) is \(C^{2}\) at \((x,v)\) or \((x,-v)\) respectively. Without loss of generality, we may assume \(\gamma_{x,v}(t)\) has no tangential reflections. Now, for some \(N=0,1,2,\dots\), we have
\[u(x,v)=\sum_{k=0}^{N}u_{k}(x,v) \tag{3.19}\]
where
\[u_{k}(x,v)=\int_{\tau_{k}}^{\tau_{k+1}}f(\phi_{t}(x,v))dt\]
and \(\gamma_{x,v}\) has reflections at \(\tau_{1},\dots,\tau_{N}\) with \(\tau_{0}=0\), \(\tau_{N+1}=\tau_{x,v}\). Since the broken \(\lambda\)-ray hits \(\mathcal{R}\) transversely, the reflection times \(\tau_{k}(x,v)\) are smooth in some neighbourhood of \((x,v)\); moreover, the \(\lambda\)-geodesic flow is smooth and \(f\in C^{2}(SM)\), so all \(u_{k}(x,v)\) are \(C^{2}\) functions in some neighbourhood of the point \((x,v)\). As each \(u_{k}\) is a \(C^{2}\) function at the point \((x,v)\), it follows from (3.19) that the function \(u\) is also \(C^{2}\) at \((x,v)\).
Proof of \(u\in\operatorname{Lip}(SM)\).: If we show that the first order derivatives of \(u\) are uniformly bounded in \(\operatorname{int}(SM)\), then \(u\) is Lipschitz. To show this, similar to [12, p. 1283], we consider a \(C^{1}\) unit speed curve \((-\varepsilon,\varepsilon)\ni s\mapsto(x_{s},v_{s})\in\operatorname{int}SM\) with \((x_{0},v_{0})=(x,v)\). Using Lemma 3.5 again, we can assume without loss of generality that \(\gamma_{x,v}\) has no tangential reflections and satisfies \(\left|\langle\dot{\gamma},\nu\rangle\right|\geq a\). Now
\[\partial_{s}u\left(x_{s},v_{s}\right)=f\left(\gamma_{x_{s},v_{s} }\left(\tau_{x_{s},v_{s}}\right),\dot{\gamma}_{x_{s},v_{s}}\left(\tau_{x_{s}, v_{s}}\right)\right)\partial_{s}\tau_{x_{s},v_{s}}\] \[\qquad\qquad\qquad\qquad\qquad+\int_{0}^{\tau_{x_{s},v_{s}}} \partial_{s}f\left(\gamma_{x_{s},v_{s}}(t),\dot{\gamma}_{x_{s},v_{s}}(t) \right)\mathrm{d}t.\]
Let us start by examining the integral term
\[\left.\partial_{s}f\left(\gamma_{x_{s},v_{s}}(t),\dot{\gamma}_{x_{s},v_{s}}(t )\right)\right|_{s=0}=\left\langle\left.(\partial_{s}\gamma_{x_{s},v_{s}},D_{s} \dot{\gamma}_{x_{s},v_{s}})\right|_{s=0},\nabla_{SM}f(\gamma(t),\dot{\gamma}( t))\right\rangle=\left\langle(J,\dot{J}),\nabla_{SM}f\right\rangle\]
where \(J:=\partial_{s}\gamma_{x_{s},v_{s}}\) is a broken \(\lambda\)-Jacobi field. Since \(\gamma_{x,v}\) contains no reflections with \(\left|\langle\dot{\gamma},\nu\rangle\right|<a\), it follows from Lemma 3.14 and Remark 3.13 that there exists a uniform \(C_{1}>0\) such that \(|J|^{2}+|\dot{J}|^{2}\leq C_{1}\) holds for all \((x,v)\in\operatorname{int}(SM)\). This implies
\[\left|\left.\int_{0}^{\tau_{x_{s},v_{s}}}\partial_{s}f\left(\gamma_{x_{s},v_{s}}(t), \dot{\gamma}_{x_{s},v_{s}}(t)\right)\mathrm{d}t\,\right|_{s=0}\right|\leq LC_{1} ^{1/2}\|\nabla_{SM}f\|_{L^{\infty}(SM)}. \tag{3.20}\]
We now focus on the boundary term. By taking \(s=0\) in (3.18), we have
\[\left.\partial_{s}\tau_{x_{s},v_{s}}\right|_{s=0}=\left.-\frac{\left\langle J,\nu\right\rangle}{\left\langle\dot{\gamma}_{x,v},\nu\right\rangle}\right|_{ t=\tau_{x,v}}. \tag{3.21}\]
From the proof of Lemma 3.15, for any \((x,v)\in\operatorname{int}SM\), \(\left\langle\dot{\gamma}_{x,v}\left(\tau_{x,v}\right),\nu\right\rangle<0\). From the expression (3.21), we need to consider two cases: \(\left|\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle\right|<b\) and \(\left|\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle\right|\geq b\) for some small enough \(b>0\). We choose \(b\) such that whenever \(\left|\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle\right|<b\), then
the corresponding broken \(\lambda\)-ray has no reflections for any \((x,v)\in\operatorname{int}SM\). Such a choice of a small parameter \(b>0\) splits \(\operatorname{int}(SM)\) into two sets: one corresponding to short broken \(\lambda\)-rays which are almost tangential to \(\mathcal{E}\), and one containing all other broken rays; this is possible since \(\mathcal{E}\) is strictly \(\lambda\)-convex.
_The case \(\left|\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle\right|\geq b\)._ Using the strict \(\lambda\)-convexity of \(\mathcal{E}\), it follows that \(\lambda\)-geodesics intersect \(\mathcal{E}\) transversely. This implies that \(\left.\partial_{s}\tau_{x_{s},v_{s}}\right|_{s=0}\) is uniformly bounded by \(C_{1}^{1/2}/b\), which indeed shows that
\[\left|\left.f\left(\gamma_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right),\dot{ \gamma}_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right)\right)\partial_{s}\tau_{x _{s},v_{s}}\right|_{s=0}\right|\leq\frac{C_{1}^{1/2}}{b}\|f\|_{L^{\infty}(SM)}. \tag{3.22}\]
_The case \(\left|\left\langle\dot{\gamma}_{x,v}(\tau_{x,v}),\nu\right\rangle\right|<b\)._ By the choice of \(b\), the ray \(\gamma_{x,v}\) never reaches \(\mathcal{R}\) and corresponds to a short \(\lambda\)-geodesic almost tangential to \(\partial M\). Since the broken ray transform vanishes, \(f(y,w)=0\) holds for \(y\in\mathcal{E}\) and \(w\in S_{y}\mathcal{E}\).
Write \((y_{s},w_{s})=\left(\gamma_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right),\dot{ \gamma}_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right)\right)\). Since \(f\) is Lipschitz, for any \(w\in S_{y_{s}}\mathcal{E}\), we have
\[|f(y_{s},w_{s})|=|f(y_{s},w_{s})-f(y_{s},w)|\leq C|w_{s}-w|. \tag{3.23}\]
Let us express \(w_{s}\) in terms of \(\nu\) and \(w\) where we choose the orientation so that \(w\in S_{y_{s}}\mathcal{E}\) and \(\left\langle w,w_{s}\right\rangle\geq 0\). Now
\[w_{s}=\left\langle w_{s},\nu\right\rangle\nu+\left\langle w_{s},w\right\rangle w =\left\langle w_{s},\nu\right\rangle\nu+\sqrt{1-\left\langle w_{s},\nu \right\rangle^{2}}w.\]
This implies
\[|w_{s}-w|^{2}\leq\left(\sqrt{1-\left\langle w_{s},\nu\right\rangle^{2}}-1 \right)^{2}+\left\langle w_{s},\nu\right\rangle^{2}.\]
Observe that
\[\left(\sqrt{1-\left\langle w_{s},\nu\right\rangle^{2}}-1\right)^{2}\leq \left\langle w_{s},\nu\right\rangle^{2}\iff 1-\left\langle w_{s},\nu\right\rangle^{2}\leq \sqrt{1-\left\langle w_{s},\nu\right\rangle^{2}}.\]
Since \(-1\leq\left\langle w_{s},\nu\right\rangle\leq 1\) always holds, we have \(1-\left\langle w_{s},\nu\right\rangle^{2}\leq\sqrt{1-\left\langle w_{s},\nu \right\rangle^{2}}\), which proves that
\[\left|w_{s}-w\right|^{2}\leq 2\left\langle w_{s},\nu\right\rangle^{2}. \tag{3.24}\]
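To spell out the elementary equivalence used above, write \(y:=\sqrt{1-\left\langle w_{s},\nu\right\rangle^{2}}\in[0,1]\); then

\[\left(y-1\right)^{2}\leq 1-y^{2}\iff y^{2}-2y+1\leq 1-y^{2}\iff y(y-1)\leq 0,\]

and the last condition holds trivially since \(y\in[0,1]\).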
From (3.23) and (3.24), we have
\[\left|f\left(y_{s},w_{s}\right)\right|\leq\sqrt{2}\,C\left|\left\langle w_{s}, \nu\right\rangle\right|. \tag{3.25}\]
Using (3.21), we can write
\[\left.\partial_{s}\tau_{x_{s},v_{s}}\right|_{s=0}\leq\frac{C_{1}^{1/2}}{| \left\langle\nu,w_{0}\right\rangle|}. \tag{3.26}\]
From (3.22), (3.25) and (3.26), we conclude for any \((x,v)\in\operatorname{int}(SM)\) that

\[\left|\left.f\left(\gamma_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right),\dot{ \gamma}_{x_{s},v_{s}}\left(\tau_{x_{s},v_{s}}\right)\right)\partial_{s}\tau_{x _{s},v_{s}}\right|_{s=0}\right|\leq C^{\prime} \tag{3.27}\]

for some \(C^{\prime}>0\) depending only on \(M\), \(g\), \(\lambda\), \(a\), \(b\), \(\|f\|_{L^{\infty}(SM)}\) and the Lipschitz constant of \(f\). It follows from (3.20) and (3.27) that \(u\in W^{1,\infty}(SM)\). We conclude \(u\in\operatorname{Lip}(SM)\).
## 4. Uniqueness for scalar functions and 1-forms
### Revisiting the boundary terms in the Pestov identity
To simplify the Pestov identity in Proposition 2.1, we define the vector field \(\nabla_{T,\lambda}\) by
\[\nabla_{T,\lambda}:=-\langle v_{\perp},\nu\rangle F-\langle v,\nu\rangle(V( \lambda))V+\langle v,\nu\rangle X_{\perp}.\]
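For orientation, note that in the absence of the twisting term this field reduces to the one used for broken geodesic flows: taking \(\lambda=0\), so that \(F=X+\lambda V\) becomes \(X\) and the \(V(\lambda)\) term drops out, one gets

\[\nabla_{T,0}=-\langle v_{\perp},\nu\rangle X+\langle v,\nu\rangle X_{\perp},\]

which is the field \(\nabla_{T}\) appearing in (4.1) below.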
The primitive function corresponding to \(f\), denoted by \(u:=u^{f}\), is defined as
\[u^{f}(x,v)=\int_{0}^{\tau(x,v)}f(\phi_{t}(x,v))dt\]
where \(\phi_{t}\) is the broken \(\lambda\)-geodesic flow.
In the next lemma, we provide a simplified form of the boundary term \(\nabla_{T,\lambda}\) appearing in the Pestov identity (2.13) in terms of the odd and even components of \(u\) and the magnetic signed curvature. In [16, Lemma 9], a similar identity has been proved for the broken geodesic flow (i.e. when \(\lambda=0\)). In particular, they showed that
\[\left(\nabla_{T}u,Vu\right)_{\partial SM}=\left(\nabla_{T}u_{e},Vu_{o}\right) _{\partial SM}+\left(\nabla_{T}u_{o},Vu_{e}\right)_{\partial SM}-\left(\kappa Vu,Vu\right)_{\partial SM} \tag{4.1}\]
where \(\nabla_{T}=\nabla_{T,0}\), \(\kappa:=-\langle D_{t}T,\nu\rangle_{g}\) is the signed curvature of \(\partial M\), \(u_{e}\) and \(u_{o}\) are the even and odd components of \(u|_{\partial SM}\) with respect to the reflection \(\rho\) and \(u\) is a primitive function. We are thus led to the following generalization of [16, Lemma 9] to the case of broken \(\lambda\)-geodesic flows.
**Lemma 4.1**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). If \(u\in C^{2}(SM)\), then_
\[\begin{split}\left(\nabla_{T,\lambda}u,Vu\right)_{\partial SM}& =\left(\nabla_{T}u_{e},Vu_{o}\right)_{\partial SM}+\left(\nabla_{ T}u_{o},Vu_{e}\right)_{\partial SM}-\left(\kappa Vu,Vu\right)_{\partial SM}\\ &\quad+\left(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu )_{e}-\langle v,\nu\rangle(V(\lambda)Vu)_{o},Vu_{o}\right)_{\partial SM}\\ &\quad+\left(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu )_{o}-\langle v,\nu\rangle(V(\lambda)Vu)_{e},Vu_{e}\right)_{\partial SM}. \end{split} \tag{4.2}\]
Proof.: We denote \(\nabla_{T,\lambda}u=\nabla_{T}u+Lu\) where \(Lu(x,v):=-\langle v_{\perp},\nu\rangle\lambda Vu-\langle v,\nu\rangle(V( \lambda))Vu\). Note that from [17, p. 119], we have \(\mu(x,v)=\langle v,\nu(x)\rangle\) and \(V(\mu)(x,v)=\langle v_{\perp},\nu\rangle\). From [16, p. 391], we have \((\rho^{*}Vu)=-V\left(\rho^{*}u\right)\). We compute
\[\rho^{*}(\mu(x,v))=\langle v-2\langle v,\nu\rangle\nu,\nu\rangle=-\langle v, \nu\rangle=-\mu(x,v), \tag{4.3}\]
\[\rho^{*}(V(\mu))=-V(\rho^{*}(\mu(x,v)))=V(\mu(x,v)). \tag{4.4}\]
From (4.3) and (4.4), we have
\[\begin{split}\left(Lu\right)_{e}(x,v)&=\frac{Lu(x, v)+\rho^{*}Lu(x,v)}{2}\\ &=\frac{-\left\langle v_{\perp},\nu\right\rangle\lambda Vu- \langle v,\nu\rangle(V(\lambda))Vu-\left\langle v_{\perp},\nu\right\rangle \rho^{*}(\lambda Vu)+\left\langle v,\nu\right\rangle\rho^{*}((V(\lambda))Vu)}{ 2}\\ &=-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{e}- \left\langle v,\nu\right\rangle(V(\lambda)Vu)_{o}.\end{split}\]
and
\[\begin{split}\left(Lu\right)_{o}(x,v)&=\frac{Lu(x, v)-\rho^{*}Lu(x,v)}{2}\\ &=\frac{-\left\langle v_{\perp},\nu\right\rangle\lambda Vu- \left\langle v,\nu\right\rangle(V(\lambda))Vu+\left\langle v_{\perp},\nu \right\rangle\rho^{*}(\lambda Vu)-\left\langle v,\nu\right\rangle\rho^{*}(V( \lambda))Vu}{2}\\ &=-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{o}- \left\langle v,\nu\right\rangle(V(\lambda)Vu)_{e}.\end{split}\]
Since \(\rho\) is an isometry on \(S_{x}\) for each \(x\in\partial M\) (cf. Remark 4.2), we obtain
\[\left(Lu,Vu\right)_{\partial SM}=\left((Lu)_{e},(Vu)_{e}\right)_{\partial SM} +\left((Lu)_{o},(Vu)_{o}\right)_{\partial SM}\]
\[=(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{e}-\left\langle v,\nu\right\rangle(V(\lambda)Vu)_{o},Vu_{o})_{\partial SM}\] \[\qquad+(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{o}- \left\langle v,\nu\right\rangle(V(\lambda)Vu)_{e},Vu_{e})_{\partial SM}\,.\]
Combining with (4.1), we have
\[\left(\nabla_{T,\lambda}u,Vu\right)_{\partial SM} =(\nabla_{T}u_{e},Vu_{o})_{\partial SM}+(\nabla_{T}u_{o},Vu_{e})_ {\partial SM}-(\kappa Vu,Vu)_{\partial SM}\] \[\quad+(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{e}- \left\langle v,\nu\right\rangle(V(\lambda)Vu)_{o},Vu_{o})_{\partial SM}\] \[\quad+(-\left\langle v_{\perp},\nu\right\rangle(\lambda Vu)_{o}- \left\langle v,\nu\right\rangle(V(\lambda)Vu)_{e},Vu_{e})_{\partial SM}\,.\qed\]
**Remark 4.2**.: _Notice that \(\rho\) is an isometry on \(S_{x}\) for each \(x\in\partial M\). The even and odd parts of \(u\) are denoted by \(u_{e}\) and \(u_{o}\) respectively with respect to the isometry \(\rho\). Similarly, \(v_{e}\) and \(v_{o}\) stand for the even and odd parts of \(v\) respectively with respect to the isometry \(\rho\). Then_
\[(u_{e},v_{o}) =\left(\frac{u+u\circ\rho}{2},\frac{v-v\circ\rho}{2}\right)\] \[=\frac{1}{4}\left\{(u,v)+(u\circ\rho,v)-(u,v\circ\rho)-(u\circ \rho,v\circ\rho)\right\}=\frac{1}{4}\left\{(u\circ\rho,v)-(u,v\circ\rho) \right\},\]
_where we used the fact that \(\rho\) is an isometry. Similarly,_
\[(u_{o},v_{e}) =\left(\frac{u-u\circ\rho}{2},\frac{v+v\circ\rho}{2}\right)\] \[=\frac{1}{4}\left\{(u,v)-(u\circ\rho,v)+(u,v\circ\rho)-(u\circ \rho,v\circ\rho)\right\}=\frac{1}{4}\left\{-(u\circ\rho,v)+(u,v\circ\rho) \right\}.\]
_This implies_
\[(u_{e},v_{o})+(u_{o},v_{e})=0,\]
_and in particular, we have_
\[(u,v)=(u_{e}+u_{o},v_{e}+v_{o})=(u_{e},v_{e})+(u_{o},v_{o}).\]
**Corollary 4.3**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). If \(u\in C^{2}(SM)\) and \(u=u\circ\rho\) on \(\partial SM\), then_
\[(\nabla_{T,\lambda}u,Vu)_{\partial SM}=-((\kappa_{\lambda}+\eta_{\lambda})_{e }Vu,Vu)_{\partial SM},\]
_where \(\kappa_{\lambda}(x,v)=\kappa-\left\langle\nu(x),\lambda(x,v)iv\right\rangle\) and \(\eta_{\lambda}(x,v)=\left\langle V(\lambda)(x,v)v,\nu\right\rangle\)._
Proof.: We have
\[\rho^{*}(\lambda(x,v)V(u)(x,v)) =-(\rho^{*}\lambda)V(\rho^{*}u)(x,v), \tag{4.6}\] \[\rho^{*}(V(\lambda)(x,v)Vu(x,v)) =V(\rho^{*}\lambda)V(\rho^{*}u)(x,v). \tag{4.5}\]
By the assumption on \(u\), we have \(u_{e}=u\) on \(\partial SM\) and \(u_{o}=0\) on \(\partial SM\). Combining (4.5) and (4.6), we obtain
\[(\lambda Vu)_{e}(x,v) =\frac{\lambda Vu(x,v)-(\rho^{*}\lambda)\,V\left(\rho^{*}u\right) (x,v)}{2}=(\lambda)_{o}Vu_{e}(x,v)\] \[(\lambda Vu)_{o}(x,v) =\frac{\lambda Vu(x,v)+(\rho^{*}\lambda)\,V\left(\rho^{*}u\right) (x,v)}{2}=(\lambda)_{e}Vu_{e}(x,v)\] \[(V(\lambda)Vu)_{e} =\frac{V(\lambda)(x,v)Vu(x,v)+V\left(\rho^{*}\lambda\right)V\left( \rho^{*}u\right)(x,v)}{2}=V(\lambda_{e})Vu_{e}(x,v)\] \[(V(\lambda)Vu)_{o} =\frac{V(\lambda)(x,v)Vu(x,v)-V\left(\rho^{*}\lambda\right)V \left(\rho^{*}u\right)(x,v)}{2}=V(\lambda_{o})Vu_{e}(x,v)\]
From (4.2) and Lemma 2.4, we have
\[\left(\nabla_{T,\lambda}u,Vu\right)_{\partial SM} =-(\kappa Vu,Vu)_{\partial SM}+(-\left\langle v_{\perp},\nu\right\rangle (\lambda)_{e}Vu_{e}-\left\langle v,\nu\right\rangle V\left(\lambda_{e}\right)Vu _{e},Vu_{e})_{\partial SM}\] \[=-(\kappa Vu,Vu)_{\partial SM}+(\left\langle(\lambda)_{e}iv,\nu \right\rangle Vu-\left\langle V\left(\lambda_{e}\right)v,\nu\right\rangle Vu,Vu \right)_{\partial SM}\] \[=-\left(\left(\kappa_{\lambda}+\eta_{\lambda}\right)_{e}Vu,Vu \right)_{\partial SM}.\qed\]
Proposition 2.1 and Corollary 4.3 now lead to the Pestov identity
\[\|Pu\|_{SM}^{2}=\|\widetilde{P}u\|_{SM}^{2}+\|(X+\lambda V)u\|_{ SM}^{2}-\left(K_{\lambda}Vu,Vu\right)_{SM}\\ -\left(((\kappa_{\lambda})_{e}+(\eta_{\lambda})_{e})Vu,Vu\right) _{\partial SM}, \tag{4.7}\]
for all \(u\in C^{2}(SM)\) with \(u\circ\rho=u\) on \(\partial SM\). The important point to note here is that the regularity of the solution \(u\) to the transport equation is \(C^{2}(\operatorname{int}(SM))\cap\operatorname{Lip}(SM)\) by Lemma 3.21. We need to prove the Pestov identity (4.7) for this class of functions. To overcome this difficulty, we use an approximation argument following [12, pp. 1289-1290].
**Lemma 4.4**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). If \(u\in C^{2}(\operatorname{int}(SM))\cap\operatorname{Lip}(SM)\), \(Pu\in L^{2}(SM)\), and \(u=u\circ\rho\) on \(\partial SM\), then_
\[\|Pu\|_{SM}^{2}=\|\widetilde{P}u\|_{SM}^{2}+\|Fu\|_{SM}^{2}-\left(K_{\lambda}Vu,Vu\right)_{SM}\\ -\left(((\kappa_{\lambda})_{e}+(\eta_{\lambda})_{e})Vu,Vu\right) _{\partial SM}. \tag{4.8}\]
Proof.: Following the approach taken in the proof of [12, Lemma 10], we extend our manifold as follows: Let \(\widetilde{M}\) be a smooth and compact Riemannian manifold with boundary such that \(M\subset\operatorname{int}\widetilde{M}\). We extend the function \(u\) to a new function \(\tilde{u}:S\widetilde{M}\to\mathbb{R}\) such that \(\tilde{u}\) satisfies \(\tilde{u}=u\) in \(SM\), \(\tilde{u}\in C^{2}(\operatorname{int}SM)\cap\operatorname{Lip}(S\widetilde{M})\) and has compact support in \(\operatorname{int}S\widetilde{M}\).
Following the proof of [12, Lemma 10], we define a sequence of mollifications \(\left(u^{j}\right)_{j=1}^{\infty}\) of \(\tilde{u}\). By the basic properties of mollifiers, we have \(u^{j}\to u\) in \(\operatorname{Lip}(SM)\) and \(C^{2}(\operatorname{int}SM)\). By applying Proposition 2.1 and Lemma 4.1 to \(u^{j}|_{SM}\), we obtain the following expression
\[\|Pu^{j}\|_{SM}^{2}=\|\widetilde{P}u^{j}\|_{SM}^{2}+\|Fu^{j}\|_{SM }^{2}-\left(K_{\lambda}Vu^{j},Vu^{j}\right)_{SM}\\ +\left(\nabla_{T,\lambda}u_{e}^{j},Vu_{o}^{j}\right)_{\partial SM }+\left(\nabla_{T,\lambda}u_{o}^{j},Vu_{e}^{j}\right)_{\partial SM}\\ -(((\kappa_{\lambda})_{e}+(\eta_{\lambda})_{e})Vu^{j},Vu^{j})_{ \partial SM}. \tag{4.9}\]
Here, we have defined \(u_{e}^{j}=\frac{1}{2}\left(u^{j}+u^{j}\circ\rho\right)\) and \(u_{o}^{j}=\frac{1}{2}\left(u^{j}-u^{j}\circ\rho\right)\), and we have used the fact that \(u\circ\rho=u\) at \(\partial SM\) by assumption.
Note that \(\operatorname{Lip}(\partial SM)\subset H^{1}(\partial SM)\) and \(u^{j}\to u\) in \(\operatorname{Lip}(\partial SM)\). Therefore, the convergence also holds in \(H^{1}(\partial SM)\). Similar to the proof of [12, Lemma 10], we can conclude from \(H^{1}\) convergence in \(SM\) that \(Fu^{j}\to Fu\) and \(Vu^{j}\to Vu\) in \(L^{2}(SM)\). We also have, by the properties of mollification and the regularity assumptions on \(u\), that \(Pu^{j}\to Pu\) in \(L^{2}(SM)\). Since all the other terms except \(\|\widetilde{P}u^{j}\|\) in (4.9) are known to converge as \(j\to\infty\), we may conclude that \(\lim_{j\to\infty}\|\widetilde{P}u^{j}\|\) exists and is finite. Using the commutator formula (2.7), we have \(\widetilde{P}u^{j}=Pu^{j}+X_{\perp}u^{j}-V(\lambda)Vu^{j}\). We know that \(Pu^{j}\to Pu\) in \(L^{2}(SM)\) as noted above and, on the other hand, \(X_{\perp}u^{j}\to X_{\perp}u\) and \(V(\lambda)Vu^{j}\to V(\lambda)Vu\) in \(L^{2}(SM)\) since \(u\in\operatorname{Lip}(SM)\subset H^{1}(SM)\) and \(\lambda\in C^{\infty}(SM)\). This shows that \(\widetilde{P}u^{j}\to\widetilde{P}u\) in \(L^{2}(SM)\).
By combining all of the above facts about the convergence of terms, we obtain (4.8) by taking the limit \(j\to\infty\)
### Proof of Theorem 1.1
Let us write \(f=f_{-1}+f_{0}+f_{1}\) where \(f_{j}\in H_{j}\) for \(-1\leq j\leq 1\) and \((f_{-1}+f_{1})(x,v)=\alpha_{x}(v)\). It follows from the definition of \(u\) in (3.2) that \(Fu=-f\) in the interior of \(SM\) as stated in (3.3). By Lemma 3.21 and the identity \(VFu=-Vf\in C^{1}(SM)\), we know that \(u\) satisfies the assumptions of Lemma 4.4.
From (2.3) and the orthogonality (2.4), we obtain \(VFu=if_{1}-if_{-1}\) in \(\operatorname{int}(SM)\) with the identity
\[\|VFu\|_{SM}^{2}=\|Vf\|_{SM}^{2}=\|if_{1}-if_{-1}\|_{SM}^{2}=\|f_{1}\|_{SM}^{2 }+\|f_{-1}\|_{SM}^{2}. \tag{4.10}\]
Combining the Pestov identity (4.8) with (4.10), we have
\[\|f_{1}\|_{SM}^{2}+\|f_{-1}\|_{SM}^{2}=\|\widetilde{P}u\|_{SM}^{ 2}+\|f\|_{SM}^{2}-\left(K_{\lambda}Vu,Vu\right)_{SM}\] \[\qquad\qquad\qquad\qquad-\left(((\kappa_{\lambda})_{e}+(\eta_{ \lambda})_{e})Vu,Vu\right)_{\partial SM}.\]
We may simplify further to obtain that
\[0=\|\widetilde{P}u\|_{SM}^{2}+\|f_{0}\|_{SM}^{2}-\left(K_{\lambda }Vu,Vu\right)_{SM}\] \[\qquad\qquad-\left(((\kappa_{\lambda})_{e}+(\eta_{\lambda})_{e}) Vu,Vu\right)_{\partial SM}.\]
Since \((M,g,\lambda,\mathcal{E})\) is admissible in the sense of Definition 3.4, we have that \((\kappa_{\lambda_{e}}+\eta_{\lambda_{e}})\leq 0\) and \(K_{\lambda}\leq 0\). In other words, each term on the right-hand side of the above equation is individually nonnegative. Since the sum of these terms vanishes, it follows that each individual term must be zero. Consequently, we deduce that \(FVu=0\) and \(f_{0}=0\). Since the dynamical system is nontrapping and \(Vu\) is constant along the \(\lambda\)-geodesic flow, this implies that \(Vu=0\) by the boundary condition \(u|_{\pi^{-1}\mathcal{E}}=0\). In conclusion, \(u\) is independent of the vertical variable \(v\). Hence, we have
\[f=f_{1}+f_{-1}=F(-u)=(X+\lambda V)(-u)=-Xu.\]
Next, since \(u\) is independent of the vertical variable, it descends to the base: define \(h:M\to\mathbb{R}\) by \(h(x):=-u(x,v)\) for any \(v\in S_{x}M\), so that \(\pi^{*}h=-u\), and it holds that \(dh=\alpha\). Since \(\alpha\) is an exact \(1\)-form with \(C^{2}\) coefficients and \(h\) itself is continuous, we may conclude that \(h\in C^{3}(M)\).
## Appendix A Geometry of twisted geodesic flows on surfaces
In this appendix, for the sake of completeness, we develop some basic theory for the twisted geodesic flows. Most of these properties and lemmas are used in the article. This includes the Pestov energy identities with boundary terms, some properties of \(\lambda\)-Jacobi fields, the time-reversed flows (called the dual flows), and a discussion on different notions of curvature and convexity. We remark that our discussion complements those of [1, 2, 3]. In particular, our Proposition 2.1 can be seen as a special case of [1, Theorem 2.3], using a slightly different approach and notation. We also remark that \(\lambda\)-Jacobi fields on surfaces were already introduced and studied in the context of \(\lambda\)-conjugate points in [1, 3]. However, Lemma A.1, its implications, and the growth estimate of Lemma 2.2 do not appear in these works. On the other hand, the dual flow of Section 2.4 is introduced and applied only in our work.
### Proof of Proposition 2.1
We first assume that \(u\in C^{\infty}(SM)\). Then the proof of the proposition for \(C^{2}(SM)\) follows by the density of \(C^{\infty}(SM)\) in \(C^{2}(SM)\). Using the
commutator formulas (2.7), for any smooth function \(u:SM\to\mathbb{R}\) one has
(A.1) \[-2(X_{\perp}u\cdot V(X+\lambda V)u)\] \[\qquad\qquad=((X+\lambda V)u)^{2}+(X_{\perp}u)^{2}-(K+X_{\perp}( \lambda)+\lambda^{2})(Vu)^{2}\] \[\qquad\qquad\qquad\qquad\qquad-(X+\lambda V+V(\lambda))(X_{\perp }u\cdot Vu)+X_{\perp}((X+\lambda V)u\cdot Vu)\] \[\qquad\qquad\qquad\qquad\qquad\qquad-V((X+\lambda V)u\cdot X_{ \perp}u).\]
See [10, Lemma 3.1] for the details. Integrating the identity (A.1) over \(SM\), we obtain
\[-2\int_{SM}(X_{\perp}u\cdot V(X+\lambda V)u)d\Sigma^{3}\\ =\int_{SM}((X+\lambda V)u)^{2}d\Sigma^{3}+\int_{SM}(X_{\perp}u)^{2 }d\Sigma^{3}\\ -\int_{SM}(K+X_{\perp}(\lambda)+\lambda^{2})(Vu)^{2}d\Sigma^{3}- \int_{SM}(X+\lambda V+V(\lambda))(X_{\perp}u\cdot Vu)d\Sigma^{3}\\ +\int_{SM}X_{\perp}((X+\lambda V)u\cdot Vu)d\Sigma^{3}-\int_{SM}V ((X+\lambda V)u\cdot X_{\perp}u)d\Sigma^{3}.\]
Using the integration by parts formulas (2.2), we have
\[\int_{SM}(X+\lambda V+V(\lambda))(X_{\perp}u\cdot Vu)d\Sigma^{3} =-\int_{\partial SM}\langle v,\nu\rangle(X_{\perp}u\cdot Vu)d \Sigma^{2},\] \[\int_{SM}X_{\perp}((X+\lambda V)u\cdot Vu)d\Sigma^{3} =-\int_{\partial SM}\langle v_{\perp},\nu\rangle((X+\lambda V)u \cdot Vu)d\Sigma^{2},\] \[\int_{SM}V((X+\lambda V)u\cdot X_{\perp}u)d\Sigma^{3} =0.\]
This implies
(A.2) \[-2\int_{SM}(X_{\perp}u)\cdot V(X+\lambda V)ud\Sigma^{3}\\ =\int_{SM}((X+\lambda V)u)^{2}d\Sigma^{3}+\int_{SM}(X_{\perp}u)^{ 2}d\Sigma^{3}-\int_{SM}(K+X_{\perp}(\lambda)+\lambda^{2})(Vu)^{2}d\Sigma^{3}\\ +\int_{\partial SM}\langle v,\nu\rangle(X_{\perp}u\cdot Vu)d \Sigma^{2}-\int_{\partial SM}\langle v_{\perp},\nu\rangle((X+\lambda V)u\cdot Vu )d\Sigma^{2}.\]
Applying the integrating by parts formula once again, we have
\[\int_{SM}(X+\lambda V)(V(\lambda)(Vu)^{2})d\Sigma^{3}\\ =-\int_{SM}(V(\lambda))^{2}(Vu)^{2}d\Sigma^{3}-\int_{\partial SM }\langle v,\nu\rangle(V(\lambda))(Vu)^{2}d\Sigma^{2}.\]
An identity similar to [10, p. 538] can be obtained using the commutator relations as follows
\[((X+\lambda V)Vu)^{2}=(V(X+\lambda V)u)^{2}+(X_{\perp}u)^{2}-(V( \lambda))^{2}(Vu)^{2}+2V(X+\lambda V)u\cdot X_{\perp}u\\ -(X+\lambda V)((V(\lambda))(Vu)^{2})+(Vu)^{2}(X+\lambda V)(V( \lambda)).\]
We now integrate the above equation over \(SM\) to get
(A.3) \[-2\int_{SM}X_{\perp}u\cdot V(X+\lambda V)ud\Sigma^{3}\] \[\qquad=-\int_{SM}((X+\lambda V)Vu)^{2}d\Sigma^{3}+\int_{SM}(V(X+ \lambda V)u)^{2}d\Sigma^{3}\quad+\int_{SM}(X_{\perp}u)^{2}d\Sigma^{3}\] \[\qquad\qquad+\int_{SM}((X+\lambda V)(V(\lambda))(Vu)^{2}d\Sigma^{ 3}+\int_{\partial SM}\langle v,\nu\rangle(V(\lambda))(Vu)^{2}d\Sigma^{2}.\]
Notice that the identity above is a generalisation of [11, eq.(14)] with the boundary term. Combining (A.2) with (A.3) yields
\[\int_{SM}((X+\lambda V)Vu)^{2}d\Sigma^{3}-\int_{SM}(K+X_{\perp}( \lambda)+\lambda^{2}+(X+\lambda V)V(\lambda))(Vu)^{2}d\Sigma^{3}\] \[\qquad=\int_{SM}(V(X+\lambda V)u)^{2}d\Sigma^{3}-\int_{SM}((X+ \lambda V)u)^{2}d\Sigma^{3}+\int_{\partial SM}\langle v,\nu\rangle(V(\lambda) )(Vu)^{2}d\Sigma^{2}\] \[\qquad\qquad+\int_{\partial SM}\langle v_{\perp},\nu\rangle((X+ \lambda V)u\cdot Vu)d\Sigma^{2}-\int_{\partial SM}\langle v,\nu\rangle(X_{ \perp}u\cdot Vu)d\Sigma^{2}.\]
Hence, we have
\[\|Pu\|_{SM}^{2} =\|\widetilde{P}u\|_{SM}^{2}+\|Fu\|_{SM}^{2}-(K_{\lambda}Vu,Vu)_{ SM}-(\langle v_{\perp},\nu\rangle Fu,Vu)_{\partial SM}\] \[\qquad-(\langle v,\nu\rangle(V(\lambda))Vu,Vu)_{\partial SM}+( \langle v,\nu\rangle(X_{\perp}u,Vu))_{\partial SM}.\]
This is precisely the assertion of the proposition.
### Jacobi fields for \(\lambda\)-geodesics
Jacobi fields are understood by means of a variation of a family of geodesics (see e.g. [10, Chapter 10] or [14, Section 3.7.1]). In this subsection, we provide a detailed exposition of the Jacobi field for \(\lambda\)-geodesic flows. Consider a smooth one-parameter families of \(\lambda\)-geodesics \((\gamma_{s})_{s\in(-\varepsilon,\varepsilon)}\). We say that the family \((\gamma_{s})_{s\in(-\varepsilon,\varepsilon)}\) is a variation of \(\gamma\) through \(\lambda\)-geodesics if each \(\gamma_{s}:[a,b]\to M\) is a \(\lambda\)-geodesic and \(\gamma_{0}=\gamma\). We denote by \(D_{t}=\nabla_{\dot{\gamma}(t)}\) the covariant derivative along the curve \(\gamma(t)\).
It is well-known that the variation field of a geodesic satisfies the Jacobi equation, as demonstrated in [14, Lemma 3.7.2] and [10, Theorem 10.1]. Furthermore, the magnetic Jacobi fields are known to satisfy the equation (A.4) for \(\lambda\in C^{\infty}(M)\) (cf. [11, eq. (A.7)]). In our next lemma, we deduce the \(\lambda\)-Jacobi equation in the case of \(\lambda\in C^{\infty}(SM)\).
**Lemma A.1** (\(\lambda\)-Jacobi equation).: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). Let \(\gamma\) be a \(\lambda\)-geodesic segment in \(M\) and \(\gamma_{s}\) be a variation of \(\gamma\). Then the variation field \(J(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}\) of a variation through \(\lambda\)-geodesics satisfies the \(\lambda\)-Jacobi equation_
(A.4) \[D_{t}^{2}J(t)=\langle(J(t),\dot{J}(t)),\nabla_{SM}\lambda\rangle_{G}\;i\dot{ \gamma}(t)+\lambda(\gamma(t),\dot{\gamma}(t))\nabla_{J}(i\dot{\gamma})+R(\dot{ \gamma}(t),J(t))\dot{\gamma}(t)\]
_where \(R\) is the Riemann curvature tensor of \((M,g)\) and \(G\) is the Sasaki metric on \(SM\)._
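For instance, when \(\lambda\in C^{\infty}(M)\) the vertical component of \(\nabla_{SM}\lambda\) vanishes and the first term reduces to \(\langle J(t),\nabla\lambda(\gamma(t))\rangle\,i\dot{\gamma}(t)\), so that (A.4) specializes to the magnetic Jacobi equation of [11, eq. (A.7)]:

\[D_{t}^{2}J(t)=\langle J(t),\nabla\lambda(\gamma(t))\rangle\,i\dot{\gamma}(t)+\lambda(\gamma(t))\nabla_{J}(i\dot{\gamma})+R(\dot{\gamma}(t),J(t))\dot{\gamma}(t).\]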
**Remark A.2**.: _In the article, we will only consider variations of a unit speed \(\lambda\)-geodesic. Therefore, Jacobi fields have only one tangential component as the scaling of speed is not permissible under this assumption._
Proof.: Let \(\Gamma(s,t)=\gamma_{s}(t)\) be a smooth variation of \(\lambda\)-geodesics on Riemannian surface \(M\). Note that the family \(\gamma_{s}(t)\) of \(\lambda\)-geodesics satisfies
(A.5) \[D_{t}\partial_{t}\gamma_{s}(t)=\lambda i\dot{\gamma}_{s}.\]
Define \(J(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}\). We first compute \(D_{t}^{2}J(t)\). We denote by \(D_{s}=\nabla_{\partial_{s}\gamma_{s}}\) the covariant derivative along \(\partial_{s}\gamma_{s}\). According to [10, Lemma 6.2], we have the torsion-free property of the connection \(\nabla\),
(A.6) \[D_{t}\partial_{s}\gamma_{s}(t)=D_{s}\partial_{t}\gamma_{s}(t).\]
Recall that the Riemann curvature tensor \(R\) satisfies
(A.7) \[D_{t}D_{s}W-D_{s}D_{t}W=R\left(\partial_{t}\gamma_{s},\partial_{s}\gamma_{s} \right)W\]
for any vector field \(W\) along \(\gamma_{s}\), see [10, Proposition 7.5]. Combining (A.6), (A.7) and (A.5), we have
(A.8) \[D_{t}^{2}J(t) =\left.D_{t}D_{t}\partial_{s}\gamma_{s}(t)\right|_{s=0}=\left.D_{ t}D_{s}\partial_{t}\gamma_{s}(t)\right|_{s=0}\] \[=\left.D_{s}D_{t}\partial_{t}\gamma_{s}(t)\right|_{s=0}+R(\dot{ \gamma}(t),J(t))\dot{\gamma}(t)\] \[=D_{s}(\lambda(\gamma_{s}(t),\dot{\gamma}_{s}(t))i\dot{\gamma}_{ s}(t))|_{s=0}+R(\dot{\gamma}(t),J(t))\dot{\gamma}(t).\]
Applying the product rule for the covariant derivative along a curve for a scalar function \(f\) and a vector field \(V\), we have
\[D_{s}(fV)=\left(\partial_{s}f\right)V+fD_{s}V,\]
see for instance, [10, Theorem 4.24]. This implies
(A.9) \[D_{s}(\lambda(\gamma_{s}(t),\dot{\gamma}_{s}(t))i\dot{\gamma}_{s}(t))=\partial _{s}(\lambda(\gamma_{s}(t),\dot{\gamma}_{s}(t)))i\dot{\gamma}_{s}(t)+\lambda( \gamma_{s}(t),\dot{\gamma}_{s}(t))D_{s}(i\dot{\gamma}_{s}(t)).\]
Since \(\lambda\in C^{\infty}(SM)\), we see that
(A.10) \[\partial_{s}\left(\lambda\left(\gamma_{s}(t),\dot{\gamma}_{s}(t) \right)\right)|_{s=0} =\langle(\partial_{s}\gamma_{s}(t),D_{s}\dot{\gamma}_{s}(t))|_{s= 0},\nabla_{SM}\lambda\left(\gamma_{s}(t),\dot{\gamma}_{s}(t)\right)|_{s=0} \rangle_{G}\] \[=\langle(J(t),D_{t}J(t)),\nabla_{SM}\lambda(\gamma(t),\dot{ \gamma}(t))\rangle_{G}.\]
Combining (A.8), (A.9) and (A.10), we have
\[D_{t}^{2}J(t) =\langle(J(t),D_{t}J(t))\,,\nabla_{SM}\lambda(\gamma(t),\dot{ \gamma}(t))\rangle_{G}\,i\dot{\gamma}(t)+\lambda(\gamma(t),\dot{\gamma}(t)) \nabla_{J}(i\dot{\gamma})\] \[\qquad+R(\dot{\gamma}(t),J(t))\dot{\gamma}(t).\]
We thus have proved the lemma.
Recall from [11, p. 82], the connection map \(\mathfrak{K}:T_{(x,v)}SM\to T_{x}M\) is defined as follows: For a given \(\xi\in T_{(x,v)}SM\), consider a curve \(Z:(-\epsilon,\epsilon)\to SM\) with \(Z(0)=(x,v)\) and \(\dot{Z}(0)=\xi\). We can represent the curve \(Z(s)\) as a pair \((\alpha(s),W(s))\). Then the connection map acts on \(\xi\) as \(\mathfrak{K}\xi:=\left.D_{s}W\right|_{s=0}\), where \(D_{s}\) denotes the covariant derivative of \(W\) along \(\alpha\). From [11, eq. (3.12)], any \(\xi\in T_{(x,v)}SM\) can be written as
(A.11) \[\xi=(d\pi(\xi),\mathfrak{K}\xi)\]
where \(d\pi(\xi)\) lies in the horizontal subbundle and \(\mathfrak{K}\xi\) resides in the vertical subbundle.
**Remark A.3**.: _To simplify, we need to decompose \((J(t),D_{t}J(t))\) into horizontal and vertical subbundle in order to use the Sasaki metric property. Now,_
\[J(t)=\left.\partial_{s}\gamma_{s}(t)\right|_{s=0}=\left.\partial_{s}\pi(\phi_{t}(\gamma_{s}(0)))\right|_{s=0}=\left.d\pi(\phi_{t}(\gamma_{s}(0)))\,\partial_{s}\phi_{t}(\gamma_{s}(0))\right|_{s=0}\]
_and_
(A.12) \[D_{t}J(t)=\left.D_{t}\partial_{s}\gamma_{s}(t)\right|_{s=0}=\left.D_{s} \partial_{t}\gamma_{s}(t)\right|_{s=0}=\left.D_{s}\dot{\gamma}_{s}(t)\right|_{ s=0}.\]
_Consider the map \(Z_{t}(s)=\phi_{t}(\gamma_{s}(0))=(\gamma_{s}(t),\dot{\gamma}_{s}(t))\). By the definition of the connection map \(\mathfrak{K}\), we have that \(\mathfrak{K}(\partial_{s}\phi_{t}(\gamma_{s}(0))|_{s=0})=D_{s}\dot{\gamma}_{s}( t)|_{s=0}\). Thus, we can deduce from (A.12) that \(D_{t}J(t)=\mathfrak{K}(\partial_{s}\phi_{t}(\gamma_{s}(0))|_{s=0})\). According to [11, eq. (3.12)], any vector \(\xi\in T_{(x,v)}SM\) can be decomposed as \(\xi=(\xi_{H},\xi_{V})\), where \(\xi_{H}=d\pi(\xi)\) represents
the horizontal component and \(\xi_{V}=\mathfrak{K}\xi\) denotes the vertical component. Now by taking \(\xi=\partial_{s}\phi_{t}(\gamma_{s}(0))|_{s=0}\), we can write_
\[\left.\partial_{s}\phi_{t}(\gamma_{s}(0))\right|_{s=0}=(J(t),D_{t}J(t))\]
_where \(J(t)=(\partial_{s}\phi_{t}(\gamma_{s}(0))|_{s=0})_{H}\) and \(D_{t}J(t)=(\partial_{s}\phi_{t}(\gamma_{s}(0))|_{s=0})_{V}\). Additionally, the gradient of \(\lambda\) on \(SM\), denoted by \(\nabla_{SM}\lambda\), can be decomposed into its horizontal and vertical components, expressed as \(((\nabla_{SM}\lambda)_{H},(\nabla_{SM}\lambda)_{V})\). By the definition of the Sasaki metric (see for instance [13, eq. (3.14)]), we have_
\[\left\langle(J(t),\dot{J}(t)),\nabla_{SM}(\lambda(\gamma(t),\dot{\gamma}(t)))\right\rangle_{G}=\left\langle J(t),(\nabla_{SM}\lambda)_{H}\right\rangle_{g}+\left\langle D_{t}J(t),(\nabla_{SM}\lambda)_{V}\right\rangle_{g}.\]
**Definition A.4** (\(\lambda\)-exponential map).: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). For each point \(x\in M\), let us define the maximal domain \(D_{x}\) as follows:_
\[D_{x}:=\left\{tv\in T_{x}M;v\in S_{x}M\text{ and }t\in[0,\tau(x,v)]\right\}.\]
_The \(\lambda\)-exponential map \(\exp_{x}^{\lambda}:D_{x}\to M\) is given by \(\exp_{x}^{\lambda}(tv)=\gamma_{x,v}(t)\), where \(\gamma\) is a \(\lambda\)-geodesic._
**Remark A.5**.: _We set \(\tau(x,v):=\infty\) if \(\partial M=\emptyset\) and, in general, if the flow \(\phi_{t}(x,v)\) does not reach the boundary._
**Remark A.6**.: _Notice that \(\exp_{x}^{\lambda}(tv)=\pi\circ\phi_{t}(x,v)\) for \(t\geq 0\), where \(\pi\) is the projection \(\pi(x,v)=x\). Hence, the \(\lambda\)-exponential map is smooth on \(D_{x}\setminus 0\)._
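For instance, when \(\lambda\equiv 0\) the defining equation \(D_{t}\dot{\gamma}=\lambda\,i\dot{\gamma}\) (see (A.5)) reduces to \(D_{t}\dot{\gamma}=0\), so that

\[\exp_{x}^{0}(tv)=\gamma_{x,v}(t)=\exp_{x}(tv),\]

and the \(\lambda\)-exponential map coincides with the restriction of the usual Riemannian exponential map to \(D_{x}\).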
We denote \(\lambda(t):=\lambda(\gamma(t),\dot{\gamma}(t))\). The following lemma is a generalisation of [14, Proposition 10.2] in the case of \(\lambda\)-Jacobi field.
**Lemma A.7** (Existence and uniqueness of \(\lambda\)-Jacobi fields).: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). Suppose \(I\subset\mathbb{R}\) is an interval and \(\gamma:I\to M\) is a \(\lambda\)-geodesic with \(\gamma(t_{0})=x\) for some \(t_{0}\in I\). For any pair \((v,w)\in S_{x}M\times T_{x}M\), there exists a unique \(\lambda\)-Jacobi field \(J\) along \(\gamma\) satisfying the initial conditions_
\[J(t_{0})=v\text{ and }\ D_{t}J(t_{0})=w.\]
The following lemma is a converse of Lemma A.1, which tells that any \(\lambda\)-Jacobi field is the variation field of a variation of \(\lambda\)-geodesic flows. A similar result has been proved in [14, Proposition 10.4] in the case of \(\lambda=0\).
**Lemma A.8**.: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). Suppose \(I\subset\mathbb{R}\) is a compact interval and \(\gamma:I\to M\) is a \(\lambda\)-geodesic. Then every \(\lambda\)-Jacobi field along \(\gamma\) corresponds to the variation field of a unit speed \(\lambda\)-geodesic variation of \(\gamma\)._
We omit the proofs of Lemmas A.7 and A.8. One may prove them following the standard Riemannian proofs with obvious modifications.
We now introduce the notion of Lie bracket, following [13, Remark 4.13] and we require this in our later analysis. Let \(Y\) and \(Z\) be two vector fields on a manifold \(N\). We will denote by \(\psi_{t}\) the local flow of the vector field \(Y\). The Lie bracket of \(Y\) and \(Z\), denoted by \([Y,Z]\), can then be defined by setting
(A.13) \[[Y,Z](x)=\left.\frac{d}{dt}\right|_{t=0}d\psi_{-t}\left(Z\left(\psi_{t}x\right) \right).\]
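Equivalently, in local coordinates the definition (A.13) recovers the familiar expression

\[[Y,Z]^{k}=Y^{j}\partial_{j}Z^{k}-Z^{j}\partial_{j}Y^{k},\]

so that the Lie bracket measures the failure of the flows of \(Y\) and \(Z\) to commute.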
Let \((x,v)\in SM\), \(\xi\in T_{(x,v)}SM\) and \(\phi_{t}\) be a \(\lambda\)-geodesic flow. Similarly to [13, p. 38], we write \(F(t):=F(\phi_{t}(x,v))\), \(X_{\perp}(t):=X_{\perp}(\phi_{t}(x,v))\) and \(V(t):=V(\phi_{t}(x,v))\) with a slight
abuse of notation. We can express \(\xi\) as \(\xi=aF+bX_{\perp}+cV\), where \(a,b,c\in\mathbb{R}\) and \(F,V,X_{\perp}\) form a basis for \(T_{(x,v)}SM\), see [MP, eq. (4.2.2)]. Furthermore, we can find smooth functions \(a(t),b(t),c(t)\) satisfying
(A.14) \[d\phi_{t}(\xi)=a(t)F(t)+b(t)X_{\perp}(t)+c(t)V(t),\]
with initial conditions \(a(0)=a\), \(b(0)=b\), and \(c(0)=c\). We assume here that (A.14) is defined for all \(t\in I\), where the interval \(I\) might be unbounded. When \(M\) is a closed surface, then one may take \(I=\mathbb{R}\) as the \(\lambda\)-geodesic flow is defined for all times.
The coefficients \(a(t),b(t),c(t)\) satisfy a certain system of ODEs (see e.g. [MP, Proposition 4.14]). In the next lemma, we will prove a similar result in the case of \(\lambda\)-geodesics corresponding to the \(\lambda\)-Jacobi equation. See also [AD18, MP22, Zha23] for an equivalent result. In comparison to [MP22, Section 3.2], we remark that our definition of \(K\) is different by including the term \(FV(\lambda)\) in \(K\), \(H=-X_{\perp}\), and the formulas look a bit different because of these conventions.
**Lemma A.9**.: _Let \((M,g)\) be a closed surface and \(\lambda\in C^{\infty}(SM)\). If functions \(a(t),b(t),\) and \(c(t)\) satisfy the equation (A.14) with the initial conditions \(a(0)=a\), \(b(0)=b\), and \(c(0)=c\), then these functions satisfy the following set of differential equations_
(A.15) \[\begin{split}\dot{a}&=-\lambda b,\\ \dot{b}&=-c,\\ \dot{c}-cV(\lambda)&=(K-FV(\lambda))b.\end{split}\]
Proof.: Define \(\phi_{-t}:=r\circ\phi_{t}^{-}\circ r\), where \(\phi_{t}^{-}\) is the dual \(\lambda\)-geodesic flow and \(r\) is the reversion map, i.e., \(r(x,v)=(x,-v)\). Since the composition of smooth maps is smooth and \(\phi_{-t}\circ\phi_{t}=\mathrm{Id}\) in \(SM\), we have that \(\phi_{-t}\) is a smooth map. Therefore, for each \(t\in[0,\infty)\), \(\phi_{t}:SM\to SM\) is a diffeomorphism. This implies that the differential \(d\phi_{t}\) is invertible for every \(t\in[0,\infty)\). Consequently, the differential \(d\phi_{-t}\) is invertible for all \(t\in[0,\infty)\) and hence \(d\phi_{t}^{-1}=d\phi_{-t}\). Similar to [MP, Proof of Proposition 4.14], we apply \(d\phi_{-t}\) to both sides of (A.14). Then we obtain
\[\xi=a(t)d\phi_{-t}(F(t))+b(t)d\phi_{-t}(X_{\perp}(t))+c(t)d\phi_{-t}(V(t)).\]
Differentiating both sides of the above identity with respect to \(t\) and applying the derivative formula for Lie brackets (A.13), we have
\[0=\frac{d}{dt}(\xi)= \dot{a}(t)d\phi_{-t}(F(t))+a(t)d\phi_{-t}([F,F](t))+\dot{b}(t)d \phi_{-t}(X_{\perp}(t))\] \[+b(t)d\phi_{-t}([F,X_{\perp}](t))+\dot{c}(t)d\phi_{-t}(V(t))+c(t) d\phi_{-t}([F,V](t)).\]
Applying the commutator formulas, linearity and grouping like terms, we obtain
\[0=d\phi_{-t}\{(\dot{a}(t)+\lambda b(t))F(t)+(\dot{b}(t)+c(t))X_{ \perp}(t)\] \[\qquad+(\dot{c}(t)-(K(t)+X_{\perp}(\lambda)+\lambda^{2})b(t)-c(t )V(\lambda))V(t)\}.\]
Since \(d\phi_{-t}\) is invertible and \(\{F(t),X_{\perp}(t),V(t)\}\) is a basis of each tangent space \(T_{\phi_{t}(x,v)}SM\), the coefficients of \(F(t),X_{\perp}(t)\) and \(V(t)\) must vanish for all \(t\), and hence the result follows.
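As a sanity check for Lemma A.9, take \(\lambda=0\): the system (A.15) then reduces to \(\dot{a}=0\), \(\dot{b}=-c\) and \(\dot{c}=Kb\), where \(K\) becomes the Gaussian curvature, so that

\[\ddot{b}=-\dot{c}=-Kb,\qquad\text{i.e.}\qquad\ddot{b}+Kb=0,\]

which is the classical Jacobi equation for the normal component of a Jacobi field on a surface.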
**Lemma A.10**.: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). For any \((x,v)\in SM\) with \(\xi\in T_{(x,v)}SM\), the differential of the \(\lambda\)-geodesic flow can be decomposed as_
(A.16) \[d\phi_{t}(\xi)=(J_{\xi}(t),D_{t}J_{\xi}(t))\,,\]
_where \(J_{\xi}\) is the unique Jacobi field along the \(\lambda\)-geodesic \(\pi(\phi_{t}(x,v))\) with the initial condition \((J_{\xi}(0),D_{t}J_{\xi}(0))=\xi\)._
Proof.: Let us consider \(\sigma:(-\epsilon,\epsilon)\to SM\) be a smooth curve such that \(\sigma(0)=(x,v)\) and \(\sigma^{\prime}(0)=\xi\). Now we consider a variation of the \(\lambda\)-geodesic
\[\Gamma(t,s):=\gamma_{\sigma(s)}(t)=\pi(\phi_{t}(\sigma(s))).\]
Now we have
\[J(t) :=\partial_{s}\Gamma(t,s)|_{s=0}=\partial_{s}\pi(\phi_{t}(\sigma( s))|_{s=0}=d\pi(\phi_{t}(\sigma(0)))d\phi_{t}(\sigma(0))\sigma^{\prime}(0)\] \[=d\pi(\phi_{t}(x,v))d\phi_{t}(x,v)\xi.\]
Let \(Z_{t}(s):=\phi_{t}(\sigma(s))=(\gamma_{\sigma(s)}(t),\dot{\gamma}_{\sigma(s)} (t))\). Using the definition of connection map, we have
(A.17) \[\mathfrak{K}(\partial_{s}Z_{t}(s)|_{s=0})=\mathfrak{K}(\partial_{s}\phi_{t}( \sigma(s))|_{s=0})=D_{s}\dot{\gamma}_{\sigma(s)}(t)|_{s=0}.\]
Combining (A.6) with (A.17), we deduce that
\[D_{t}J(t) =D_{t}\partial_{s}\Gamma(t,s)|_{s=0}=D_{s}\partial_{t}\Gamma(t,s )|_{s=0}\] \[=D_{s}\partial_{t}\pi(\phi_{t}(\sigma(s)))|_{s=0}=D_{s}\dot{ \gamma}_{\sigma(s)}(t)|_{s=0}\] \[=\mathfrak{K}(\partial_{s}(\phi_{t}(\sigma(s)))|_{s=0})= \mathfrak{K}d\phi_{t}(\sigma(0))\sigma^{\prime}(0)\] \[=\mathfrak{K}d\phi_{t}(x,v)\xi.\]
From (A.11), we obtain
\[d\phi_{t}(x,v)\xi =(d\pi(\phi_{t}(x,v))d\phi_{t}(x,v)\xi,\mathfrak{K}d\phi_{t}(x,v )\xi)\] \[=(J(t),D_{t}J(t)).\]
This result establishes the lemma.
We will next prove an estimate needed for the proof of our main theorem.
Proof of Lemma 2.2.: If \(M\) is a surface with boundary, then we extend \((M,g)\) into a closed surface \(N\) and extend \(\lambda\) to a smooth function on \(SN\) (cf. [13, Lemma 3.1.8]). Then the \(\lambda\)-geodesic flow is defined for all \(t\in\mathbb{R}\). If (2.14) holds for the closed extension \(N\), then it also holds for \(M\). Therefore we may assume, without loss of generality, that we are in the closed setting.
Let us first define \(Z=(Z_{1},Z_{2},Z_{3})\) such that \(Z_{1}:=a,Z_{2}:=b,Z_{3}:=c\). Then the equation (A.15) can be written as
\[\begin{cases}D_{t}Z_{1}=\dot{a}=-\lambda(\gamma,\dot{\gamma})Z_{2},\\ D_{t}Z_{2}=-Z_{3},\\ D_{t}Z_{3}=V(\lambda)Z_{3}+(K-FV(\lambda))Z_{2}.\end{cases}\]
This is equivalent to
\[D_{t}Z=A_{t}^{\gamma}Z,\]
where
\[A_{t}^{\gamma}=\left[\begin{array}{ccc}0&-\lambda&0\\ 0&0&-1\\ 0&(K-FV(\lambda))&V(\lambda)\end{array}\right]\]
is a bounded linear map for each \(t\). By compactness, there is a positive constant \(C=C(M,g,\lambda)\) such that \(\|A_{t}^{\gamma}\|\leq C/2\) for all geodesics \(\gamma\) and all times \(t\). Thus, we have
\[D_{t}|Z|^{2}=2\,\langle Z,A_{t}^{\gamma}Z\rangle\leq C|Z|^{2}.\]
By Gronwall's inequality (cf. [10, p. 624]), the above implies that \(|Z(t)|^{2}\leq e^{Ct}|Z(0)|^{2}\) for all \(t\geq 0\).
Now if \(J\) is a \(\lambda\)-Jacobi field, then from (A.14) and (A.16), we have \(Z=(J,D_{t}J)\) for some initial data, from which the claim follows.
### Convexity, concavity and signed \(\lambda\)-curvature
Recall that \(\mathrm{I\!I}\) denotes the second fundamental form of the boundary \(\partial M\), and \(\nu(x)\) the inward unit normal vector to \(\partial M\) at \(x\in\partial M\).
**Definition A.11**.: _Let \((M,g)\) be a Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). We say that the boundary \(\partial M\) is strictly \(\lambda\)-convex at a point \(x\in\partial M\) if the following inequality holds:_
\[\mathrm{I\!I}_{x}(v,v)>\langle\lambda(x,v)iv,\nu(x)\rangle\quad\text{ for all }v\in S_{x}(\partial M).\]
_Similarly, we say that the boundary \(\partial M\) is strictly \(\lambda\)-concave at a point \(x\in\partial M\) if the following inequality holds:_
\[\mathrm{I\!I}_{x}(v,v)<\langle\lambda(x,v)iv,\nu(x)\rangle\quad\text{ for all }v\in S_{x}(\partial M).\]
In [14, Lemma 3.1.12], it is demonstrated that any geodesic tangent to the boundary \(\partial M\) stays exterior to the surface \(M\) for both small positive and negative time intervals. Moreover, the surface \(M\) is strictly magnetic convex at \(x\in\partial M\) if \(\mathrm{I\!I}(x,v)>\langle Y_{x}(v),\nu(x)\rangle\) holds for all \(v\in S_{x}(\partial M)\), where \(Y_{x}\) denotes the Lorentz force, see [13, Lemma A.6]. An analogous property will be discussed in the next lemma in the case of a strictly \(\lambda\)-convex boundary, which generalizes both of these results. Before stating the next lemma, let us consider the boundary defining function following [14, Lemma 3.1.10]. Let \((N,g)\) be a closed extension of \((M,g)\). Then there is a function \(\rho\in C^{\infty}(N)\), called a boundary defining function, such that \(\rho(x)=d(x,\partial M)\) near \(\partial M\) in \(M\), and \(M=\{x\in N:\rho\geq 0\},\partial M=\{x\in N:\rho=0\}\), and \(N\setminus M=\{x\in N:\rho<0\}\). Moreover, \(\nabla\rho(x)=\nu(x)\) holds for all \(x\in\partial M\).
**Lemma A.12**.: _Let \((M,g)\) be a compact Riemannian surface with smooth boundary and \(\lambda\in C^{\infty}(SM)\). Suppose that \((N,g)\) is a closed extension of \((M,g)\). Then \(\partial M\) is strictly \(\lambda\)-convex if and only if any \(\lambda\)-geodesic in \(N\) starting from some point \((x,v)\in\partial_{0}SM\) satisfies \(\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)]|_{t=0}<0.\) Furthermore, any \(\lambda\)-geodesic tangent to \(\partial M\) stays outside \(M\) for small positive and negative time intervals. Also, any maximal \(\lambda\)-geodesic going from \(\partial M\) into \(M\) stays in the interior of \(M\) except for its end points._
Proof.: Let \(\rho\) be a boundary defining function such that \(\rho|_{\partial M}=0\), \(\nabla\rho|_{\partial M}=\nu\), \(\rho|_{M}\geq 0\) and \(\rho|_{N\setminus M}<0\), see for instance [14, Lemma 3.1.10]. Let \(v\in S_{x}(\partial M)\) and \(\gamma_{x,v}(t)\) be the \(\lambda\)-geodesic with \(\gamma_{x,v}(0)=x,\dot{\gamma}_{x,v}(0)=v\). Now \(\partial M\) is strictly \(\lambda\)-convex if and only if \(\mathrm{I\!I}(x,v)>\langle\nu,\lambda(x,v)iv\rangle.\) Notice that
\[\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)]\bigg{|}_{t=0} =\left.\frac{d}{dt}\langle\nabla\rho(\gamma_{x,v}(t)),\dot{\gamma }(t)\rangle\right|_{t=0}\] \[=\left.\langle\nabla_{\dot{\gamma}(t)}\nabla\rho(\gamma_{x,v}(t)), \dot{\gamma}(t)\rangle\right|_{t=0}+\left.\langle\nabla\rho(\gamma_{x,v}(t)), \nabla_{\dot{\gamma}(t)}\dot{\gamma}(t)\rangle\right|_{t=0}\] \[=\langle\nabla_{v}\nu(x),v\rangle+\langle\nu,\lambda(x,v)iv\rangle\] \[=-\mathrm{I\!I}(x,v)+\langle\nu,\lambda(x,v)iv\rangle.\]
This implies that \(\partial M\) is strictly \(\lambda\)-convex if and only if \(\left.\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)]\right|_{t=0}<0\). By Taylor's theorem, we have
\[\rho(\gamma_{x,v}(t)) =\rho(\gamma_{x,v}(0))+\left\langle\nabla\rho\left(x,v\right), \dot{\gamma}_{x,v}\left(0\right)\right\rangle t+\frac{1}{2}\left.\frac{d^{2}}{ dt^{2}}[\rho\circ\gamma_{x,v}(t)]\right|_{t=0}t^{2}+O\left(t^{3}\right)\] \[=\rho(\gamma_{x,v}(0))+\left\langle\nu\left(x\right),v\right\rangle t +\frac{1}{2}\left.\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)]\right|_{t=0}t ^{2}+O\left(t^{3}\right)\] \[=\frac{1}{2}\left.\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)] \right|_{t=0}t^{2}+O\left(t^{3}\right),\]
which is negative for small \(|t|\) since \(\left.\frac{d^{2}}{dt^{2}}[\rho\circ\gamma_{x,v}(t)]\right|_{t=0}<0\). This shows that for small positive and negative times \(\rho(\gamma(t))<0\), i.e., \(\gamma_{x,v}(t)\in N\setminus M\).
In the next subsection, we will see how the signed \(\lambda\)-curvature of the \(\lambda\)-geodesic flow is related to that of its dual flow.
### Curvature of the dual \(\lambda\)-geodesic flow
Let us define the _reversion map_\(r:SM\to SM\) by \(r(x,v)=(x,-v)\). Let \(u\in C^{\infty}(SM)\), and consider its extension as a homogeneous function of degree zero in \(C^{\infty}(TM\setminus 0)\), denoted by \(u(x,y/|y|)\). We define the horizontal and vertical derivatives as follows:
\[\nabla_{x_{i}}u =\left.\frac{\partial}{\partial x_{i}}(u(x,y/|y|))\right|_{SM}- \Gamma^{l}_{ik}v^{k}\partial_{v_{l}}u,\] \[\partial_{v_{i}}u =\left.\frac{\partial}{\partial y_{i}}(u(x,y/|y|))\right|_{SM}.\]
Here, \(\Gamma^{l}_{ik}\) denotes the Christoffel symbols of the metric \(g\) (cf. [16, p. 386]).
**Lemma A.13**.: _Let \((M,g)\) be a Riemannian surface with or without boundary. For any function \(f\in C^{1}(SM)\), the following formulas hold:_
1. \(V(f\circ r)(x,v)=((Vf)\circ r)(x,v)\)_._
2. \(X(f\circ r)(x,v)=-((Xf)\circ r)(x,v)\)_._
3. \(X_{\perp}(f\circ r)(x,v)=-((X_{\perp}f)\circ r)(x,v)\)_._
We omit the proofs of the formulas in Lemma A.13; they are straightforward computations.
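Readers who wish to check the formulas concretely can do so in the flat case \(M=\mathbb{R}^{2}\), where \(X=\cos\theta\,\partial_{x}+\sin\theta\,\partial_{y}\), \(X_{\perp}=-\sin\theta\,\partial_{x}+\cos\theta\,\partial_{y}\) (up to a sign convention for \(X_{\perp}\)), \(V=\partial_{\theta}\), and the reversion acts by \(\theta\mapsto\theta+\pi\). The following sketch spot-checks all three identities on a sample smooth function; the particular test function is an arbitrary choice.

```python
import sympy as sp

x, y, th = sp.symbols('x y theta', real=True)

# Flat-plane vector fields on SM ~ R^2 x S^1, with v = (cos th, sin th):
X  = lambda u: sp.cos(th)*sp.diff(u, x) + sp.sin(th)*sp.diff(u, y)   # geodesic vector field
Xp = lambda u: -sp.sin(th)*sp.diff(u, x) + sp.cos(th)*sp.diff(u, y)  # X_perp
V  = lambda u: sp.diff(u, th)                                        # vertical vector field
r  = lambda u: u.subs(th, th + sp.pi)                                # reversion r(x, v) = (x, -v)

# Spot-check Lemma A.13 on a sample smooth function
u = sp.exp(x*sp.cos(th))*sp.sin(y) + x*y*sp.cos(3*th)
assert sp.simplify(V(r(u)) - r(V(u))) == 0    # V(f o r)      =  (V f) o r
assert sp.simplify(X(r(u)) + r(X(u))) == 0    # X(f o r)      = -(X f) o r
assert sp.simplify(Xp(r(u)) + r(Xp(u))) == 0  # X_perp(f o r) = -(X_perp f) o r
print("Lemma A.13 identities hold for the sample function.")
```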
Note that \(\lambda^{-}=-\lambda\circ r\) by the definitions. We define \(K_{\lambda^{-}}(x,v)\) as the Gaussian \(\lambda^{-}\) curvature corresponding to the \(\lambda^{-}\)-geodesic flow. We prove in the next corollary that the global negative curvature assumption for the \(\lambda\)-geodesic flow is equivalent to the same property of the \(\lambda^{-}\)-geodesic flow.
**Corollary A.14**.: _Let \((M,g)\) be a Riemannian surface with or without boundary and \(\lambda\in C^{\infty}(SM)\). Then the (Gaussian) \(\lambda\)-curvature of the dual system satisfies_
\[K_{\lambda^{-}}(x,v)=K_{\lambda}(x,-v),\qquad\text{ for all }(x,v)\in SM.\]
_Additionally, if \(\partial M\neq\emptyset\), then the signed \(\lambda\)-curvature of the dual system satisfies_
\[\kappa_{\lambda^{-}}(x,v)=\kappa_{\lambda}(x,-v),\qquad\text{ for all }(x,v)\in\partial SM.\]
Proof.: Let \((x,v)\in SM\). We may compute using Lemma A.13 that
\[K_{\lambda^{-}}(x,v) =K(x)+X_{\perp}(\lambda^{-})(x,v)+(\lambda^{-})^{2}(x,v)+(X+\lambda^{-}V)V(\lambda^{-})(x,v)\] \[=K(x)-X_{\perp}(\lambda\circ r)(x,v)+\lambda^{2}(x,-v)-(X-(\lambda\circ r)V)V(\lambda\circ r)(x,v)\] \[=K(x)+X_{\perp}(\lambda)(x,-v)+\lambda^{2}(x,-v)-(X-\lambda(x,-v)V)(V(\lambda)\circ r)(x,v)\] \[=K(x)+X_{\perp}(\lambda)(x,-v)+\lambda^{2}(x,-v)-X(V(\lambda)\circ r)(x,v)+\lambda(x,-v)(VV(\lambda))(x,-v)\] \[=K(x)+X_{\perp}(\lambda)(x,-v)+\lambda^{2}(x,-v)+X(V(\lambda))(x,-v)+\lambda(x,-v)VV(\lambda)(x,-v)\] \[=K(x)+X_{\perp}(\lambda)(x,-v)+\lambda^{2}(x,-v)+(X+\lambda(x,-v)V)V(\lambda)(x,-v)\] \[=K_{\lambda}(x,-v).\]
By Lemma A.13, we obtain
\[\kappa_{\lambda^{-}}(x,v) =\kappa(x)-\langle\nu(x),\lambda^{-}(x,v)iv\rangle\] \[=\kappa(x)+\langle\nu(x),\lambda(x,-v)iv\rangle\] \[=\kappa(x)-\langle\nu(x),\lambda(x,-v)i(-v)\rangle\] \[=\kappa_{\lambda}(x,-v).\qed\]
**Remark A.15**.: _In a similar manner we can see that \(\eta_{\lambda^{-}}(x,v)=\eta_{\lambda}(x,-v)\)._
\[\eta_{\lambda^{-}}(x,v)=\langle V(\lambda^{-})(x,v)v,\nu\rangle=-\langle V( \lambda)(x,-v)v,\nu\rangle=\eta_{\lambda}(x,-v).\]
### Proof of Lemma 2.4
(i) We have
\[\kappa_{\lambda}(x,v) =\kappa(x)-\langle\nu(x),\lambda(x,v)iv\rangle,\] \[\kappa_{\lambda}\circ\rho(x,v) =\kappa(x)-\langle\nu(x),\lambda\circ\rho(x,v)i(v-2\langle v,\nu( x)\rangle\nu(x))\rangle\] \[=\kappa(x)-\langle\nu(x),\lambda\circ\rho(x,v)(iv-2\langle v,\nu (x)\rangle i\nu(x))\rangle\] \[=\kappa(x)-\langle\nu(x),\lambda\circ\rho(x,v)iv\rangle\] \[=\kappa_{\lambda\circ\rho}(x,v).\]
(ii) Let us compute
\[\eta_{\lambda}(x,v) =\langle V(\lambda)(x,v)v,\nu\rangle,\] \[\eta_{\lambda}\circ\rho(x,v) =\langle(V(\lambda)\circ\rho(x,v))(v-2\langle v,\nu\rangle\nu),\nu\rangle\] \[=\langle-(V(\lambda\circ\rho)(x,v))(v-2\langle v,\nu\rangle\nu),\nu\rangle\] \[=\langle V(\lambda\circ\rho)(x,v)v,\nu\rangle\] \[=\eta_{\lambda\circ\rho}(x,v).\]
(iii) By the definition of even function, we have
\[(\kappa_{\lambda})_{e} =\frac{\kappa_{\lambda}+\kappa_{\lambda}\circ\rho}{2}\] \[=\frac{\kappa_{\lambda}+\kappa_{\lambda\circ\rho}}{2}\] \[=\kappa-\left\langle\nu(x),\frac{(\lambda+\lambda\circ\rho)(x,v) }{2}iv\right\rangle\] \[=\kappa_{\lambda_{e}}.\]
Similarly, one could get \(\eta_{\lambda_{e}}=(\eta_{\lambda})_{e}\).
(iv) This part directly follows from (iii). In particular, we have
\[\rho^{*}(\kappa_{\lambda_{e}}) =\rho^{*}\left(\kappa+\langle v_{\perp},\nu\rangle\left(\frac{\lambda(x,v)+\lambda\circ\rho(x,v)}{2}\right)\right)\] \[=\kappa+\langle v_{\perp}-2\langle v,\nu\rangle\nu_{\perp},\nu\rangle\left(\frac{\lambda\circ\rho(x,v)+\lambda\circ\rho\circ\rho(x,v)}{2}\right)\] \[=\kappa+\langle v_{\perp},\nu\rangle\left(\frac{\lambda\circ\rho(x,v)+\lambda(x,v)}{2}\right)\] \[=\kappa_{\lambda_{e}}\]
and
\[\rho^{*}\left(\eta_{\lambda_{e}}(x,v)\right) =\rho^{*}\left(\langle v,\nu\rangle V\left(\lambda_{e}\right)(x,v )\right)=\left(\langle v-2\langle v,\nu\rangle\nu,\nu\rangle\rho^{*}V\left( \lambda_{e}\right)(x,v)\right)\] \[=-\langle v,\nu\rangle\rho^{*}V\left(\lambda_{e}\right)(x,v)= \langle v,\nu\rangle V\rho^{*}\left(\lambda_{e}\right)(x,v)\] \[=\langle v,\nu\rangle V\left(\lambda_{e}\right)(x,v)=\eta_{ \lambda_{e}}(x,v).\qed\]
|
2309.17081 | Beyond the Cavity: Molecular Strong Coupling using an Open Fabry-Perot
Cavity | The coherent strong coupling of molecules with confined light fields to
create polaritons - part matter, part light - is opening exciting opportunities
ranging from extended exciton transport and inter-molecular energy transfer to
modified chemistry and material properties. In many of the envisaged
applications open access to the molecules involved is vital, as is independent
control over polariton dispersion, and spatial uniformity. Existing cavity
designs are not able to offer all of these advantages simultaneously. Here we
demonstrate an alternative yet simple cavity design that exhibits all of the
desired features. We hope the approach we offer here will provide a new
technology platform to both study and exploit molecular strong coupling.
Although our experimental demonstration is based on excitonic strong coupling,
we also indicate how the approach might be achieved for vibrational strong
coupling. | Kishan. S. Menghrajani, Benjamin. J. Bower, Graham. J. Leggett, William. L. Barnes | 2023-09-29T09:24:28Z | http://arxiv.org/abs/2309.17081v1 | # Beyond the Cavity:
###### Abstract
The coherent strong coupling of molecules with confined light fields to create polaritons - part matter, part light - is opening exciting opportunities ranging from extended exciton transport and inter-molecular energy transfer to modified chemistry and material properties. In many of the envisaged applications open access to the molecules involved is vital, as is independent control over polariton dispersion, and spatial uniformity. Existing cavity designs are not able to offer all of these advantages simultaneously. Here we demonstrate an alternative yet simple cavity design that exhibits all of the desired features. We hope the approach we offer here will provide a new technology platform to both study and exploit molecular strong coupling. Although our experimental demonstration is based on excitonic strong coupling, we also indicate how the approach might be achieved for vibrational strong coupling.
## I Introduction
Molecular strong coupling has emerged as a new paradigm in nanophotonics that bridges physics, chemistry and materials science. When a large number of molecular vibronic or excitonic resonators are located within the confined light field associated with a cavity mode, the molecules and cavity mode may interact. For low values of this interaction (low coupling strength) the emission and absorption of light may be modified, but the underlying molecular resonances remain unchanged; this is the weak coupling regime, important in areas such as single photon generation. For higher values of the coupling strength the interaction leads to the formation of hybridised (polariton) states, states that are part light and part matter. Strong coupling occurs when the coupling rate between the molecular resonances and the cavity mode exceeds the dissipation rates. There is intense activity in the field of strong coupling, particularly owing to the prospect of using vibrational strong coupling to modify a range of chemical reactions [1; 2]. Whilst most research is currently focused, quite rightly, on trying to understand the complex coherent molecular processes involved, the role played by the cavity mode is vital but much less explored. In this report we introduce a powerful new platform with which to pursue molecular strong coupling that we hope will find applicability in many areas.
For polariton chemistry investigations, and for associated applications, we can identify a number of key attributes that any experimental cavity design should embody, they are: First, open access: we would like the strongly coupled molecules to be 'open access', i.e. we want to be able to easily introduce and carry away reactants and products. Second, dispersion: we need appropriate control over the dispersion of the polaritons. It has become clear that not all 'cavity' modes offer dispersion properties suitable for polariton chemistry. In particular it seems to be increasingly well-established that a high density of polariton states around zero momentum (the \(k=0\) condition) are needed [3]. Third, spatial homogeneity: we would prefer to have a system where all molecules involved are coupled to the same extent with the cavity mode, i.e. we seek spatial uniformity. Fourth, independent tuning: we would like to be able - for example - to modify the concentration of the molecular material without at the same time changing the extent of any de-tuning, i.e. the mismatch between the molecular resonance energy and the cavity mode energy. As far as we are aware, no existing cavity design offers all of these attributes at the same time.
Most studies of molecular strong coupling have focused on planar (Fabry-Perot) optical microcavities where molecules are positioned between two closely spaced metal or dielectric mirrors. However, these structures offer only limited access to the molecules involved, making their use in cavity-modified chemistry possible, but difficult [4]. To overcome this limitation, alternative 'open' geometries have been investigated, including surface plasmon modes [5; 6], dielectric microspheres [7], and surface lattice resonances [8; 9]. These geometries are very interesting, but the extent of the light-matter coupling now varies spatially across the structure, again making their use in cavity-modified chemistry challenging. Recently, another class of 'cavity-free' geometry has been explored. These structures show extensive mode splitting without employing metallic or dielectric multi-layer mirrors (DBR); instead these geometries rely on reflection from the interface between the molecular material and another dielectric, such as air, to generate optical modes [10; 11; 12]. Although changes in molecular absorption have been observed in some of these structures, their effectiveness in controlling chemistry is not yet established. Even if this type of open cavity can be convincingly put into the strong coupling regime, there is one problem they do not seem likely to help us overcome, that of achieving appropriate modal dispersion. Similarly, surface plasmons on planar metal films [6], a system that otherwise looks very attractive, does not afford the appropriate dispersion. None of the cavity designs discussed thus far provide all of the desirable features we seek, see table 1.
The strong coupling geometry we discuss here exploits the appropriate dispersion control _and_ the spatial uniformity offered by planar Fabry-Perot cavities, yet overcomes the lack of open access to the coupled molecules by placing the molecules outside rather than inside the optical cavity. In doing so we couple the molecules to the cavity mode via the evanescent (near) field of the cavity modes. Evanescent wave coupling is a well established approach in such areas as integrated optic couplers [13], lasers [14] and amplifiers [15].
Here we demonstrate this new approach using two different molecular systems. In the first we employ the well-known J-aggregated dye TDBC [16]. This dye has been used for a large number of strong coupling experiments owing to the narrow bandwidth and high oscillator strength of the aggregates [17; 18]. We used TDBC to compare the 'standard' arrangement where the TDBC molecules are located inside the cavity (figure 2, left schematic) with our 'new' arrangement where the TDBC molecules are outside the cavity, i.e. located in the evanescent tail of the cavity mode (figure 2, middle schematic). We also employ a different dye, Rhodamine-B (RhB), for our outside the cavity arrangement (figure 2, right schematic); RhB has a much broader emission spectrum than the narrow spectra offered by aggregated dyes such as TDBC. We make use of a combination of photoluminescence dispersion data, reflectance dispersion data and results from both numerical modelling and a simple coupled oscillator model. We especially focus on photoluminescence as this has proven to be a stronger measure of strong coupling compared to, for example, reflectance and transmittance [19]. Our cavities were based on two gold (40 nm) mirrors separated by a PMMA spacer layer.
## II Results and discussion
Schematics of the different structures we investigated are shown in the upper row of figure 2; the corresponding dispersion diagrams based on measured photoluminescence data are shown in the lower row. Reflectance data are shown in the SI.
**TDBC inside the cavity.** In the left-hand column, upper panel, we show the familiar situation for strong coupling using a planar Fabry-Perot cavity, the dye-doped layer lies inside the cavity. The dye, here TDBC, was placed between two layers of PMMA using a layer-by-layer approach to deposit 4 monolayers (8 nm thick in total), see Vasista _et al._ for details [20]. In the lower panel we show a dispersion plot based on the photoluminescence collected from the sample, see supplementary information. As expected, the PL shows emission mediated by the lower polariton, the PL peak tracking the dispersion of the lower polariton.
In addition we collected reflectance data, again see the supplementary information. We matched the results from a simple coupled oscillator model (see SI) to the reflectance and PL data, the positions for the polaritons obtained by simultaneously trying to match the model to the PL and reflectance data are indicated as dashed white lines on the PL dispersion. We can see that the PL nicely tracks the lower polariton, as expected, even though there are only 4 monolayers of TDBC within the cavity [20].
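For reference, the coupled oscillator model referred to above amounts to diagonalizing a \(2\times 2\) Hamiltonian mixing the cavity photon and the exciton. Below is a minimal sketch; the parabolic cavity dispersion, the detuning, and the curvature value are illustrative assumptions, not the parameters behind the fits shown in the figures.

```python
import numpy as np

# Illustrative parameters (assumptions, not the fitted values used in this work)
E_x = 2.11       # TDBC exciton energy (eV)
E_c0 = 2.05      # cavity mode energy at k = 0 (eV), i.e. an assumed negative detuning
omega_r = 0.15   # Rabi splitting (eV), as quoted for TDBC inside the cavity
alpha = 0.9      # curvature of a toy parabolic cavity dispersion (eV um^2)

k = np.linspace(-3, 3, 301)   # in-plane wavevector (um^-1)
E_c = E_c0 + alpha * k**2     # toy cavity dispersion

# Eigenvalues of the 2x2 coupled-oscillator Hamiltonian [[E_c(k), g], [g, E_x]], g = omega_r/2
mean = 0.5 * (E_c + E_x)
delta = 0.5 * (E_c - E_x)
E_upper = mean + np.sqrt(delta**2 + (omega_r / 2)**2)
E_lower = mean - np.sqrt(delta**2 + (omega_r / 2)**2)

i0 = np.argmin(np.abs(delta))  # grid point closest to zero detuning
print(f"splitting at resonance: {E_upper[i0] - E_lower[i0]:.3f} eV")  # ~ omega_r
```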
At this stage it is worth looking at the numbers involved. A standard strong coupling criterion is [21; 22] that the Rabi splitting (\(\Omega_{R}\)) should exceed the average of the cavity (K) and molecular transition (\(\Gamma\)) linewidths, i.e. that,
\[\Omega_{R}>(K+\Gamma)/2. \tag{1}\]
For TDBC we find the molecular resonance linewidth is \(\Gamma=0.08\ \pm 0.01\) eV, whilst the bare cavity mode decay rate (see SI) is found to be \(K=0.11\ \pm 0.01\) eV. From our match of a coupled oscillator model to our PL and reflectance data we found \(\Omega_{R}=0.15\ \pm 0.03\) eV, so that in this case \(\Omega_{R}/((K+\Gamma)/2)=1.5\ \pm 0.3\). Consequently, for this TDBC inside the cavity sample we meet the strong coupling condition.
**TDBC outside the cavity.** For the centre column the TDBC dye layer is no longer inside the cavity; rather, it is now just outside the cavity, and the TDBC molecules are thus openly accessible, see upper panel. In the lower panel we again show the PL dispersion data. Although the PL no longer tracks the lower polariton, there is
Figure 1: **Table 1** Summary of different cavity types commonly employed in strong coupling experiments, and attributes valuable for polaritonic chemistry.
clear evidence of significant coupling, the PL spectrum being very significantly modified when compared to PL from a thin TDBC film, see SI. This is a remarkable finding: there is no cavity between the dye and the detector, so the modified PL cannot be the result of a simple cavity filtering effect. From our match of a coupled oscillator model to both our PL and reflectance data for this system we find that the Rabi splitting is \(\Omega_{R}\)=0.10 \(\pm 0.02\) eV, and that in this case, the cavity linewidth is \(K=0.18\,\pm 0.02\) eV. For our TDBC outside the cavity case we thus have \(\Omega_{R}/((K+\Gamma)/2)\) = 0.7 \(\pm\) 0.3. Despite being - at best - just at the boundary of the strong coupling condition, the PL is very significantly modified.
**RhB outside the cavity.** We next investigated a non-aggregated dye system: we switched from TDBC to a custom thiolated variant of a standard laser dye, RhB. (Briefly, this involved modifying the RhB molecule so as to include an alkynethiol, the original idea being to make a version of RhB capable of self-assembly.) For this RhB 'outside the cavity' sample the RhB layer was deposited on top of the cavity by drop casting, see details in SI. The PL dispersion data are shown in the right-hand column of figure 2. The situation for this dye - as for many dyes - is somewhat complicated by the multi-oscillator nature of the excitonic transition. Nonetheless, the data show that the PL tracks the lower polariton, even though the dye is located outside the cavity.
For this system we find the bare cavity mode linewidth to be \(K=0.18\,\pm 0.02\) eV whilst the RhB linewidth we determined to be \(\Gamma=0.21\,\pm 0.02\) eV. From our match of a coupled oscillator model to our PL and reflectance data for this system we find that the Rabi splitting is 0.20 \(\pm\) 0.04 eV, so that we find \(\Omega_{\text{R}}/((K+\Gamma)/2)=1.0\)\(\pm\) 0.4. Although the error bar is substantial, we are roughly at the strong coupling condition, consistent with what we see from our PL data (shown in the right-hand column of figure 2), i.e. that the PL clearly tracks the lower polariton branch. Again, as for the TDBC outside the cavity, PL from the RhB film does not pass through the cavity on its way to the detector, there is no filtering effect.
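The figures of merit quoted above follow directly from the measured linewidths; as a quick arithmetic check (central values only, uncertainties not propagated):

```python
# Strong-coupling figure of merit Omega_R / ((K + Gamma)/2), values in eV as quoted above
samples = {
    "TDBC inside":  (0.15, 0.11, 0.08),   # (Omega_R, K, Gamma)
    "TDBC outside": (0.10, 0.18, 0.08),
    "RhB outside":  (0.20, 0.18, 0.21),
}
for name, (omega_r, K, gamma) in samples.items():
    print(f"{name:13s}: {omega_r / ((K + gamma) / 2):.2f}")
# -> 1.58, 0.77, 1.03, consistent within rounding with the quoted
#    1.5 +/- 0.3, 0.7 +/- 0.3 and 1.0 +/- 0.4
```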
In the supplementary information we show, for each cavity structure, the calculated power dissipated in the dye layer when illuminated from above. Again, these data are presented in the form of dispersion plots. For the TDBC inside the cavity the dispersion takes the form of the lower polariton. For the TDBC outside the cavity and the RhB outside the cavity, there is little if any evidence that power dissipated is influenced by the lower polariton. This is perhaps not unexpected since in this case the dye layers are readily dominated by bare exciton absorption. Nonetheless, it would be interesting to extend this aspect of our investigation.
Elsewhere we have indicated that - based on photoluminescence measurements - 'simply' meeting the strong coupling condition (1) may not be enough [23], and suggested that a second criterion might need to be applied,
Figure 2: **Sample Schematics and PL Dispersion. Upper row:** The three different cavities explored are shown as schematics. The left-hand two were used with TDBC films, the right-hand one with Rhodamine-B. **Lower row:** Photoluminescence data from the three samples shown as dispersion maps, i.e. the PL is shown on a colour scale as a function of frequency and in-plane wavevector, the plane corresponding to that of the cavities. Also shown as white dashed lines in each dispersion plot are the hybrid modes as determined by matching a coupled oscillator model to the PL and reflectance data.
that the cavity finesse needs to be \(\mathcal{F}>5\). As indicated in the supplementary information, all three cavities we studied here had \(\mathcal{F}>10\), thus meeting this extra criterion.
## IV Summary
In summary, our study successfully demonstrated the attainment of strong coupling by positioning molecules outside a Fabry-Perot cavity. This approach offers all of the attributes discussed in table 1, making it an interesting and valuable platform for polaritonic chemistry.
There is of course an obvious downside to our approach: it places the molecules in a low field region of the cavity mode, thus reducing the attainable coupling strength. We have however shown that despite this problem, strong coupling can still be achieved. Nonetheless, we have done this by using materials with a combination of high concentration and high oscillator strength. It would be useful if there were a way to boost the strength of the coupling, e.g. for vibrational coupling where the molecular resonances involved are in the infrared, and are the resonances that are so important to polaritonic chemistry. This might additionally have the benefit of enabling a single monolayer of molecules to exhibit strong coupling (for an excitonic resonance). To achieve this we suggest that two molecular systems could be used, possibly even employing the same molecules. One set of molecules is placed _inside_ the cavity. These molecules are not involved in any external chemistry, and could be dispersed in a solid matrix. The role of these 'internal' molecules is simply to boost the coupling strength. The second set of molecules is then deposited on the outside of the cavity as we have indicated. One could also consider using an hierarchical approach [24].
The ability to access and control strong coupling dynamics outside a conventional cavity structure broadens the scope of possibilities for polaritonic chemistry. As we have indicated above, this may be possible down to the single monolayer regime, making it attractive for studies of catalysis. We hope our findings will lead to a promising avenue for future research and to new applications of strong coupling in various fields.
## Acknowledgements
The authors are grateful for and acknowledge the financial support of the Leverhulme Trust, associated with the research grant "Synthetic Biological Control of Quantum Optics". K.S.M. also acknowledges the support of Royal Society International Exchange grant (119893R). We thank Adarsh Vasista, Wai Jue Tan, Philip Thomas, Marie Rider, Felipe Herrera, and William Wardley for many fruitful discussions. The authors also acknowledge the support of European Research Council through the Photmat project (ERC-2016-AdG-742222 :www.photmat.eu).
|
2309.05485 | Reply to "Comment on `Validity of path thermodynamic description of
reactive systems: Microscopic simulations' | The Comment's author argues that a correct description of reactive systems
should incorporate the explicit interaction with reservoirs, leading to a
unified system-reservoirs entity. However, this proposition has two major
flaws. Firstly, as we will emphasize, this entity inherently follows a
thermodynamic equilibrium distribution. In the Comment, no indication is
provided on how to maintain such a system-reservoirs entity in a
non-equilibrium state. Secondly, contrary to the author's claim, the inclusion
of system-reservoir interaction in traditional stochastic modeling of reactive
systems does not automatically alter the limited applicability of path
thermodynamics to problematic reactive systems. We will provide a simple
demonstration to illustrate that certain elementary reactions may not involve
any changes in reservoir components, which seems to have been overlooked by the
author. | F. Baras, A. L. Garcia, M. Malek Mansour | 2023-09-11T14:25:12Z | http://arxiv.org/abs/2309.05485v1 | ###### Abstract
The Comment's author argues that a correct description of reactive systems should incorporate the explicit interaction with reservoirs, leading to a unified system-reservoirs entity. However, this proposition has two major flaws. Firstly, as we will emphasize, this entity inherently follows a thermodynamic equilibrium distribution. In the Comment, no indication is provided on how to maintain such a system-reservoirs entity in a non-equilibrium state. Secondly, contrary to the author's claim, the inclusion of system-reservoir interaction in traditional stochastic modeling of reactive systems does not automatically alter the limited applicability of path thermodynamics to problematic reactive systems. We will provide a simple demonstration to illustrate that certain elementary reactions may not involve any changes in reservoir components, which seems to have been overlooked by the author.
**Reply to "Comment on 'Validity of path thermodynamic**
**description of reactive systems: Microscopic simulations' "**
F. Baras\({}^{\,a}\), A. L. Garcia\({}^{\,b}\), and M. Malek Mansour\({}^{\,c}\)
(a) Laboratoire Interdisciplinaire Carnot de Bourgogne,
UMR 6303 CNRS-Universite Bourgogne Franche-Comte,
9 Avenue A. Savary, BP 47 870,
F-21078 Dijon Cedex, France
(b) Dept. Physics and Astronomy,
San Jose State University,
San Jose, California, 95192 USA
(c) Universite Libre de Bruxelles CP 231, Campus Plaine,
B-1050 Brussels, Belgium
## 1 Introduction
The argument presented in the Comment article is based on two separate assertions. Firstly, the article states: _"Let us further remark that several Markov jump processes may be considered for a given reaction network. This key point is well known"_. To the best of our knowledge, this statement is likely known only by the author himself, as he introduced it recently in his previous Comment article [1]. Furthermore, it contradicts a fundamental principle of probability theory, that is: "the probability associated with a random event is unique" (see for example [3] or [4]). We rigorously proved this result in the Introduction of [2]. Recall that the proof relies on the choice of \(\mathbb{Z}^{n}\) as the state space for a homogeneous, isothermal reactive system with \(n\) components (\(\mathbb{Z}\) represents the set of non-negative integers). This choice aligns precisely with that of all authors dealing with the stochastic modeling of reactive systems because of its unique correspondence with experimentally measurable quantities [3, 4].
Recently, we demonstrated that the validity of path thermodynamics is limited to reactive systems that involve only one elementary reaction leading to each type of observed composition change [5, 6]. This proof relies on the traditional stochastic modeling of reactive systems established over half a century ago [3, 4]. In order to restore the validity of path thermodynamics in problematic reactive systems, the Comment's author recommended the use of an "expanded state space" by incorporating a set of new variables [1]. These variables were intended to differentiate the elementary reactions that lead to the same change in composition.
However, as highlighted in our work [2], these newly introduced variables do not correspond to any observable quantities in real-life systems. This observation served as the primary motivation behind our decision to perform microscopic simulations of reactive systems. The results of these simulations unequivocally contradict the author's assertion, thereby validating the theoretical predictions based on the traditional modeling of reactive systems [2]. Now it appears that the author has revised his opinion, as there is no mention of the concept of "expanded state space" in the present Comment. Instead, he presents a different approach that we will now address.
In his Comment the author acknowledges the validity of our microscopic simulation results but argues that they fail to account for potential variations in other chemical components that act as control parameters (reservoir quantities). In other words, he contends that the investigation of the statistical properties of reactive systems must explicitly incorporate the interaction between the system and its reservoirs. In order to illustrate his arguments, the Comment's author considered the same reactive system that we used in our microscopic simulation, that is:
\[A\ +\ X\ \ \stackrel{{ k_{1}}}{{\underset{k_{-1}}{\rightleftharpoons}}}\ 2\,X\qquad B\ +\ C\ \ \stackrel{{ k_{2}}}{{ \underset{k_{-2}}{\rightleftharpoons}}}\ \ B\,+\,X \tag{1}\]
where we utilized a well-established procedure to maintain a physico-chemical system out of equilibrium. This procedure involves the system interacting with external reservoirs assumed to be infinitely large, thereby ensuring that their state remains rigorously constant over time. The author claims in his Comment that, instead of solely considering the variable \(X(t)\), while keeping \(A,B\), and \(C\) constant, we should have analyzed the joint statistical trajectories of \(\{X(t),A(t),B(t),C(t)\}\), which take into account the simultaneous variations of all variables over time (c.f. the last sentence of the fourth paragraph of the Comment).
However, as highlighted in the appendix of our paper [2], the total number of particles in the system-reservoirs entity remains constant, indicating that the state of such an entity is not affected by any external constraint. Specifically, we wrote,
Finally, note that for both reaction models (2) and (4) the number of \(A\), \(B\), and \(C\) particles and the sum of \(X\) and solvent particles \(X(t)+S(t)\) remain constant. As such, knowledge of \(X(t)\) determines entirely the state of the system at each instant of time.
Consequently, it can be easily demonstrated that the resulting stationary probability distribution follows a multinomial distribution, which corresponds to a thermodynamic equilibrium distribution. No indication is provided in the Comment on how to proceed to maintain the system-reservoirs entity in a non-equilibrium state. Not addressing this fundamental issue undermines the arguments criticizing our work.
But there exists a more fundamental objection to the Comment author's proposition of a new type of modeling for reactive systems. Contrary to his claim, the incorporation of system-reservoir interaction in traditional stochastic modeling of reactive systems does not necessarily alter the limited applicability of path thermodynamics to reactive systems with only one elementary reaction leading to observable compositional changes [2, 5, 6]. In fact, certain elementary reactions may simply not involve any changes in reservoir components, a possibility that the author seems to have overlooked. Consider for example the following set of elementary reactions:
\[S\ +\ X\ \ \stackrel{{ k_{1}}}{{\underset{k_{-1}}{\rightleftharpoons}}}\ \ S\,+\,Y\qquad Y\ +\ X\ \ \stackrel{{ k_{2}}}{{ \underset{k_{-2}}{\rightleftharpoons}}}\ 2\,Y \tag{2}\]
both leading either to the change of composition \(X,Y\to X-1,Y+1\) (forward) or \(X,Y\to X+1,Y-1\) (backward). Regardless of how we treat the reservoirs, the state trajectory of a reactive system involving the set of reactions (2) does not incorporate any information that allows us to differentiate them from each other. However, we know from the basic principles of irreversible thermodynamics that the entropy production of a reactive system is the sum of the entropy production associated with each individual reaction [8]. Consequently, properties of such a reactive system as given by path thermodynamics will inevitably contradict the actual thermodynamic properties of the system.
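To make this concrete, here is a minimal Gillespie-type sketch of scheme (2) under the traditional stochastic modeling. Both forward channels produce the identical composition change \((X,Y)\to(X-1,Y+1)\) and both backward channels the reverse, so the recorded state trajectory carries no information about which elementary reaction fired. The rate constants, the count of the catalyst species \(S\) (which is unchanged by both reactions), and the initial populations are illustrative assumptions; waiting times are omitted since only the jump chain matters for the point at hand.

```python
import numpy as np

rng = np.random.default_rng(0)
k1, km1, k2, km2 = 1.0, 0.5, 0.8, 0.4   # illustrative rate constants
S = 100                                  # catalyst species, unchanged by both reactions
X, Y = 200, 100                          # illustrative initial populations

for _ in range(10_000):
    # Mass-action propensities of the four elementary reactions in scheme (2)
    a = np.array([k1*S*X, km1*S*Y, k2*Y*X, km2*Y*(Y - 1)])
    a0 = a.sum()
    if a0 == 0:
        break
    channel = rng.choice(4, p=a/a0)
    # Channels 0 and 2 both yield (X,Y) -> (X-1,Y+1); channels 1 and 3 the reverse.
    # The trajectory {(X(t),Y(t))} therefore cannot distinguish 0 from 2 (nor 1 from 3).
    if channel in (0, 2):
        X, Y = X - 1, Y + 1
    else:
        X, Y = X + 1, Y - 1
print(X, Y)   # note X + Y is conserved
```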
In conclusion, we would like to make one final remark. It is interesting to note that the author previously employed the same methodology on multiple occasions, which he now rejects in his Comment. This includes his seminal 2004 paper, where he developed the path thermodynamic theory of reactive systems [7]. Interestingly, in that paper, the author specifically considered the Schlogl model (Section IV in [7]) as an illustrative example of the theory. It is worth mentioning that the Schlogl model is the same type of model we used for microscopic simulation in our article, for which the author now questions the validity. In the Comment the author appears to contradict statements made in his previous work [7]. In a way, this newest Comment underscores the strength of our earlier works.
## Acknowledgments
One author (AG) acknowledges support by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics Program under contract No. DE-AC02-05CH11231.
|
2307.00079 | Dataset balancing can hurt model performance | Machine learning from training data with a skewed distribution of examples
per class can lead to models that favor performance on common classes at the
expense of performance on rare ones. AudioSet has a very wide range of priors
over its 527 sound event classes. Classification performance on AudioSet is
usually evaluated by a simple average over per-class metrics, meaning that
performance on rare classes is equal in importance to the performance on common
ones. Several recent papers have used dataset balancing techniques to improve
performance on AudioSet. We find, however, that while balancing improves
performance on the public AudioSet evaluation data it simultaneously hurts
performance on an unpublished evaluation set collected under the same
conditions. By varying the degree of balancing, we show that its benefits are
fragile and depend on the evaluation set. We also do not find evidence
indicating that balancing improves rare class performance relative to common
classes. We therefore caution against blind application of balancing, as well
as against paying too much attention to small improvements on a public
evaluation set. | R. Channing Moore, Daniel P. W. Ellis, Eduardo Fonseca, Shawn Hershey, Aren Jansen, Manoj Plakal | 2023-06-30T18:33:27Z | http://arxiv.org/abs/2307.00079v1 | # Dataset Balancing Can Hurt Model Performance
###### Abstract
Machine learning from training data with a skewed distribution of examples per class can lead to models that favor performance on common classes at the expense of performance on rare ones. AudioSet has a very wide range of priors over its 527 sound event classes. Classification performance on AudioSet is usually evaluated by a simple average over per-class metrics, meaning that performance on rare classes is equal in importance to the performance on common ones. Several recent papers have used dataset balancing techniques to improve performance on AudioSet. We find, however, that while balancing improves performance on the public AudioSet evaluation data it simultaneously hurts performance on an unpublished evaluation set collected under the same conditions. By varying the degree of balancing, we show that its benefits are fragile and depend on the evaluation set. We also do not find evidence indicating that balancing improves rare class performance relative to common classes. We therefore caution against blind application of balancing, as well as against paying too much attention to small improvements on a public evaluation set.
R. Channing Moore, Daniel P. W. Ellis, Eduardo Fonseca, Shawn Hershey, Aren Jansen, Manoj Plakal Google Research, Mountain View, CA, and New York, NY, USA
{_channingmoore, dpwe, efonseca, shershey, arenjansen, plakal_}@google.com
## 1 Introduction
Recent advances in transformer models have improved the state of the art in natural-language processing and in image and audio classification machine learning (ML) tasks [1, 2, 3]. These models have large capacity and can make use of large quantities of labeled training data. The largest set of publicly-available labels for audio event classification, AudioSet [4], is small compared to some of the datasets commonly used to train image models like ImageNet-2k [5] or JFT [6]. This makes pretraining and dataset manipulations critical.
The AudioSet task involves recognizing 527 sound event classes within \(\sim\)10-second audio clips. These classes display large variations in their intrinsic difficulty as well as in their prevalence in the 2M clips of the training set: The most common, _music_, occurs roughly 15,000 times more frequently than the rarest (_toothbrush_).
AudioSet performance is usually reported as an unweighted average of per-class metrics (such as average precision) across all classes. To improve the performance on the rarest classes, several authors have implemented _balanced sampling_ in training, where samples of the less-common classes are repeated to improve their representation during training. This has shown improvements in overall performance when combined with data augmentation techniques [7, 8, 9, 3].
In this work, we compare the impact of this approach on the public AudioSet evaluation set to a separate, internal evaluation set collected at the same time. This second set has a different, more skewed class distribution. Unexpectedly, we find that balancing the dataset actually worsens performance on this second evaluation set.
We present this as evidence that class balancing, expected to improve allocation of model capacity and learned prior, may not be significantly useful in the AudioSet domain--and that previously-reported benefits may reflect immaterial peculiarities of the widely-used evaluation set.
### Class imbalance in AudioSet partitions
The creators of AudioSet1 produced balanced training and evaluation partitions by picking 60 examples for the rarest class, then adding examples of the next-rarest class until there were 60 examples of that class, and so forth. These balanced sets still have significant imbalance because the examples have multiple labels and common classes co-occur with uncommon ones. For example, of the 60 _toothbrush_ training examples, 41 also have the label _speech_, 10 have the label _music_, and 8 have both. In this way, common classes remain over-represented even after these attempts to balance the dataset.
Footnote 1: Authors Moore, Hershey, Plakal, Jansen, and Ellis were among the creators of AudioSet.
### Class imbalance and performance
Class imbalance could in principle skew training of machine-learning models [10] (although see the discussion of Figure 7 in [9] for other factors). Although class prior is not directly correlated with per-class performance, reducing imbalance could help. In addition to the improved performance on audio classification mentioned above, similar techniques are also reported under the name
Figure 1: mAP as a function of oversampling exponent \(\beta\) for Public and Internal evaluation sets. The two traces are plotted with the same vertical scaling, but are vertically offset to align performance without balanced sampling (\(\beta=0\)).
_class-aware sampling_, primarily in the image domain [11, 12]. We are not aware of any work which has used balanced sampling for automatic speech recognition.
## 2 Methods
### Quantifying class imbalance
With \(N\) total examples and \(K=527\) total classes in AudioSet, we define \(j\in\{1..N\}\) as the sample index and \(k\in\{1..K\}\) as the class index. Given the label matrix \(C\in\{0,1\}^{N\times K}\) with entries \(c_{jk}\) we define the prior \(p_{k}\) for each class \(k\) as
\[p_{k}=\frac{1}{N}\sum_{j=1}^{N}c_{jk}\equiv\frac{N_{k}}{N}, \tag{1}\]
where \(N_{k}=\sum_{j=1}^{N}c_{jk}\) is the number of examples for which class \(k\) is marked as present.
We can measure class imbalance using the _imbalance ratio_[13], which we will denote as \(\rho\),
\[\rho=\frac{\max_{k}(p_{k})}{\min_{k}(p_{k})}\equiv\frac{\max_{k}(N_{k})}{\min _{k}(N_{k})}. \tag{2}\]
This measure gives a simple, intuitive sense of how unbalanced the dataset is, but it only considers two samples from the prior distribution, the head and tail classes. To better measure the degree of imbalance across the entire dataset we compute the Gini coefficient [14], a common measure of how far data differ from a uniform distribution. There are more than one label per example--roughly \(2\) on average--so we compute this measure based on the fraction of total labels rather than directly from the prior distribution.
If \(p_{k}\) is the list of per-class priors sorted in ascending order, the Gini coefficient is equal to
\[1-\frac{2\cdot\sum_{k=1}^{K}\sum_{l=1}^{k}p_{l}}{K\cdot\sum_{k=1}^{K}p_{k}}. \tag{3}\]
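For concreteness, both statistics are a few lines of code given a multi-hot label matrix; the sketch below implements (1)-(3) directly, with a random matrix standing in for the real label data.

```python
import numpy as np

def imbalance_stats(C):
    """C: (N, K) binary label matrix. Returns (imbalance ratio, Gini coefficient)."""
    p = C.mean(axis=0)                 # per-class priors, Eq. (1)
    rho = p.max() / p.min()            # imbalance ratio, Eq. (2)
    p_sorted = np.sort(p)              # ascending order
    K = len(p_sorted)
    gini = 1 - 2 * np.cumsum(p_sorted).sum() / (K * p_sorted.sum())  # Eq. (3)
    return rho, gini

# Random stand-in for the real label matrix
C = (np.random.default_rng(0).random((1000, 20)) < 0.1).astype(int)
C[:, 0] = 1   # make one artificially common class
print(imbalance_stats(C))
```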
### AudioSet
#### 2.2.1 Current version of the dataset
The version of AudioSet we used for this paper consists of the 1,728,000 training clips and 16,591 evaluation clips that remain available. The statistics of this version of the dataset are relatively unchanged from the original publication: the most common class is still _music_ (prior 0.482), and the least common is _toothbrush_ (prior \(2.66\times 10^{-5}\)).
#### 2.2.2 Validation dataset
We reserved examples from the AudioSet training dataset as a validation set. We chose these examples by a similar procedure to the evaluation set: we required a minimum of 5 examples of each class. The current version of this set has 1715 clips.
#### 2.2.3 Internal evaluation dataset
We have labels for a further set of 22,573 clips that were not published. We also evaluate on the union of this set with the public evaluation set (a total of 39,164 clips) and refer to this combined set as the _internal evaluation_ set. We found and annotated the labels for the additional internal evaluation clips at the same time as the public AudioSet clips, using the same procedures.
#### 2.2.4 Class prior imbalance
Table 1 summarizes the class imbalance in the various data partitions. The training set we used has substantially the same class imbalance as the published training set when measured by the Gini coefficient. The two evaluation sets are more balanced, and the validation set falls in between the two evaluation sets.
### Model architecture and training
We replicated the AST model architecture of [3] in TensorFlow and pretrained it on a video-level tag prediction task with a fixed ontology of 10k classes (not necessarily audio-related) on a collection of 50 million 10-second audio clips from internet videos. We then refined the full model on AudioSet. We performed refinement using binary cross-entropy loss, Adam optimizer [15], and a batch size of 1024. We did not use the learning-rate warmup, learning-rate decay, weight decay, weight averaging, or model ensembling described in [3].
### Balancing
We balanced the training dataset by repeating examples of low-prior classes, repeating examples \(M_{j}\in\mathbb{Z}^{+}\) times on each pass through the training data. Defining \(m_{j}\in\mathbb{R}^{+}\) as
\[m_{j}=\max_{k:c_{jk}=1}\left(\frac{1}{p_{k}}\right)^{\beta}, \tag{4}\]
we compute the oversampling factor \(M_{j}\) in terms of \(m_{j}\) as
\[M_{j}=\text{round}\left(\frac{m_{j}}{\min_{j}(m_{j})}\right). \tag{5}\]
Our balancing scheme modifies previous work by adding the parameter \(\beta\), which we term the _oversampling exponent_. This allows us to choose oversampling schemes that lie between the raw, unbalanced AudioSet dataset at \(\beta=0\) and the full oversampling used by [3, 9], and [8] at \(\beta=1\). At \(\beta=0\), \(M_{j}\equiv 1\) for all examples. At \(\beta=1\), \(M_{j}\) ranges from 1 for examples marked with only the two most common classes _speech_ and _music_ to 18,102 for examples marked with _toothbrush_. Table 2 shows the dataset statistics from this partial balancing.
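Equations (4)-(5) likewise translate into a short sketch, following the same conventions as the snippet above (the label matrix is again a random stand-in):

```python
import numpy as np

def oversampling_factors(C, beta):
    """Per-example repeat counts M_j for oversampling exponent beta, Eqs. (4)-(5)."""
    p = C.mean(axis=0)                          # per-class priors
    weights = (1.0 / p) ** beta                 # (1/p_k)^beta
    # m_j: largest weight over the classes present in example j, Eq. (4)
    m = np.where(C == 1, weights, 0.0).max(axis=1)
    return np.round(m / m.min()).astype(int)    # Eq. (5)

C = (np.random.default_rng(1).random((1000, 20)) < 0.1).astype(int)
C = C[C.sum(axis=1) > 0]             # keep only labeled examples
M = oversampling_factors(C, beta=0.5)
print(M.min(), M.max(), M.mean())    # beta = 0 would give all ones
```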
We did not implement the per-batch balancing scheme of [8]. Our training batch size of 1024 is large enough that the oversampling procedure ensures that most classes are represented in any given batch. The rarest class in the fully-balanced dataset has a frequency of 1 in 435 examples, showing up on average more than twice in each batch.
\begin{table}
\begin{tabular}{c|c|c} & Imbalance ratio & Gini coefficient \\ \hline Published train & 15,009 & 0.83 \\ Current train & 18,102 & 0.83 \\ Validation & 275 & 0.45 \\ Public evaluation & 181 & 0.39 \\ Internal evaluation & 367 & 0.61 \\ \end{tabular}
\end{table}
Table 1: Prior imbalance statistics for data partitions. We computed statistics for _published train_ from the original list of labels published with [4]. All other statistics are computed from the dataset version used in this paper.
### Augmentation
We used SpecAugment [16] with the same parameters as [3], and applied MixUp [17] in the energy domain with \(\alpha\)=10 for all of our experiments. We combined examples within batches rather than drawing a separate batch of augmentation examples. Our work differs from [3] in that we only used a MixUp rate of 1, meaning that all training examples are subject to MixUp.
## 3 Evaluation
### Metrics
The AST model produces one score per clip for each of the 527 classifier outputs. We computed mean average precision (mAP) and the area under the ROC curve (AUC) for each classifier and averaged them using equal weight for each class (macro-averaging). We then computed \(d^{\prime}\) from the macro average AUC as described in [18]. Using \(d^{\prime}\) allows us to better compare between datasets with different prior distributions, since mAP is confounded with the evaluation set class prior. We perform macro-averaging before converting to \(d^{\prime}\) since, as probabilities, AUCs may be meaningfully averaged. Averaging after converting to \(d^{\prime}\) gives different results.
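As a sketch of our reading of this pipeline: per-class AP and AUC are macro-averaged, and the mean AUC is then converted via \(d^{\prime}=\sqrt{2}\,\Phi^{-1}(\mathrm{AUC})\), with \(\Phi^{-1}\) the inverse standard normal CDF, which we believe matches the conversion in [18]. Random labels and scores stand in for model outputs below.

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import average_precision_score, roc_auc_score

def macro_metrics(y_true, y_score):
    """y_true: (N, K) binary labels; y_score: (N, K) classifier scores."""
    K = y_true.shape[1]
    aps  = [average_precision_score(y_true[:, k], y_score[:, k]) for k in range(K)]
    aucs = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(K)]
    mAP, mAUC = np.mean(aps), np.mean(aucs)   # macro averages: equal class weight
    d_prime = np.sqrt(2) * norm.ppf(mAUC)     # convert *after* averaging the AUCs
    return mAP, mAUC, d_prime

rng = np.random.default_rng(0)
y_true = (rng.random((500, 5)) < 0.2).astype(int)
y_score = y_true + rng.normal(0, 1.0, size=y_true.shape)  # noisy stand-in scores
print(macro_metrics(y_true, y_score))
```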
### Hyperparameter and checkpoint selection
We computed evaluation metrics on our held-out validation set every 20 minutes, roughly every 3000 steps. We smoothed the raw mAP and \(d^{\prime}\) traces at each learning rate with a 7-point moving average and picked the checkpoint centered on the highest value for each metric. We chose the learning rate and best checkpoint with highest \(d^{\prime}\) on the validation set. For both evaluation sets, we report all metrics using that checkpoint.
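The smoothing-and-selection step is a simple moving-average argmax; a sketch (the metric trace below is synthetic):

```python
import numpy as np

def best_checkpoint(metric_trace, window=7):
    """Index of the checkpoint at the center of the highest-mean window."""
    smoothed = np.convolve(metric_trace, np.ones(window) / window, mode='valid')
    return int(np.argmax(smoothed)) + window // 2

steps = np.arange(100)
rng = np.random.default_rng(0)
trace = 0.45 * (1 - np.exp(-steps / 20)) + rng.normal(0, 0.005, size=100)
print(best_checkpoint(trace))
```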
### Evaluation of external model
We evaluated the AST model2 published in [3] by running the PyTorch code from the GitHub repository for that paper using the current version of the AudioSet dataset. We used the mAP and AUC computed there, and computed \(d^{\prime}\) from the macro-averaged AUC as described above.
Footnote 2: We downloaded the best single model without weight-averaging from github.com/YuanGongND/asst/tree/master/pretrained_models, labeled _Full AudioSet, 10stride, 10 fstride, without Weight Averaging, Model 1 (0.450 mAP)_
## 4 Results
### Full balancing hurts performance on internal eval
Tables 3 and 4 show the effects of balancing on \(d^{\prime}\) and mAP, respectively. Full balancing resulted in higher \(d^{\prime}\) and mAP than no balancing on the public evaluation set. In contrast to this, full balancing _reduced_ both metrics on the internal evaluation set. Partial balancing performed better than full balancing in both cases.
The public and internal evaluation sets differ in multiple ways. They clearly have different class prior distributions (Gini coefficients 0.39 and 0.61, respectively), but the added samples could substantially alter the difficulty since they were chosen from candidates for each label that were rated as "not present". The similarity of the observed, prior-invariant \(d^{\prime}\) values in Table 3 suggests, however, that the difficulty of the two evaluations is comparable.
### Optimum balancing depends on evaluation set and metric
We investigated the best balancing scheme by computing evaluation metrics at several values of \(\beta\). Figure 1 shows the observed dropoff of internal evaluation mAP from \(\beta=0\) to \(\beta=1\), and the corresponding increase in public evaluation mAP. The best mAP was at \(\beta=0.5\) for the public evaluation set and \(\beta=0.3\) for the internal.
Figure 2 shows a slightly different picture: \(d^{\prime}\) was lower on the internal set with full balancing, and higher on the external set. The optimal balancing on both evaluation sets was at \(\beta=0.3\).
The validation set has a different class balance from the evaluation set and might bias our results. In practice, however, we found that the optimal validation checkpoint and learning rate were close to the oracle best values for the evaluation set.
### Balancing effects show little relation to class prior
Balancing could allow the model to focus more on rare classes by presenting them more often. If this were true, we would expect to see a greater benefit from balancing for rare classes. When we compared the per-class evaluation metrics between the models trained on the unbalanced (\(\beta=0\)) and fully balanced (\(\beta=1\)) training sets, the changes in per-class metrics were not correlated with the
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Oversampling exponent \(\beta\) & Imbalance ratio & Gini coefficient \\ \hline
0.0 & 18102 & 0.83 \\
0.1 & 8282 & 0.79 \\
0.2 & 4916 & 0.75 \\
0.3 & 2746 & 0.71 \\
0.5 & 1032 & 0.63 \\
0.7 & 443 & 0.56 \\
1.0 & 143 & 0.47 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Prior imbalance statistics for oversampled AudioSet training set.
Figure 2: \(d^{\prime}\) as a function of oversampling.
prior of the class in the training set, \(p>0.1\) for the regression of \(\Delta\)AP\({}_{k}\) vs. \(\log_{10}(p_{k})\) (see Figure 3).
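The reported check corresponds to an ordinary least-squares regression of the per-class AP change on log-prior; a sketch of that test follows, where the arrays are placeholders for the measured per-class values.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder per-class quantities; substitute the measured values.
rng = np.random.default_rng(0)
prior = 10 ** rng.uniform(-4.5, -0.3, size=527)   # class priors p_k
delta_ap = rng.normal(0.0, 0.02, size=527)        # AP(beta=1) - AP(beta=0) per class

fit = linregress(np.log10(prior), delta_ap)
print(f"slope={fit.slope:.4f}, p-value={fit.pvalue:.3f}")  # p > 0.1 -> no evidence of a trend
```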
## 5 Discussion
The performance of state-of-the-art training techniques and transformer models depends heavily on both the metrics and the dataset used for evaluation. Our results show that dataset balancing does not always improve model performance, and that there is likely an interaction between the balancing scheme and the class prior distribution in the evaluation set. As shown in Table 3, the model from [3] had lower \(d^{\prime}\) on the internal evaluation set than the public set. This suggests that some portion of the training setup is better matched to the public evaluation set. The fully-balanced models from this paper show a similar drop in \(d^{\prime}\) from public to internal evaluation sets, -0.060 (2.828 to 2.768), compared to -0.077 for the published model (2.760 to 2.683), confirming that balancing explains the decreased performance. The model of [3] performs better than our model on mAP. mAP and \(d^{\prime}\) are sensitive to different parts of the class prior distribution, and we believe that the AST model is particularly well-tuned for mAP.
We found qualitatively similar results when using the ImageNet ViT model [2] used by [3] for pretraining instead of our audio pretraining. We observed better evaluation performance with partial balancing, and we observed worse results on the internal evaluation set with full balancing. This suggests that these phenomena are not artifacts of the audio pretraining.
Our experiments showed that balancing significantly accelerated convergence in the fine-tuning regime. The peak mAP on the validation set for the fully-balanced training occurred after 33M examples, while the peak for the unbalanced training occurred after 104M examples. Each class reached peak performance at around the same step rather than varying with prior. [8] showed that this might be possible, but did not train the models to convergence.
It is possible that this accelerated training convergence with balancing explains the 0.083 mAP difference from removing balancing shown in Table XII of [9]. That paper trained for a fixed number of epochs that we presume was optimal for full balancing. We believe that it is possible that a model trained to convergence without balancing would have seen a much smaller decrement in mAP, much closer to our result of 0.014.
We did not find meaningful correlation between per-class performance changes and the prior of the class. When performance increased or decreased it did so on average for all classes regardless of prior. Intuitively, we expected that increased representation of rare classes in training should disproportionately improve model performance for those classes, but our measurements did not back this up.
The equivocal impact of class balancing raises the question: why _doesn't_ it work? More training examples will, in general, improve classifier performance, yet increased representation of minority classes shows minimal benefit in Figure 3. One point is that although balancing presents rare classes more often, we are not in fact increasing the diversity of those examples, but simply repeating them (with augmentation). These large deep nets have so many parameters that perhaps they are effectively fully learning from the examples provided - even the rare ones - in the unbalanced regime, so balancing does not confer significant additional benefit.
It is possible to create other balancing schemes than the one we have described. The use, degree, and method of oversampling should be viewed as a training hyperparameter. We recommend adjusting the balancing based on performance on a held-out validation set for best generalization and to avoid overfitting to the test set.
We also observe that, since the magnitude and polarity of metric change is dependent on what should be insignificant details of the evaluation set, one should be cautious when interpreting these kinds of effects on AudioSet.
|
2309.03962 | Floquet theory and stability for Hamiltonian partial differential
equations | We analyze Floquet theory as it applies to the stability and instability of
periodic traveling waves in Hamiltonian PDEs. Our investigation focuses on
several examples of such PDEs, including the generalized KdV and BBM equations
(third order), the nonlinear Schr\"odinger and Boussinesq equations (fourth
order), and the Kawahara equation (fifth order).
Our analysis reveals that the characteristic polynomial of the monodromy
matrix inherits symmetry from the underlying PDE, enabling us to determine the
essential spectrum along the imaginary axis and bifurcations of the spectrum
away from the axis, employing the Floquet discriminant. We present numerical
evidence to support our analytical findings. | Jared C Bronski, Vera Mikyoung Hur, Robert Marangell | 2023-09-07T18:25:28Z | http://arxiv.org/abs/2309.03962v1 | # Floquet theory and stability analysis for Hamiltonian PDEs
###### Abstract
We analyze Floquet theory as it applies to the stability and instability of periodic traveling waves in Hamiltonian PDEs. Our investigation focuses on several examples of such PDEs, including the generalized KdV and BBM equations (third order), the nonlinear Schrodinger and Boussinesq equations (fourth order), and the Kawahara equation (fifth order). Our analysis reveals that the characteristic polynomial of the monodromy matrix inherits symmetry from the underlying PDE, enabling us to determine the essential spectrum along the imaginary axis and bifurcations of the spectrum away from the axis, employing the Floquet discriminant. We present numerical evidence to support our analytical findings.
###### Contents
* 1 Introduction
* 1.1 Spectrum on the imaginary axis
* 1.2 Bifurcation of the spectrum away from the imaginary axis
* 1.3 Main results
* 1.4 Numerical method
* 2 Third order equations
* 2.1 The generalized KdV equation
* 2.2 Numerical experiments for equations of KdV type
* 2.3 The generalized BBM equation
* 2.4 Numerical experiments for equations of BBM type
* 2.5 Spectrum along the imaginary axis towards \(\pm i\infty\)
* 3 Fourth order equations
* 3.1 The nonlinear Schrodinger equation
* 3.2 Trivial phase solutions
* 3.3 Nontrivial phase solutions
* 3.4 The Boussinesq equation
* 3.5 Spectrum along the imaginary axis towards \(\pm i\infty\)
* 4 The Kawahara equation
* 4.1 Numerical experiments
* 4.2 Spectrum along the imaginary axis towards \(\pm i\infty\)
## 1 Introduction
The subject of the investigation here is the stability and instability of periodic traveling waves in Hamiltonian partial differential equations (PDEs) in one spatial dimension. This entails addressing spectral problems of the form
\[\lambda v=\mathbf{\mathcal{J}}\mathbf{\mathcal{L}}v,\qquad\lambda\in\mathbb{C}, \tag{1.1}\]
where \(\mathbf{\mathcal{J}}\) represents a symplectic form, \(\mathbf{\mathcal{L}}\) is a linear self-adjoint operator corresponding to the second variation of an appropriate Hamiltonian, and \(\mathbf{\mathcal{J}}\mathbf{\mathcal{L}}\) has periodic coefficients. It is well-established that the \(L^{2}(\mathbb{R})\) essential spectrum of such an operator remains invariant under the transformations
\[\lambda\mapsto-\lambda\quad\text{and}\quad\lambda\mapsto\overline{\lambda},\]
implying that the spectrum is symmetric with respect to reflections across the real and imaginary axes. Our emphasis lies in quasi-periodic eigenvalue problems that exhibit symmetry of this kind. Prior research on this subject can be found in [23, 22, 24, 5], among many others, and [35] in particular. See also [32] for some similar, related results.
We can reformulate (1.1) as
\[\mathbf{v}_{x}=\mathbf{A}(x,\lambda)\mathbf{v}, \tag{1.2}\]
where \(\mathbf{A}(x,\lambda)\) is an \(n\times n\) matrix-valued function of \(x\in\mathbb{R}\), satisfying \(\mathbf{A}(x+T,\lambda)=\mathbf{A}(x,\lambda)\) for some \(T>0\), the period. Throughout, we assume _generalized Hamiltonian symmetry_. That is,
* (A1) \(\mathbf{A}(x,\lambda)\) is real for \(\lambda\in\mathbb{R}\).
* (A2) \(\mathbf{A}^{\top}(x,\lambda)\mathbf{B}(\lambda)=-\mathbf{B}(\lambda)\mathbf{A} (x,-\lambda)\) for some matrix \(\mathbf{B}(\lambda)\), independent of \(x\), and nonsingular everywhere except for at most finitely many values of \(\lambda\).
These assumptions can be somewhat relaxed--for instance, for the generalized BBM equation--but they encompass the majority of relevant examples. Importantly, \(n\) does not have to be even. Here we present examples for \(n=3\) (such as the generalized KdV and BBM equations) and \(n=5\) (such as the fifth-order KdV or Kawahara equation). Additionally, we remark that (A2) is weaker than the infinitesimal symplecticity assumption [25, 41]. Among all the examples discussed herein, only the linearizations of the nonlinear Schrodinger equation about a trivial phase solution and the Boussinesq equation about a stationary solution satisfy infinitesimal symplecticity. For \(n\) odd, infinitesimal symplecticity implies that \(\mathbf{A}(x,\lambda)\) must be singular, but generalized Hamiltonian symmetry does not.
We define the _monodromy matrix_ of (1.2) as
\[\mathbf{M}(\lambda)=\mathbf{V}(T,\lambda),\quad\text{where}\quad\mathbf{V}_{ x}=\mathbf{A}(x,\lambda)\mathbf{V}\quad\text{and}\quad\mathbf{V}(0,\lambda)= \mathbf{I}_{n}. \tag{1.3}\]
Here \(\mathbf{I}_{n}\) represents the \(n\times n\) identity matrix. We define the _characteristic polynomial_ of the monodromy matrix of (1.2) as
\[p(\mu,\lambda)=\det(\mathbf{M}(\lambda)-\mu\mathbf{I}_{n}). \tag{1.4}\]
The monodromy matrix and the characteristic polynomial inherit symmetry from the underlying ODE. This can prove particularly advantageous when investigating the stability and instability of periodic traveling waves in Hamiltonian PDEs.
**Lemma 1** (Generalized Hamiltonian symmetry).: _Suppose that \(\mathbf{A}(x,\lambda)\) is an \(n\times n\) matrix-valued function, periodic in \(x\), and (A1) and (A2) hold true. Let \(\mathbf{M}(\lambda)\) denote the monodromy matrix of (1.2). Then \(\mathbf{M}(\lambda)\) satisfies_
\[\mathbf{M}(\lambda)=\mathbf{B}^{-\top}(\lambda)\mathbf{M}^{-\top}(-\lambda) \mathbf{B}^{\top}(\lambda). \tag{1.5}\]
_Additionally, suppose that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\). Let the characteristic polynomial of the monodromy matrix take the form_
\[p(\mu,\lambda)=\sum_{k=0}^{n}(-\mu)^{k}e_{n-k}(\lambda),\]
_where \(e_{k}(\lambda)\) denotes the \(k\)-th elementary symmetric polynomial of the eigenvalues of \(\mathbf{M}(\lambda)\). That is,_
\[e_{0}(\lambda)=1\quad\text{and}\quad e_{k}(\lambda)=\sum_{1\leqslant j_{1}<j_{2}<\cdots<j_{k}\leqslant n}\mu_{j_{1}}\mu_{j_{2}}\cdots\mu_{j_{k}},\quad k=1,2,\ldots,n,\]
_where \(\mu_{1},\mu_{2},\ldots,\mu_{n}\) are the eigenvalues of \(\mathbf{M}(\lambda)\). The elementary symmetric polynomials must then satisfy_
\[e_{k}(\lambda)=e_{n-k}(-\lambda),\quad k=0,1,2,\ldots,n,\quad\text{for $ \lambda\in\mathbb{C}$} \tag{1.6}\]
_and, in turn,_
\[e_{k}(\lambda)=\overline{e_{n-k}(\lambda)},\quad k=0,1,2,\ldots,n,\quad\text{ for $\lambda\in i\mathbb{R}$}. \tag{1.7}\]
_Particularly, for \(n\) even, \(e_{n/2}(\lambda)\) is necessarily real._
Proof.: Recalling (1.3), we observe that \(\mathbf{V}^{-\top}(x,\lambda)\) satisfies
\[\mathbf{V}_{x}^{-\top}(x,\lambda)=-\mathbf{A}^{\top}(x,\lambda)\mathbf{V}^{- \top}(x,\lambda)\quad\text{and}\quad\mathbf{V}^{-\top}(0,\lambda)=\mathbf{I}_ {n},\]
whence \(\mathbf{W}(x,\lambda):=\mathbf{B}^{-1}(\lambda)\mathbf{V}^{-\top}(x,\lambda) \mathbf{B}(\lambda)\) satisfies
\[\mathbf{W}_{x}(x,\lambda)=-\mathbf{B}^{-1}(\lambda)\mathbf{A}^{\top}(x, \lambda)\mathbf{B}(\lambda)\mathbf{W}(x,\lambda)=\mathbf{A}(x,-\lambda) \mathbf{W}(x,\lambda)\quad\text{and}\quad\mathbf{W}(0,\lambda)=\mathbf{I}_{n},\]
by (A2). Additionally, recalling (1.3), we observe that \(\mathbf{V}(x,-\lambda)\) satisfies
\[\mathbf{V}_{x}(x,-\lambda)=\mathbf{A}(x,-\lambda)\mathbf{V}(x,-\lambda)\quad \text{and}\quad\mathbf{V}(0,-\lambda)=\mathbf{I}_{n}.\]
Therefore, (1.5) follows by uniqueness.
Suppose that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\), and we observe
\[\sum_{k=0}^{n}(-\mu)^{k}e_{n-k}(-\lambda)= \det(\mathbf{M}(-\lambda)-\mu\mathbf{I}_{n})=\det\bigl{(} \mathbf{M}^{-1}(\lambda)-\mu\mathbf{I}_{n}\bigr{)}\] \[= (-\mu)^{n}\det\bigl{(}\mathbf{M}^{-1}(\lambda)\bigr{)}\det\bigl{(} \mathbf{M}(\lambda)-\mu^{-1}\mathbf{I}_{n}\bigr{)}=\sum_{k=0}^{n}(-\mu)^{k}e_{ k}(\lambda).\]
Therefore, (1.6) follows. Here the second equality follows from generalized Hamiltonian symmetry of the monodromy matrix, and the last equality follows because
\[\det(\mathbf{V}(x,\lambda))=\exp\left(\int_{0}^{x}\operatorname{tr}(\mathbf{A}(y,\lambda))\,dy\right)\det(\mathbf{V}(0,\lambda))=\det(\mathbf{V}(0,\lambda)),\]
by hypothesis. This completes the proof.
Note that \(e_{k}(\lambda)\) can be related to \(\operatorname{tr}\bigl{(}\mathbf{M}^{j}(\lambda)\bigr{)}\), \(j=1,2,\ldots,k\), using the well-known Newton formula as
\[ke_{k}(\lambda)=\sum_{j=1}^{k}(-1)^{j-1}e_{k-j}(\lambda)\operatorname{tr} \bigl{(}\mathbf{M}^{j}(\lambda)\bigr{)}. \tag{1.8}\]
We will primarily work with \(e_{k}(\lambda)\), but (1.8) allows us to express them in terms of \(\operatorname{tr}\bigl{(}\mathbf{M}^{k}(\lambda)\bigr{)}\), which can prove particularly advantageous for numerical computations.
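As an illustration, and not part of the analysis proper, the following Python sketch recovers \(e_{1},e_{2},\ldots,e_{n}\) from \(\operatorname{tr}\bigl{(}\mathbf{M}^{j}\bigr{)}\) via (1.8); the \(3\times 3\) matrix at the end is an arbitrary stand-in for a monodromy matrix.

```python
import numpy as np

def elementary_from_traces(M):
    """Recover e_1, ..., e_n of the eigenvalues of M from tr(M^j) using
    Newton's formula (1.8): k e_k = sum_{j=1}^k (-1)^(j-1) e_{k-j} tr(M^j)."""
    n = M.shape[0]
    traces = [np.trace(np.linalg.matrix_power(M, j)) for j in range(1, n + 1)]
    e = [1.0]  # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (j - 1) * e[k - j] * traces[j - 1] for j in range(1, k + 1))
        e.append(s / k)
    return e[1:]

# sanity check against the characteristic polynomial: np.poly returns the
# coefficients of det(mu I - M), whose signs alternate with e_k
M = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [2.0, -1.0, 0.0]])
print(elementary_from_traces(M))
print(np.poly(M))
```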
Lemma 1 implies--perhaps not surprisingly--that if (1.2) exhibits generalized Hamiltonian symmetry then only half of the invariants of \(\mathbf{M}(\lambda)\), \(\lambda\in i\mathbb{R}\), are linearly independent. However, the full implications of this observation have not been thoroughly explored. Our interest here lies in the spectrum along the imaginary axis, which holds significant importance when investigating the stability and instability of periodic traveling waves in Hamiltonian PDEs. Our objective is to determine the algebraic multiplicity of the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) on the imaginary axis, as well as the bifurcation of the spectrum away from the axis, by utilizing the _Floquet discriminant_\(\boldsymbol{f}:i\mathbb{R}\to\mathbb{R}^{n-1}\), defined element-wise as
\[f_{k}(\lambda)=\begin{cases}\operatorname{Re}(e_{(k+1)/2}(\lambda)),&1\leqslant k \leqslant n-1,\text{odd},\\ \operatorname{Im}(e_{k/2}(\lambda)),&1\leqslant k\leqslant n-1,\text{even}. \end{cases}\]
We identify \(\mathbb{R}^{2}\) with \(\mathbb{C}\) whenever convenient, and \(\boldsymbol{f}:i\mathbb{R}\to\begin{cases}\mathbb{C}^{(n-1)/2},&n\ \ \text{odd},\\ \mathbb{C}^{n/2-1}\times\mathbb{R},&n\ \ \text{even}.\end{cases}\) For \(n=2\), note from (1.7) and (1.8) that the Floquet discriminant becomes \(\operatorname{tr}(\mathbf{M}(\lambda))\), which is real for \(\lambda\in i\mathbb{R}\). For \(n=3\), \(\operatorname{tr}(\mathbf{M}(\lambda))\) takes values in \(\mathbb{C}\), and for \(n=4\), the Floquet discriminant consists of \(\operatorname{tr}(\mathbf{M}(\lambda))\) and \(\frac{1}{2}(\operatorname{tr}(\mathbf{M}(\lambda))^{2}-\operatorname{tr} \bigl{(}\mathbf{M}^{2}(\lambda)\bigr{)})\). The latter is necessarily real.
### Spectrum on the imaginary axis
If the monodromy matrix of (1.2), evaluated at \(\lambda\in\mathbb{C}\), possesses \(m\) simple eigenvalues along the unit circle--that is, the characteristic polynomial of the monodromy matrix has \(m\) simple roots on the unit circle--then it follows from the Floquet theorem that \(\lambda\) belongs to the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) with an algebraic multiplicity \(m\). Consequently, our interest lies in determining the number of eigenvalues of the monodromy matrix on the unit circle. This task can be intricate, but we will explore how generalized Hamiltonian symmetry can facilitate the process.
Suppose that \(\mathbf{A}(x,\lambda)\) is an \(n\times n\) matrix-valued function, periodic in \(x\), satisfying (A1) and (A2). We deduce from (1.7) that the characteristic polynomial of the monodromy matrix of (1.2) obeys
\[p(\mu,\lambda)=(-1)^{n}\mu^{n}\,\overline{p\left(\frac{1}{\overline{\mu}},\lambda\right)}\ \ \text{ for }\lambda\in i\mathbb{R},\]
whence if \(\mu\) is a root of \(p(\cdot,\lambda)\) for \(\lambda\in i\mathbb{R}\) then \(\frac{1}{\overline{\mu}}\) is also a root, necessarily with the same multiplicity. In other words, the roots of \(p(\cdot,\lambda)\) for \(\lambda\in i\mathbb{R}\) are symmetric with respect to reflection across the unit circle. To determine the number of roots on the unit circle, we exploit the fact that linear fractional transformations map generalized circles to generalized circles. More specifically, we introduce
\[p^{\sharp}(\nu,\lambda):=(1-i\nu)^{n}p\left(\frac{1+i\nu}{1-i\nu},\lambda \right), \tag{1.9}\]
which, for \(\lambda\in i\mathbb{R}\), is a real polynomial up to a constant factor, and whose real roots correspond to the roots of \(p(\mu,\lambda)\) on the unit circle. (Care must be taken, though, when accounting for roots at \(\infty\).) Importantly, the discriminant transforms under the action of \(\operatorname{SL}(2,\mathbb{C})\). Specifically, if \(p\) is a polynomial of degree \(n\) then
\[\operatorname{disc}_{\nu}\left((c+d\nu)^{n}p\left(\frac{a+b\nu}{c+d\nu} \right)\right)=(ad-bc)^{n(n-1)}\operatorname{disc}_{\mu}p(\mu),\]
where "\(\operatorname{disc}\)" means the discriminant. For this and other facts concerning the classical discriminant we refer the reader to the text of Gelfand, Kapranov and Zelevinsky[15]. Consequently, \(\operatorname{disc}_{\nu}p^{\sharp}(\nu,\lambda)=(-2i)^{n(n-1)}\operatorname {disc}_{\mu}p(\mu,\lambda)\), and conditions involving the sign of the discriminant of \(p(\mu,\lambda)\) can be expressed in terms of the sign of \(\operatorname{disc}_{\nu}p^{\sharp}(\nu,\lambda)\). (Care must be taken, though, because the degree of \(p^{\sharp}\) is less than the degree of \(p\) when \(p\) has a root of \(-1\).) It is important
to note that the discriminant of \(p^{\sharp}(\nu,\lambda)\) and, hence, \(\operatorname{disc}_{\mu}p(\mu,\lambda)\) are real for \(\lambda\in i\mathbb{R}\). See, for instance, [15, Chapter 12] for more on discriminant and resultant.
Consequently, the task of counting the number of roots on the unit circle of the characteristic polynomial \(p(\mu,\lambda)\) for \(\lambda\in i\mathbb{R}\) becomes equivalent to counting the number of real roots of the real polynomial \(p^{\sharp}(\nu,\lambda)\), defined in (1.9). Such counting can be accomplished, for instance, by utilizing the Sturm sequence of a polynomial. When dealing with real polynomials with a degree \(\leqslant 3\), the number of real roots can be determined by inspecting the discriminant of the polynomial. For polynomials with a degree \(\geqslant 4\), additional auxiliary quantities, which depend polynomially on the coefficients of the polynomial, must be taken into account, alongside the discriminant.
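To make the counting procedure concrete, here is a small sympy sketch under stated assumptions: it applies the substitution (1.9) to a self-inversive cubic of the form \(-\mu^{3}+f\mu^{2}-\overline{f}\mu+1\), the form arising for third-order equations below, with a sample value of \(f\) chosen for illustration. For this family \(p^{\sharp}\) works out to be \(2i\) times a real polynomial, so we strip the constant factor before counting real roots.

```python
import sympy as sp

mu, nu = sp.symbols('mu nu')
f = sp.Rational(1, 2) + sp.I          # sample Floquet discriminant value

# self-inversive cubic; its roots come in pairs mu, 1/conj(mu)
p = -mu**3 + f*mu**2 - sp.conjugate(f)*mu + 1

# the Moebius map mu = (1 + i nu)/(1 - i nu) sends the real line to the unit circle
psharp = sp.cancel((1 - sp.I*nu)**3 * p.subs(mu, (1 + sp.I*nu)/(1 - sp.I*nu)))
psharp = sp.Poly(sp.expand(psharp / (2*sp.I)), nu)   # 2i * (real polynomial)

# real roots of p_sharp <-> roots of p on the unit circle (Sturm-type count)
print(psharp)
print(sp.real_roots(psharp))
```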
Let's examine a somewhat elementary example, involving a second-order equation
\[v_{xx}+Q(x)v=\lambda^{2}v, \tag{1.10}\]
where \(Q(x)\) is a real-valued and periodic function. We remark that the left side of (1.10) defines a self-adjoint operator. We can reformulate (1.10) as
\[\mathbf{v}_{x}=\begin{pmatrix}0&\lambda\\ \lambda-Q(x)/\lambda&0\end{pmatrix}\mathbf{v}=:\mathbf{A}(x,\lambda)\mathbf{v },\qquad\lambda\in\mathbb{C}, \tag{1.11}\]
and we confirm that (A1) and (A2) hold true for \(\mathbf{B}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\). Incidentally, (1.11) does not take the usual symplectic form of the spectral problem for the Schrodinger operator, for which \(\mathbf{A}^{\top}(x,\lambda)\mathbf{B}=-\mathbf{B}\mathbf{A}(x,\lambda)\) for some skew-symmetric matrix \(\mathbf{B}\).
Let \(\mathbf{M}(\lambda)\) denote the \(2\times 2\) monodromy matrix of (1.11). The characteristic polynomial of the monodromy matrix takes the form
\[p(\mu,\lambda)=\mu^{2}-\operatorname{tr}(\mathbf{M}(\lambda))\mu+1,\]
and (1.9) becomes \(p^{\sharp}(\nu,\lambda)=-(2+\operatorname{tr}(\mathbf{M}(\lambda)))\nu^{2}+2 -\operatorname{tr}(\mathbf{M}(\lambda))\). Consequently,
\[\operatorname{disc}_{\nu}p^{\sharp}(\nu,\lambda)=4(4-\operatorname{tr}( \mathbf{M}(\lambda))^{2}).\]
This reproduces the well-known result [34, 12, 41] that the \(L^{2}(\mathbb{R})\) essential spectrum of (1.11) has a band when the Floquet discriminant \(\operatorname{tr}(\mathbf{M}(\lambda))\), which is real for \(\lambda\in i\mathbb{R}\), lies within the interval \((-2,2)\), a band edge with a double eigenvalue of \(1\) or \(-1\) when \(\operatorname{tr}(\mathbf{M}(\lambda))=\pm 2\), and a gap otherwise.
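As a quick numerical companion to this classical picture, the following sketch integrates (1.11) over one period to form the monodromy matrix and tests the band condition \(|\operatorname{tr}(\mathbf{M}(\lambda))|<2\); the potential \(Q\) and the sample values of \(\lambda\) are assumptions made purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
Q = lambda x: 0.1 * np.cos(x)          # sample periodic potential in (1.10)

def monodromy(lam):
    """Monodromy matrix of (1.11): integrate V' = A(x, lam) V with V(0) = I_2."""
    def rhs(x, v):
        A = np.array([[0, lam], [lam - Q(x) / lam, 0]], dtype=complex)
        return (A @ v.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, [0, T], np.eye(2, dtype=complex).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

# along the imaginary axis the Floquet discriminant tr(M(lambda)) is real
for lam in [0.3j, 0.6j, 0.9j]:
    tr = np.trace(monodromy(lam)).real
    print(lam, tr, "band" if abs(tr) < 2 else "gap")
```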
When dealing with higher-order equations that exhibit generalized Hamiltonian symmetry, we wish to likewise establish that the monodromy matrix of (1.2) at \(\lambda\in i\mathbb{R}\) has \(m\) simple eigenvalues on the unit circle, or, equivalently, \(\lambda\) belongs to the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) with an algebraic multiplicity \(m\), if and only if the Floquet discriminant lies within a region \(\Omega_{m}\subset\mathbb{R}^{n-1}\), which can be explicitly computed. For example, for \(n=3\) and considering the generalized KdV equation, we can demonstrate that \(\Omega_{3}\subset\mathbb{R}^{2}\) is a deltoidal region, bounded by a hypocycloid with three cusps, and \(\lambda\in i\mathbb{R}\) is in the \(L^{2}(\mathbb{R})\) essential spectrum with an algebraic multiplicity three if and only if the Floquet discriminant \(\operatorname{tr}(\mathbf{M}(\lambda))\) lies within \(\Omega_{3}\). Similarly, for \(n=4\) and the nonlinear Schrodinger equation, \(\Omega_{4}\subset\mathbb{R}^{3}\) is a tetrahedral region with four cusps. Theorems 2 and 3 provide further details.
### Bifurcation of the spectrum away from the imaginary axis
We are also interested in Hopf bifurcations of the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) away from the imaginary axis, which can manifest not only in the vicinity of the origin in the complex plane--namely, modulational instability--but also away from \(0\in\mathbb{C}\). This holds significant importance when investigating the stability and instability of periodic traveling waves in Hamiltonian PDEs. For second-order equations, the third author and collaborators [23, 22, 24, 35] have offered a
comprehensive theoretical explanation of such bifurcations. For higher-order equations, however, our understanding remains limited, and only a few results **CITE!** are available for small amplitudes, relying on perturbative techniques. Detecting all the spectrum off the imaginary axis for equations of all orders could be an insurmountable task. Here our objective is to derive a _bifurcation index_, a polynomial in the Floquet discriminant and its derivative, which can be utilized to locate points along the imaginary axis where bifurcations of the spectrum away from the axis may occur.
We continue assuming that \(\mathbf{A}(x,\lambda)\) is an \(n\times n\) matrix-valued function, periodic in \(x\), and (A1) and (A2) hold true. Suppose that the characteristic polynomial of the monodromy matrix of (1.2) satisfies
\[p(\mu_{0},\lambda_{0})=0\quad\text{for some }\mu_{0}\in\mathbb{C},\,|\mu_{0}|=1, \quad\text{for some }\lambda_{0}\in i\mathbb{R}.\]
(More generally, the characteristic polynomial may vanish on subsets of the unit circle, each with co-dimension one.) It then follows from the implicit function theorem that we can solve \(p(\mu,\lambda)=0\) uniquely for \(\lambda\) as a function of \(\mu\) in a neighborhood of \(\mu_{0}\) and \(\lambda_{0}\), as long as \(p_{\lambda}(\mu_{0},\lambda_{0})\neq 0\). Consequently, a necessary condition for the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) to bifurcate at \(\lambda_{0}\in i\mathbb{R}\) away from the imaginary axis in a transversal manner--in addition to the spectrum on the axis itself--is
\[p_{\lambda}(\mu_{0},\lambda_{0})=0.\]
Since both \(p(\mu,\lambda)\) and \(p_{\lambda}(\mu,\lambda)\) are polynomials in \(\mu\), they will simultaneously vanish if and only if their resultant with respect to \(\mu\) vanishes. Therefore, a necessary condition for the bifurcation of the spectrum away from the imaginary axis is
\[\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))=0\quad \text{for some }\lambda\in i\mathbb{R}, \tag{1.12}\]
where "res" means the resultant.
For example, let us consider (1.11); a straightforward calculation reveals that
\[\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))=\operatorname {tr}(\mathbf{M}_{\lambda}(\lambda))^{2}.\]
Indeed, if the \(L^{2}(\mathbb{R})\) essential spectrum of (1.11) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis then \(\operatorname{tr}(\mathbf{M}_{\lambda}(\lambda))=0\). Conversely, it is well-known [34] that the Floquet discriminant for a second-order self-adjoint operator is monotone within bands.
For third-order equations, under some additional assumptions, we can establish that (1.12) is also a sufficient condition for the bifurcation of the spectrum away from the imaginary axis. Theorem 4 provides further details.
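The resultant computation can be reproduced symbolically. A minimal sympy sketch, treating the Floquet discriminant as an abstract function of \(\lambda\), verifies the displayed identity for the quadratic characteristic polynomial associated with (1.11):

```python
import sympy as sp

mu, lam = sp.symbols('mu lambda')
t = sp.Function('t')                    # Floquet discriminant tr(M(lambda))

p = mu**2 - t(lam)*mu + 1               # characteristic polynomial for (1.11)
res = sp.resultant(p, sp.diff(p, lam), mu)
print(sp.simplify(res))                 # -> Derivative(t(lambda), lambda)**2
```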
### Main results
For second-order self-adjoint operators, such as (1.11), the Floquet discriminant, the trace of the monodromy matrix, can be used to determine whether the \(L^{2}(\mathbb{R})\) essential spectrum has a band or a gap. For third-order and fourth-order equations that exhibit generalized Hamiltonian symmetry, the Floquet discriminant can likewise determine the algebraic multiplicity of the spectrum along the imaginary axis.
**Theorem 2** (Spectrum on the imaginary axis for third-order equations).: _Suppose that \(\mathbf{A}(x,\lambda)\) is a \(3\times 3\) matrix-valued function, periodic in \(x\), satisfying (A1) and (A2). Suppose that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\). Let \(\mathbf{M}(\lambda)\) denote the monodromy matrix of (1.2). Then \(\mathbf{M}(\lambda)\) for \(\lambda\in i\mathbb{R}\) has either one or three eigenvalues along the unit circle, counted by algebraic multiplicity._
_Recall that the Floquet discriminant is \(\boldsymbol{f}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\) for \(\lambda\in i\mathbb{R}\), and we write \(\boldsymbol{f}(\lambda)=f_{1}(\lambda)+if_{2}(\lambda)\). Let_
\[\begin{split}\Delta_{3}(\lambda)=&|\boldsymbol{f}( \lambda)|^{4}-4\boldsymbol{f}(\lambda)^{3}-4\overline{\boldsymbol{f}(\lambda)} ^{3}+18|\boldsymbol{f}(\lambda)|^{2}-27\\ =&((f_{1}^{2}+f_{2}^{2})^{2}-8(f_{1}^{3}-3f_{1}f_{2}^ {2})+18(f_{1}^{2}+f_{2}^{2})-27)(\lambda),\end{split} \tag{1.13}\]
_and the following hold true:_
* _If_ \(\Delta_{3}(\lambda)<0\) _then_ \(\mathbf{M}(\lambda)\) _has three distinct eigenvalues on the unit circle, implying that_ \(\lambda\in i\mathbb{R}\) _belongs to the_ \(L^{2}(\mathbb{R})\) _essential spectrum of (_1.1_) with an algebraic multiplicity three._
* _If_ \(\Delta_{3}(\lambda)>0\)_, on the other hand, then_ \(\mathbf{M}(\lambda)\) _has three eigenvalues: one on the unit circle, one inside the unit circle, and one outside the unit circle. The latter two have the same argument._
* _If_ \(\Delta_{3}(\lambda)=0\) _then_ \(\mathbf{M}(\lambda)\) _has three eigenvalues on the unit circle, counted with their algebraic multiplicities, and at least two of them are degenerate._
_Alternatively, let_
\[\varGamma=\{2e^{i\theta}+e^{-2i\theta}\in\mathbb{C}:\theta\in[-\pi,\pi]\} \tag{1.14}\]
_denote the deltoid curve, or Steiner curve, a hypocycloid with three cusps, and the following hold true:_
* _If_ \(\mathbf{f}(\lambda)\) _lies inside_ \(\varGamma\)_, the connected component of_ \(\mathbb{C}\setminus\varGamma\) _containing_ \(0\)_, then_ \(\mathbf{M}(\lambda)\) _has three distinct eigenvalues on the unit circle._
* _If_ \(\mathbf{f}(\lambda)\) _lies outside_ \(\varGamma\)_, the connected component of_ \(\mathbb{C}\setminus\varGamma\) _containing_ \(\infty\)_, then_ \(\mathbf{M}(\lambda)\) _has three eigenvalues: one on the unit circle, one inside the unit circle, and one outside the unit circle._
* _If_ \(\mathbf{f}(\lambda)\) _lies on_ \(\varGamma\) _then_ \(\mathbf{M}(\lambda)\) _has three eigenvalues on the unit circle and at least one eigenvalue has a higher multiplicity. Specifically, one eigenvalue has an algebraic multiplicity one and another has a multiplicity two unless_ \(\mathbf{f}(\lambda)\) _coincides with one of the cusps of_ \(\varGamma\)_--_\(3\)_,_ \(3e^{2\pi i/3}\)_, and_ \(3e^{4\pi i/3}\)_--in which cases_ \(1\)_,_ \(e^{2\pi i/3}\)_, and_ \(e^{4\pi i/3}\) _respectively are eigenvalues with an algebraic multiplicity three._
Proof.: The proof begins by observing that the characteristic polynomial of the monodromy matrix of (1.2) takes the form
\[p(\mu,\lambda)= -\mu^{3}+\mathbf{f}(\lambda)\mu^{2}-\mathbf{f}(-\lambda)\mu+1,\qquad \lambda\in\mathbb{C},\]
where \(\mathbf{f}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\), and
\[p(\mu,\lambda)= -\mu^{3}+\mathbf{f}(\lambda)\mu^{2}-\overline{\mathbf{f}(\lambda)}\mu+1,\qquad\lambda\in i\mathbb{R}, \tag{1.15}\]
by (1.6) and (1.7). Consequently, if \(\mu\) is a root of \(p(\cdot,\lambda)\) for \(\lambda\in i\mathbb{R}\) then \(\frac{1}{\overline{\mu}}\) is also a root. Since \(p(\mu,\lambda)\), \(\lambda\in i\mathbb{R}\), has three roots, counted with their algebraic multiplicities, at least one root must satisfy \(\mu=\frac{1}{\overline{\mu}}\), implying that it lies on the unit circle. Consequently, \(p(\mu,\lambda)\), \(\lambda\in i\mathbb{R}\), has either one or three roots on the unit circle.
To determine whether all three roots of \(p(\mu,\lambda)\), \(\lambda\in i\mathbb{R}\), lie on the unit circle or if two of them fall off it, we can inspect the sign of the discriminant. Specifically, the number of roots changes if and only if \(p(\mu,\lambda)\), \(\lambda\in i\mathbb{R}\), has a double root or, equivalently, the discriminant becomes zero. Let \(\mathbf{f}=f_{1}+if_{2}\), and \(\operatorname{disc}(-\mu^{3}+(f_{1}+if_{2})\mu^{2}-(f_{1}-if_{2})\mu+1)\) gives rise to \(\Delta_{3}\), defined in (1.13).
Additionally, a direct calculation reveals that the deltoid curve, defined by (1.14), in the complex coordinates, or, equivalently,
\[(2\cos(\theta)+\cos(2\theta),2\sin(\theta)-\sin(2\theta))\]
in Cartesian coordinates, corresponds to \(\Delta_{3}=0\) in the \((f_{1},f_{2})\) coordinates. This completes the proof.
The left panel of Figure 1 depicts the region in \(\mathbb{R}^{2}(=\mathbb{C})\), enclosed by the deltoid curve \(\varGamma\), defined in (1.14). The monodromy matrix of (1.2) at \(\lambda\in i\mathbb{R}\) possesses three simple eigenvalues on the unit circle, or, equivalently, \(\lambda\) belongs to the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) with an algebraic multiplicity three, if and only if the Floquet discriminant lies within the deltoid region.
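In computations, membership in the deltoidal region is most easily tested through the sign of \(\Delta_{3}\). A minimal sketch, with sample values of the Floquet discriminant chosen for illustration:

```python
import numpy as np

def delta3(f):
    """The quantity (1.13) at a complex Floquet discriminant f; it is real
    because -4 f^3 - 4 conj(f)^3 = -8 Re(f^3)."""
    return (abs(f)**4 - 4*f**3 - 4*np.conj(f)**3 + 18*abs(f)**2 - 27).real

def circle_eigenvalues(f):
    """Number of eigenvalues of M(lambda) on the unit circle per Theorem 2."""
    d = delta3(f)
    if d < 0:
        return 3      # three distinct eigenvalues on the unit circle
    if d > 0:
        return 1      # one on the circle, one inside, one outside
    return 3          # boundary case: three on the circle, not all simple

# f = 0 lies inside the deltoid, f = 3 is a cusp, f = 4 lies outside
for f in [0j, 3 + 0j, 4 + 0j]:
    print(f, delta3(f), circle_eigenvalues(f))
```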
**Remark.** The deltoid curve \(\varGamma\), defined in (1.14) and also known as a Steiner curve, is a hypocycloid with three cusps, and it is the path traced by a point on the circumference of a circle with a radius \(1\) as it rolls without slipping along the interior of a circle with a radius \(3\). The fact that \(\varGamma\) shows up in Theorem 2 may not be as surprising as it appears. In fact, the range of the trace operator
\[\operatorname{tr}:\operatorname{SU}(3)\to\mathbb{C}\]
coincides with \(\varGamma\) and its interior [26]. Consequently, if \(\operatorname{tr}(\mathbf{M}(\lambda))\) lies within the interior of \(\varGamma\) then \(\mathbf{M}(\lambda)\) is similar to a matrix in \(\operatorname{SU}(3)\). This suggests a connection among the deltoid curve, the range of the trace operator, and the monodromy matrix.
**Theorem 3** (Spectrum on the imaginary axis for fourth-order equations).: _Suppose that \(\mathbf{A}(x,\lambda)\) is a \(4\times 4\) matrix-valued function, periodic in \(x\), satisfying (A1) and (A2). Suppose that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\). Let \(\mathbf{M}(\lambda)\) denote the monodromy matrix of (1.2). Then, for \(\lambda\in i\mathbb{R}\), it has zero, two, or four eigenvalues along the unit circle, counted by algebraic multiplicity._
_Recall that the Floquet discriminant is \(\boldsymbol{f}(\lambda)=(f_{1}(\lambda),f_{2}(\lambda),f_{3}(\lambda))\in \mathbb{R}^{3}(=\mathbb{C}\times\mathbb{R})\) for \(\lambda\in i\mathbb{R}\), where_
\[f_{1}(\lambda)+if_{2}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\quad \text{and}\quad f_{3}(\lambda)=\tfrac{1}{2}(\operatorname{tr}(\mathbf{M}( \lambda))^{2}-\operatorname{tr}\bigl{(}\mathbf{M}^{2}(\lambda)\bigr{)}). \tag{1.16}\]
_Let_
\[\begin{split}\Delta_{4}=&-4(f_{1}^{6}+f_{2}^{6})-12f_{1}^{2}f_{2}^{2}(f_{1}^{2}+f_{2}^{2})+(f_{1}^{2}+f_{2}^{2})^{2}f_{3}^{2}\\ &+36(f_{1}^{4}-f_{2}^{4})f_{3}-8(f_{1}^{2}-f_{2}^{2})f_{3}^{3}\\ &-60(f_{1}^{4}+f_{2}^{4})+312f_{1}^{2}f_{2}^{2}+16f_{3}^{4}-80(f_{1}^{2}+f_{2}^{2})f_{3}^{2}\\ &+288(f_{1}^{2}-f_{2}^{2})f_{3}-192(f_{1}^{2}+f_{2}^{2})-128f_{3}^{2}+256,\end{split} \tag{1.17}\]
\[P_{4}=8(2f_{1}+f_{3}+2)(2f_{3}-12)-48f_{2}^{2}, \tag{1.18}\]
\[\begin{split}D_{4}=-256(&4f_{1}^{4}+3f_{2}^{4}+f_{1}^{2}(4f_{2}^{2}+(-6+f_{3})^{2})\\ &+f_{2}^{2}(28+12f_{3}-f_{3}^{2})+4f_{1}^{3}(2+f_{3})\\ &-4(-2+f_{3})(2+f_{3})^{2}+16f_{1}(4+2f_{2}^{2}-f_{3}^{2})).\end{split} \tag{1.19}\]

_Suppose that \(\Delta_{4}(\lambda),P_{4}(\lambda),D_{4}(\lambda)\neq 0\). Then \(\lambda\in i\mathbb{R}\) belongs to the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) with:_

* _multiplicity_ \(4\) _if_ \(\Delta_{4}(\lambda)>0\) _and_ \(P_{4}(\lambda),D_{4}(\lambda)<0\)_,_
* _multiplicity_ \(2\) _if_ \(\Delta_{4}(\lambda)<0\)_, and_
* _multiplicity_ \(0\) _if_ \(\Delta_{4}(\lambda)>0\) _while_ \(P_{4}(\lambda)>0\) _or_ \(D_{4}(\lambda)>0\)_._

Figure 1: The deltoidal region in \(\mathbb{R}^{2}\) (left) and tetrahedral region in \(\mathbb{R}^{3}\) (right). For third-order and fourth-order equations with generalized Hamiltonian symmetry, the monodromy matrix has three and four simple eigenvalues, respectively, along the unit circle—or, equivalently, the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) has an algebraic multiplicity three and four, respectively, on the imaginary axis—if and only if the Floquet discriminant lies in the blue region.
Proof.: The proof resembles that of Theorem 2, observing that the characteristic polynomial of the monodromy matrix of (1.2) takes the form
\[p(\mu,\lambda)=\mu^{4}-(f_{1}(\lambda)+if_{2}(\lambda))\mu^{3}+f_{3}(\lambda) \mu^{2}-(f_{1}(\lambda)-if_{2}(\lambda))\mu+1 \tag{1.20}\]
for \(\lambda\in i\mathbb{R}\), by generalized Hamiltonian symmetry. Here \(f_{1}\), \(f_{2}\), \(f_{3}\) are defined in (1.16). We remark that \(f_{3}\) is necessarily real although \(f_{1}+if_{2}\) does not have to be. However, \(f_{1}+if_{2}\) becomes real, for instance, for the linearization of the nonlinear Schrodinger equation about a trivial phase solution, thanks to additional symmetry.
The discriminant of (1.20) leads to \(\Delta_{4}\), defined in (1.17). When \(\Delta_{4}\neq 0\), the eigenvalues of the monodromy matrix of (1.2) on the unit circle are distinct, and these eigenvalues determine the algebraic multiplicity of the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) on the imaginary axis. To count the number of eigenvalues of the monodromy matrix on the unit circle, we can inspect the signs of (1.17), (1.18), (1.19), provided that \(\Delta_{4}\), \(P_{4}\), \(D_{4}\neq 0\). We omit the details.
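A direct transcription of Theorem 3 into code reads as follows. This is a sketch that assumes \(\Delta_{4}\), \(P_{4}\), \(D_{4}\) are all nonzero; the sample point \((f_{1},f_{2},f_{3})=(0,0,0)\), for which \(p(\mu)=\mu^{4}+1\) has four distinct roots on the unit circle, serves as a sanity check.

```python
def multiplicity_on_axis(f1, f2, f3):
    """Algebraic multiplicity of lambda in iR per Theorem 3, assuming that
    Delta_4 (1.17), P_4 (1.18) and D_4 (1.19) are all nonzero."""
    delta4 = (-4*(f1**6 + f2**6) - 12*f1**2*f2**2*(f1**2 + f2**2)
              + (f1**2 + f2**2)**2*f3**2
              + 36*(f1**4 - f2**4)*f3 - 8*(f1**2 - f2**2)*f3**3
              - 60*(f1**4 + f2**4) + 312*f1**2*f2**2 + 16*f3**4
              - 80*(f1**2 + f2**2)*f3**2 + 288*(f1**2 - f2**2)*f3
              - 192*(f1**2 + f2**2) - 128*f3**2 + 256)
    p4 = 8*(2*f1 + f3 + 2)*(2*f3 - 12) - 48*f2**2
    d4 = -256*(4*f1**4 + 3*f2**4 + f1**2*(4*f2**2 + (f3 - 6)**2)
               + f2**2*(28 + 12*f3 - f3**2) + 4*f1**3*(2 + f3)
               - 4*(f3 - 2)*(f3 + 2)**2 + 16*f1*(4 + 2*f2**2 - f3**2))
    if delta4 > 0 and p4 < 0 and d4 < 0:
        return 4
    return 2 if delta4 < 0 else 0

print(multiplicity_on_axis(0.0, 0.0, 0.0))   # -> 4
```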
The right panel of Figure 1 shows the regions in \(\mathbb{R}^{3}(=\mathbb{C}\times\mathbb{R})\) that correspond to different algebraic multiplicities of the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) along the imaginary axis. The multiplicity of \(\lambda\in i\mathbb{R}\) is four when the Floquet discriminant lies within the tetrahedral region, depicted in blue. The multiplicity is zero--that is, \(\lambda\) is not in the spectrum--when the Floquet discriminant is within the red region, and the multiplicity is two in the remaining region of \(\mathbb{R}^{3}\).
The tetrahedral region is bounded in \(\mathbb{R}^{3}\) because if all eigenvalues of the monodromy matrix lie on the unit circle then
\[|(f_{1},f_{2})|\leqslant 4\quad\text{and}\quad|f_{3}|\leqslant 6.\]
The region has four cusps located at \((f_{1},f_{2},f_{3})=(\pm 4,0,6)\) and \((0,\pm 4,6)\), at which the characteristic polynomial of the monodromy matrix simplifies to \((\mu\mp 1)^{4}\) and \((\mu\mp i)^{4}\) respectively. The blue and the red regions are tangent to each other along the parabolic segments, in green, which can be parametrized as
\[(f_{1},f_{2},f_{3})=(-4\cos(\theta),0,2+4\cos^{2}(\theta))\quad\text{and}\quad (0,-4\cos(\theta),-2-4\cos^{2}(\theta)),\]
where \(\theta\in[-\pi,\pi]\). Along these curves, the monodromy matrix of (1.2) has two distinct eigenvalues each with an algebraic multiplicity two. The remaining four edges of the tetrahedral region, in magenta, can be represented by the curve parametrized as
\[(3\cos(\theta)+\cos(3\theta),-3\sin(\theta)+\sin(3\theta),6\cos(2\theta)).\]
Along these curves, the monodromy matrix has one eigenvalue with a multiplicity four.
Interestingly, when the tetrahedral region is projected onto the \(f_{3}=0\) plane, it forms a region bounded by an astroid curve in \(\mathbb{R}^{2}\), a hypocycloid with four cusps, which can be parametrized as
\[(f_{1},f_{2})=(3\cos(\theta)+\cos(3\theta),-3\sin(\theta)+\sin(3\theta)).\]
Recall the connection between the range of \(\mathrm{tr}:\mathrm{SU}(3)\to\mathbb{C}\) and the deltoid curve defined in (1.14). Similarly, the range of the trace operator \(\mathrm{tr}:\mathrm{SU}(4)\to\mathbb{C}\) coincides with the astroid curve and its interior.
**Remark**.: The deltoidal and tetrahedral regions in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) can be seen as analogous to taking the interval \((-2,2)\subset\mathbb{R}\) for second-order self-adjoint operators and extending it to third-order and fourth-order equations with generalized Hamiltonian symmetry. Going further, it is possible to explicitly calculate a region in \(\mathbb{R}^{n-1}\) for \(n\)-th order equations, such that the monodromy matrix of (1.2) at \(\lambda\in i\mathbb{R}\) has \(n\) simple eigenvalues on the unit circle or, equivalently, \(\lambda\) is in the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) with an algebraic multiplicity \(n\) if and only if the Floquet discriminant lies within the region. In Section 4 we present an example for \(n=5\). However, it is important to note that as the order of the equation increases, the Floquet discriminant becomes more challenging to compute analytically as well as numerically.
The region where the purely imaginary spectrum of an \(n\)-th order equation has an algebraic multiplicity \(n\) is bounded in \(\mathbb{R}^{n-1}\) with \(n\) cusps, at which the characteristic polynomial of the monodromy matrix becomes \((\mu-e^{2\pi ik/n})^{n}\), \(k=0,1,\ldots,n-1\). On the other hand, the regions corresponding to multiplicities \(<n\) are unbounded in \(\mathbb{R}^{n-1}\).
For second-order self-adjoint operators, such as (1.11), the derivative of the Floquet discriminant determines whether the \(L^{2}(\mathbb{R})\) essential spectrum bifurcates away from the imaginary axis. For third-order equations that exhibit generalized Hamiltonian symmetry, the Floquet discriminant and its derivative can likewise be used for a necessary condition for such bifurcations. Furthermore, under some additional assumptions, the necessary condition can also serve as a sufficient condition.
**Theorem 4** (Spectrum away from the imaginary axis for third-order equations).: _Suppose that \(\mathbf{A}(x,\lambda)\) is a \(3\times 3\) matrix-valued function, periodic in \(x\), satisfying (A1) and (A2). Suppose that \(\mathrm{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\). Let \(\mathbf{f}(\lambda)\), \(\lambda\in i\mathbb{R}\), denote the Floquet discriminant. If the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis in a transversal manner, alongside the spectrum on the axis, then_
\[\Phi_{3}(\lambda):=\mathbf{f}^{\prime}(\lambda)^{3}+\mathbf{f}^{\prime}(-\lambda)^{3}+ \mathbf{f}(\lambda)\mathbf{f}^{\prime}(-\lambda)^{2}\mathbf{f}^{\prime}(\lambda)+\mathbf{f}(- \lambda)\mathbf{f}^{\prime}(\lambda)^{2}\mathbf{f}^{\prime}(-\lambda)=0. \tag{1.21}\]
_Here and elsewhere, the prime means ordinary differentiation. Conversely, if \(\Phi_{3}(\lambda)=0\) and if_
\[\Delta_{3}(\lambda)\neq 0\quad\text{and}\quad\mathbf{f}^{\prime}(-\lambda)\mathbf{f}^{ \prime\prime}(\lambda)\neq-\mathbf{f}^{\prime}(\lambda)\mathbf{f}^{\prime\prime}(- \lambda), \tag{1.22}\]
_where \(\Delta_{3}\) is in (1.13), then the spectrum of (1.1) bifurcates at \(\lambda\) away from the imaginary axis. Moreover, the spectrum exhibits the normal form_
\[\Delta\lambda=\alpha\sqrt{\Delta\mu}+o(\sqrt{\Delta\mu})\quad\text{for }| \Delta\lambda|,|\Delta\mu|\ll 1.\]
Note that \(\Phi_{3}(\lambda)\) is real for \(\lambda\in i\mathbb{R}\) by (1.7).
Proof.: Recall from the proof of Theorem 2 that the characteristic polynomial of the monodromy matrix of (1.2) is
\[p(\mu,\lambda)=-\mu^{3}+\mathbf{f}(\lambda)\mu^{2}-\mathbf{f}(-\lambda)\mu+1.\]
Suppose that \(p(\mu_{0},\lambda_{0})=0\) for some \(\mu_{0}\in\mathbb{C}\), \(|\mu_{0}|=1\), for some \(\lambda_{0}\in i\mathbb{R}\). It follows from the implicit function theorem that we can solve \(p(\mu,\lambda)=0\) uniquely for \(\lambda\) as a function of \(\mu\) in a neighborhood of \(\mu_{0}\) and \(\lambda_{0}\), provided that
\[\mathbf{f}^{\prime}(\lambda_{0})\mu_{0}^{2}+\mathbf{f}^{\prime}(-\lambda_{0})\mu_{0}=p _{\lambda}(\mu_{0},\lambda_{0})\neq 0,\]
whence \(\mathbf{f}^{\prime}(\lambda_{0})\mu_{0}+\mathbf{f}^{\prime}(-\lambda_{0})\neq 0\) because \(p(0,\lambda)=1\) for all \(\lambda\in\mathbb{C}\). Consequently, if the implicit function theorem fails for \(p(\mu,\lambda)=0\) near \(\mu_{0}\) and \(\lambda_{0}\) then
\[\mathbf{f}^{\prime}(\lambda_{0})\mu_{0}+\mathbf{f}^{\prime}(-\lambda_{0})=0, \tag{1.23}\]
implying
\[\mu_{0}=-\frac{\mathbf{f}^{\prime}(-\lambda_{0})}{\mathbf{f}^{\prime}(\lambda_{0})}\quad \text{or}\quad\mathbf{f}^{\prime}(\lambda_{0})=0.\]
Note that \(-\frac{\mathbf{f}^{\prime}(-\lambda_{0})}{\mathbf{f}^{\prime}(\lambda_{0})}\) for \(\lambda_{0}\in i\mathbb{R}\) necessarily lies on the unit circle although it does not have to be the case for \(\lambda_{0}\notin i\mathbb{R}\). Moreover, if \(\mathbf{f}^{\prime}(\lambda_{0})=0\) for \(\lambda_{0}\in i\mathbb{R}\) then \(\mathbf{f}^{\prime}(-\lambda_{0})=0\). Therefore, if the implicit function theorem fails for \(p(\mu,\lambda)=0\) near \(\mu_{0}\) and \(\lambda_{0}\) then
\[p\left(-\frac{\mathbf{f}^{\prime}(-\lambda_{0})}{\mathbf{f}^{\prime}( \lambda_{0})},\lambda_{0}\right)\] \[\qquad=\frac{1}{\mathbf{f}^{\prime}(\lambda_{0})^{3}}(\mathbf{f}^{\prime }(-\lambda_{0})^{3}+\mathbf{f}(\lambda_{0})\mathbf{f}^{\prime}(-\lambda_{0})^{2}\mathbf{f} ^{\prime}(\lambda_{0})+\mathbf{f}(-\lambda_{0})\mathbf{f}^{\prime}(-\lambda_{0})\mathbf{f }^{\prime}(\lambda_{0})^{2}+\mathbf{f}^{\prime}(\lambda_{0})^{3})=0,\]
which gives rise to \(\Phi_{3}\), defined in (1.21). Alternatively, we can calculate
\[\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))=\mathbf{f}^{ \prime}(\lambda)^{3}+\mathbf{f}^{\prime}(-\lambda)^{3}+\mathbf{f}(\lambda)\mathbf{f}^{ \prime}(-\lambda)^{2}\mathbf{f}^{\prime}(\lambda)+\mathbf{f}(-\lambda)\mathbf{f}^{\prime} (\lambda)^{2}\mathbf{f}^{\prime}(-\lambda).\]
Recall that the resultant of two polynomials is zero if and only if they share a common root. Therefore, if the implicit function theorem fails for \(p(\mu,\lambda)=0\) near \(\mu_{0}\) and \(\lambda_{0}\) then \(\operatorname{res}_{\mu}(p(\mu_{0},\lambda_{0}),p_{\lambda}(\mu_{0},\lambda_{ 0}))=0\).
Conversely, suppose that \(\Phi_{3}(\lambda_{0})=0\) for some \(\lambda_{0}\in i\mathbb{R}\), implying that
\[p(\mu_{0},\lambda_{0})=p_{\lambda}(\mu_{0},\lambda_{0})=0\quad\text{for some }\mu_{0}\in\mathbb{C},\,|\mu_{0}|=1,\text{ satisfying (1.23)}.\]
Suppose that (1.22) holds true. The former holds true if and only if \(p(\mu,\lambda_{0})=0\) has three distinct roots, whence \(p_{\mu}(\mu_{0},\lambda_{0})\neq 0\). The latter, on the other hand, can be written as
\[\mu_{0}\mathbf{f}^{\prime\prime}(\lambda_{0})\neq\mathbf{f}^{\prime\prime}(-\lambda_{ 0}),\]
by (1.23), whence \(p_{\lambda\lambda}(\mu_{0},\lambda_{0})\neq 0\). We wish to solve
\[0 =p(\mu,\lambda)\] \[=p(\mu_{0},\lambda_{0})+p_{\lambda}(\mu_{0},\lambda_{0})\Delta \lambda+p_{\mu}(\mu_{0},\lambda_{0})\Delta\mu+\tfrac{1}{2}p_{\lambda\lambda}( \mu_{0},\lambda_{0})(\Delta\lambda)^{2}+o((\Delta\lambda)^{2},\Delta\mu)\] \[=\tfrac{1}{2}p_{\lambda\lambda}(\mu_{0},\lambda_{0})(\Delta \lambda)^{2}+p_{\mu}(\mu_{0},\lambda_{0})\Delta\mu+o((\Delta\lambda)^{2}, \Delta\mu),\]
where we assume that \(|\Delta\mu|\) and \(|\Delta\lambda|\) are sufficiently small. It follows from the Weierstrass preparation theorem that
\[\Delta\lambda=\pm i\sqrt{\frac{2p_{\mu}(\mu_{0},\lambda_{0})}{p_{\lambda \lambda}(\mu_{0},\lambda_{0})}\Delta\mu}+o(\sqrt{|\Delta\mu|})\]
for \(|\Delta\mu|\ll 1\). Since \(\mu\) must lie on the unit circle, we can take \(\Delta\mu=i\mu_{0}r\), where \(r\in\mathbb{R}\). Since \(p_{\lambda\lambda}(\mu_{0},\lambda_{0})\neq 0\) and since there must be a spectral curve along the imaginary axis in the vicinity of \(\mu_{0}\) and \(\lambda_{0}\), \(\frac{2p_{\mu}(\mu_{0},\lambda_{0})}{p_{\lambda\lambda}(\mu_{0},\lambda_{0})} i\mu_{0}r\) must be real. As \(r\) varies over positive real values, \(\Delta\lambda\) varies over imaginary values, and as \(r\) varies over negative real values, \(\Delta\lambda\) varies over real values, or vice versa, depending on the sign of \(\frac{p_{\mu}(\mu_{0},\lambda_{0})}{p_{\lambda\lambda}(\mu_{0},\lambda_{0})}i\mu _{0}\). In either case, two spectral curves emerge near \(\mu_{0}\) and \(\lambda_{0}\): one along the imaginary axis and another parallel to the real axis. This completes the proof.
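In computations the Floquet discriminant is typically available only on a grid \(\lambda=is\); a minimal sketch of how (1.21) can then be evaluated, approximating \(\boldsymbol{f}^{\prime}\) by central differences (one could instead solve the variational equation), and using \(\boldsymbol{f}(-\lambda)=\overline{\boldsymbol{f}(\lambda)}\) and \(\boldsymbol{f}^{\prime}(-\lambda)=\overline{\boldsymbol{f}^{\prime}(\lambda)}\) on \(i\mathbb{R}\):

```python
import numpy as np

def bifurcation_index(s, F):
    """Phi_3 of (1.21) on a grid lambda = i s, from samples F[j] = f(i s[j]).
    Since F(s) = f(is), the chain rule gives f'(is) = -i dF/ds."""
    fp = -1j * np.gradient(F, s)                 # f'(lambda) on the grid
    fm, fpm = np.conj(F), np.conj(fp)            # f(-lambda), f'(-lambda)
    phi = fp**3 + fpm**3 + F * fpm**2 * fp + fm * fp**2 * fpm
    return phi.real   # Phi_3 is real on iR; its sign changes flag candidates
```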
**Remark**.: Let \(\mu=e^{i\theta}\), where \(\theta\in\mathbb{R}\). Treating \(\theta\) as a function of \(\lambda\in i\mathbb{R}\), we calculate
\[\frac{d\theta}{d\lambda}=-\frac{1}{i\mu}\frac{p_{\lambda}(\mu,\lambda)}{p_{\mu }(\mu,\lambda)}=-\frac{\mathbf{f}^{\prime}(\lambda)e^{2i\theta}+\mathbf{f}^{\prime}(- \lambda)e^{i\theta}}{ie^{i\theta}(-3e^{2i\theta}+2\mathbf{f}(\lambda)e^{i\theta}- \mathbf{f}(-\lambda))}.\]
We observe that \(p_{\lambda}(\mu,\lambda)=\mathbf{f}^{\prime}(\lambda)\mu^{2}+\mathbf{f}^{\prime}(-\lambda)\mu\) vanishes if and only if \(\Phi_{3}(\lambda)=0\). Similarly, \(i\mu p_{\mu}(\mu,\lambda)=-3i\mu^{3}+2i\mathbf{f}(\lambda)\mu^{2}-i\mathbf{f}(-\lambda)\mu\) vanishes if and only if \(\Delta_{3}(\lambda)=0\). Consequently, the bifurcation of the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) away from the imaginary axis becomes possible when one of the eigenvalues of the monodromy matrix reverses its direction and starts moving along the unit circle in the opposite direction, as defined by the Krein signature [28, 30].
For higher-order equations with generalized Hamiltonian symmetry, a necessary condition for the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) to bifurcate away from the imaginary axis is (1.12).
**Theorem 5** (Spectrum away from the imaginary axis for higher-order equations).: _Suppose that \(\mathbf{A}(x,\lambda)\) is an \(n\times n\) matrix-valued function, periodic in \(x\), satisfying (A1) and (A2). Let \(p(\mu,\lambda)\) denote the characteristic polynomial of the monodromy matrix of (1.2). If the \(L^{2}(\mathbb{R})\) essential spectrum of (1.1) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis in a transversal manner, alongside the spectrum on the axis, then_
\[\varPhi_{n}(\lambda):=\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))=0. \tag{1.24}\]
**Remark**.: Under some assumptions, (1.24) implies that the mapping \(\mu\mapsto\lambda(\mu)\) is locally not invertible. However, unlike the case with third-order equations, this is no longer guaranteed for an eigenvalue of the monodromy matrix on the unit circle. We present numerical examples that illustrate the scenario in which the bifurcation index becomes zero, yet no bifurcation of the spectrum takes place away from the imaginary axis.
If the eigenfunctions of (1.1) exhibit an asymptotic behavior consistent with their WKB approximations at leading order as \(\lambda\) varies along the imaginary axis towards \(\pm i\infty\), and if these approximations coincide with the eigenfunctions of the limiting eigenvalue problem, with constant coefficients, then it becomes possible to determine the algebraic multiplicity of the eigenvalues \(\lambda\in i\mathbb{R}\) for \(|\lambda|\gg 1\). Additionally, Theorem 5 can be applied to establish that, for some classes of quasi-periodic eigenvalue problems for Hamiltonian PDEs, the \(L^{2}(\mathbb{R})\) essential spectrum remains along the imaginary axis outside a bounded region of the complex plane. This proves advantageous for numerical computations. It is worth noting that the first two authors and their collaborator [2] present an alternative method for bounding unstable spectra \(\notin i\mathbb{R}\), by utilizing an argument based on the Gershgorin circle theorem.
**Theorem 6** (Spectrum away from the imaginary axis towards \(\pm i\infty\)).: _Consider_
\[\lambda v=(\partial_{x}^{n}+a_{1}(x)\partial_{x}^{n-1}+\dots+a_{n}(x))v,\qquad \lambda\in\mathbb{C}, \tag{1.25}\]
_where \(a_{k}(x)\), \(k=1,2,\dots,n\), are real-valued and smooth functions, satisfying \(a_{k}(x+T)=a_{k}(x)\) for some \(T>0\), the period. For \(n\) odd, there exist only a finite number of zeros of \(\varPhi_{n}(\lambda)\) for \(\lambda\in i\mathbb{R}\), defined in (1.24), where \(p(\mu,\lambda)\) denotes the characteristic polynomial of the monodromy matrix associated with (1.25). Consequently, there are only a finite number of points along the imaginary axis where the \(L^{2}(\mathbb{R})\) essential spectrum of (1.25) bifurcates away from the axis in a transversal manner._
Proof.: Utilizing a WKB approximation to (1.25) (see, for instance, [44, Chapter 7] for details), we establish that the fundamental solutions of (1.25) satisfy
\[v_{k}(x,\lambda)\sim e^{\lambda^{1/n}\omega_{k}x},\qquad k=1,2,\dots,n,\]
for \(\lambda\in i\mathbb{R}\) as \(|\lambda|\to\infty\), where \(\omega_{k}^{n}=1\), that is, \(\omega_{k}\) represent the \(n\)-th roots of unity. Introducing \(\theta_{k}(\lambda)=e^{\lambda^{1/n}\omega_{k}T}\), \(k=1,2,\dots,n\), we can approximate the characteristic polynomial of the monodromy matrix associated with (1.25) as
\[p(\mu,\lambda)\sim\prod_{k=1}^{n}(\theta_{k}(\lambda)-\mu)=\sum_{k=0}^{n}(-1) ^{n-k}e_{k}(\theta_{1},\theta_{2},\dots,\theta_{n})(\lambda)\mu^{n-k}\]
for \(\lambda\in i\mathbb{R}\) as \(|\lambda|\to\infty\), where \(e_{k}(\theta_{1},\theta_{2},\dots,\theta_{n})(\lambda)\) denotes the \(k\)-th elementary symmetric polynomial in \(\theta_{1},\theta_{2},\dots,\theta_{n}\). Recall that the resultant of two polynomials, \(p_{1}\) and \(p_{2}\), can be determined as the product of \(p_{2}\) evaluated at the roots of \(p_{1}\)[15], and we can calculate
\[\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda)) \sim(-1)^{n(n-1)/2}\operatorname{disc}_{\mu}p(\mu,\lambda)( \theta_{1}\theta_{2}\cdots\theta_{n}\theta_{1}^{\prime}\theta_{2}^{\prime} \cdots\theta_{n}^{\prime})(\lambda)\] \[=(-1)^{n(n-1)/2+n+1}n^{-n}T^{n}\operatorname{disc}_{\mu}p(\mu, \lambda)\lambda^{1-n} \tag{1.26}\]
for \(\lambda\in i\mathbb{R}\) as \(|\lambda|\to\infty\), where the prime means ordinary differentiation. Here the equality uses \(\sum\omega_{k}=0\) and \(\prod\omega_{k}=(-1)^{n+1}\).
We can explicitly calculate the resultant for the limiting eigenvalue problem of (1.25) for \(\lambda\in i\mathbb{R}\) as \(|\lambda|\to\infty\) and, hence, for (1.25) itself at the leading order in \(\lambda\). Referring to (1.24), we can determine the bifurcation index for the \(L^{2}(\mathbb{R})\) essential spectrum of (1.25) away from the imaginary axis for \(\lambda\in i\mathbb{R}\) as \(|\lambda|\to\infty\). Since the characteristic polynomial for the limiting problem \(\lambda v=\partial_{x}^{n}v\), for \(n\) odd, will have \(n\) distinct roots, the discriminant does not vanish, and neither does the resultant. Therefore, any bifurcations of the spectrum of (1.25) away from the imaginary axis will ultimately terminate along the imaginary axis towards \(\pm i\infty\). This completes the proof.
**Remark**.: Let \(\lambda=i\nu^{n}\), where \(\nu\gg 1\), and we can calculate
\[\Phi_{n}(\lambda) \sim\prod_{1\leqslant j<k\leqslant n}(e^{\lambda^{1/n}\omega_{j}T}-e^{\lambda^{1/n}\omega_{k}T})^{2}\] \[=(2i)^{n(n-1)}\prod_{1\leqslant j<k\leqslant n}\sin^{2}\Big{(}\nu Te^{\frac{(1+2(j+k))\pi i}{2n}}\sin\Bigl{(}\tfrac{j-k}{n}\pi\Bigr{)}\Big{)}, \tag{1.27}\]
which is real for \(n\) odd. Indeed, we observe that all arguments of the sine functions either result in purely imaginary values or occur in pairs. When these pairs are multiplied together, they yield real values because they consist of complex numbers symmetric with respect to reflections across either the real or imaginary axis. Utilizing the identity
\[\sin(a+ib)\sin(a-ib)=\tfrac{1}{2}(\cosh(2b)-\cos(2a)),\]
we can demonstrate that (1.27) is real for \(n\) odd. This also follows from the symmetry of the characteristic polynomial as well as Theorem 6.
For example, when considering the generalized KdV equation (for \(n=3\)) and the Kawahara equation (for \(n=5\)), Theorem 6 implies that there are only a finite number of points along the imaginary axis where the \(L^{2}(\mathbb{R})\) essential spectrum for the stability problem bifurcates away from the axis. In the case of even \(n\), it is possible to derive a leading-order approximation of such bifurcations at \(\lambda\in i\mathbb{R}\) and \(|\lambda|\gg 1\), provided that the limiting eigenvalue problem itself is Hamiltonian. Additionally, (1.26) and (1.27) offer a method to calculate the bifurcation index up to leading order in \(\lambda\) as \(\lambda\to\pm i\infty\) for some lower-order eigenvalue problems. This can also be used to calculate the discriminant of the characteristic polynomial. However, it is important to note that these formulae can become quite intricate, and our approach has been to evaluate each equation on a case-by-case basis. Sections 2.5, 3.5 and 4.2 provide more details.
### Numerical method
In what follows we present a number of numerical experiments to illustrate these results. The numerics presented are done using two different techniques. The spectra of the linear operators are computed using the Fourier-Floquet-Hill method [11], a type of spectral method. The potentials are expanded in a Fourier series, and the operators expressed as operators on the sequence space \(\ell_{2}\). These operators are then truncated, resulting in an \(N\times N\) matrix eigenvalue problem, which we then solve across the entire range of the Floquet exponent to obtain the complete spectrum. Typically \(N=31\), encompassing wave numbers spanning from \(-15\) to \(15\).
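A minimal sketch of the idea, applied for concreteness to the third-order Mathieu-type example (2.4) treated in Section 2.2, whose potential \(4+5\cos(x)\) has explicit Fourier coefficients; the truncation size and the sampling of the Floquet exponent are choices made for illustration.

```python
import numpy as np

N = 15                               # wave numbers -N..N (31 modes, as in the text)
Qhat = {0: 4.0, 1: 2.5, -1: 2.5}     # Fourier coefficients of 4 + 5 cos(x)

def hill_matrix(k):
    """Truncated bi-infinite matrix for w -> w_xxx + Q(x) w_x acting on
    e^{ikx} sum_m a_m e^{imx}, with Floquet exponent k in [-1/2, 1/2)."""
    m = np.arange(-N, N + 1)
    L = np.diag((1j * (k + m)) ** 3)
    for shift, c in Qhat.items():
        # entry (m, j) = Qhat_{m-j} * i(k+j): convolution with Q after d/dx
        L = L + c * np.eye(2 * N + 1, k=-shift) * (1j * (k + m))[None, :]
    return L

spec = np.concatenate([np.linalg.eigvals(hill_matrix(k))
                       for k in np.linspace(-0.5, 0.5, 401, endpoint=False)])
```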
For the majority of examples, the periodic potential function can be conveniently represented using elementary functions or Jacobi elliptic functions, and their Fourier coefficients can be calculated using well-established analytic formulae (see, for instance, [29] for more details). However, in one instance involving the generalized KdV equation, a traveling wave solution cannot be expressed in terms of elliptic functions, and the Fourier coefficients are computed numerically. This involves solving the ODE governing periodic traveling waves, followed by numerical integration.
To numerically compute the Floquet discriminant, which enables us to determine the algebraic multiplicity of the spectrum along the imaginary axis, as well as the bifurcation index for the spectrum away from the axis, we solve the ODEs corresponding to the linearized operator.
Throughout the course of the numerical experiments, the spectrum is visualized using blue curves, while magenta lines parallel to the imaginary axis indicate intervals of maximal algebraic multiplicities. Dashed red curves represent the bifurcation index. In some instances, the axes may be interchanged to better highlight bifurcation points because the bifurcation index remains real along the imaginary axis.
## 2 Third order equations
### The generalized KdV equation
We begin our discussion by taking \(n=3\) and the spectral problem for the generalized KdV equation
\[\lambda v=v_{xxx}+(Q(x)v)_{x},\qquad\lambda\in\mathbb{C}, \tag{2.1}\]
where \(Q(x)\) is a real-valued function satisfying \(Q(x+T)=Q(x)\) for some \(T>0\), the period. We do not impose additional assumptions such as evenness. Clearly, (2.1) can be written in the form of (1.1), where
\[\boldsymbol{\mathcal{J}}=\partial_{x}\quad\text{and}\quad\boldsymbol{\mathcal{ L}}=\partial_{xx}+Q(x),\]
and the \(L^{2}(\mathbb{R})\) essential spectrum of (2.1) remains invariant under the transformations
\[\lambda\mapsto-\lambda\quad\text{and}\quad\lambda\mapsto\overline{\lambda}.\]
We notice that the nonzero eigenvalues of (2.1) coincide with the nonzero eigenvalues of
\[\lambda w=w_{xxx}+Q(x)w_{x}. \tag{2.2}\]
Indeed, if \(\lambda\neq 0\) is an eigenvalue of (2.1) then the corresponding eigenfunction \(v\) must have a zero mean over the period. Let \(w_{x}=v\), where \(w\) is periodic, and we can show that \(w\) satisfies (2.2). More generally, two operators \(\boldsymbol{\mathcal{L}}_{1}\boldsymbol{\mathcal{L}}_{2}\) and \(\boldsymbol{\mathcal{L}}_{2}\boldsymbol{\mathcal{L}}_{1}\) share common nonzero eigenvalues. Additionally, (2.2) is the negative adjoint of (2.1), which gives half of Hamiltonian symmetry. Particularly, the eigenvalues of (2.1) and, hence, (2.2) are invariant with respect to reflection across the imaginary axis.
We can reformulate (2.2) as
\[\mathbf{w}_{x}=\begin{pmatrix}0&1&0\\ 0&0&1\\ \lambda&-Q(x)&0\end{pmatrix}\mathbf{w}=:\mathbf{A}(x,\lambda)\mathbf{w}, \tag{2.3}\]
and we verify that (A1) and (A2) hold for \(\mathbf{B}(\lambda)=\begin{pmatrix}-\lambda&0&0\\ 0&0&-1\\ 0&1&0\end{pmatrix}\). Additionally, we verify that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and \(\lambda\in\mathbb{C}\). We define the monodromy matrix of (2.3) as
\[\mathbf{M}(\lambda)=\mathbf{W}(T,\lambda),\quad\text{where}\quad\mathbf{W}_{x} =\mathbf{A}(x,\lambda)\mathbf{W}\quad\text{and}\quad\mathbf{W}(0,\lambda)= \mathbf{I}_{3},\]
where \(\mathbf{I}_{3}\) is the \(3\times 3\) identity matrix. We define the characteristic polynomial of the monodromy matrix of (2.3) as
\[p(\mu,\lambda)=\det(\mathbf{M}(\lambda)-\mu\mathbf{I}_{3}).\]
The monodromy matrix and the characteristic polynomial inherit symmetry from (2.3). Particularly,
\[p(\mu,\lambda)=-\mu^{3}+\boldsymbol{f}(\lambda)\mu^{2}-\boldsymbol{f}(-\lambda )\mu+1\quad\text{for $\lambda\in\mathbb{C}$},\]
where \(\mathbf{f}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\), and
\[p(\mu,\lambda)=-\mu^{3}+\mathbf{f}(\lambda)\mu^{2}-\overline{\mathbf{f}(\lambda)}\mu+1 \quad\text{for }\lambda\in i\mathbb{R}.\]
We remark that \(\mathbf{f}(-\lambda)=\mathbf{f}(\overline{\lambda})=\overline{\mathbf{f}(\lambda)}\) for \(\lambda\in i\mathbb{R}\).
Our interest lies in the \(L^{2}(\mathbb{R})\) essential spectrum of (2.2) along the imaginary axis, which holds significant importance when investigating the stability and instability of periodic traveling waves of the generalized KdV equation and related equations. Theorem 2 establishes that the monodromy matrix of (2.3) at \(\lambda\in i\mathbb{R}\) has three simple eigenvalues on the unit circle, that is, \(\lambda\) belongs to the spectrum with an algebraic multiplicity three, if and only if the Floquet discriminant \(\mathbf{f}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\) lies within the deltoidal region bounded by the deltoid curve in (1.14). Furthermore, Theorem 4 demonstrates that if the spectrum of (2.2) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis in a transversal manner then the bifurcation index \(\Phi_{3}(\lambda)\), defined in (1.21), becomes zero. We will proceed with numerical experiments to validate these analytical results.
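Alongside Hill's method, the Floquet discriminant itself can be computed by direct integration of (2.3). A sketch of this route, with tolerances and sample values of \(\lambda\) chosen for illustration, using the Mathieu-type potential of (2.4) below:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
Q = lambda x: 4 + 5 * np.cos(x)          # the potential appearing in (2.4)

def floquet_discriminant(lam):
    """tr M(lambda) for (2.3): integrate W' = A(x, lam) W with W(0) = I_3."""
    def rhs(x, w):
        A = np.array([[0, 1, 0], [0, 0, 1], [lam, -Q(x), 0]], dtype=complex)
        return (A @ w.reshape(3, 3)).ravel()
    sol = solve_ivp(rhs, [0, T], np.eye(3, dtype=complex).ravel(),
                    rtol=1e-10, atol=1e-12)
    return np.trace(sol.y[:, -1].reshape(3, 3))

def delta3(f):                            # the quantity (1.13)
    return (abs(f)**4 - 4*f**3 - 4*np.conj(f)**3 + 18*abs(f)**2 - 27).real

# Theorem 2: multiplicity three iff f(lambda) lies inside the deltoid
for s in [0.2, 1.0, 6.018]:
    f = floquet_discriminant(1j * s)
    print(s, "multiplicity", 3 if delta3(f) < 0 else 1)
```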
### Numerical experiments for equations of KdV type
We begin our numerical experiments with the Mathieu equation
\[\lambda w=w_{xxx}+(4+5\cos(x))w_{x},\qquad\lambda\in\mathbb{C}. \tag{2.4}\]
Although it may not seem directly linked to a stability problem for a periodic traveling wave of the generalized KdV equation, as far as we are aware, (2.4) does represent a spectral problem that exhibits the required symmetry.
Figure 2 depicts the deltoidal region in \(\mathbb{R}^{2}(=\mathbb{C})\), enclosed by the deltoid curve defined in (1.14), together with the trajectory of the Floquet discriminant \(\mathbf{f}(\lambda)\) for (2.4) as \(\lambda\) varies over the imaginary axis. Our numerical observation reveals that for \(\lambda\in i\mathbb{R}\) in the interval approximately \((-0.4462i,0.4462i)\,\cup\,\pm(6.0153i,6.022i)\,\cup\,\pm(6.0451i,6.0509i)\), the algebraic multiplicity is three. For other values of \(\lambda\in i\mathbb{R}\), the multiplicity appears to be one.
Figure 2: The deltoidal region (blue) and the trajectory of the Floquet discriminant for (2.4) (black).

Moving on to Figure 3, the blue curves are the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (2.4). Additionally, the magenta lines, running parallel to the imaginary axis, indicate intervals where \(\mathbf{f}(\lambda)\), \(\lambda\in i\mathbb{R}\), resides within the deltoidal region and, consequently, \(\lambda\) is in the spectrum with an algebraic multiplicity three. The dashed red curves, representing the graph of the bifurcation index \(\Phi_{3}(\lambda)\), intersect the imaginary axis, identifying points where bifurcations of the spectrum away from the imaginary axis are anticipated. Note that these bifurcations manifest in regions of multiplicity three (on the left) or regions of multiplicity one (on the right).
We turn our attention to the generalized KdV equation
\[u_{t}+u_{xxx}+(u^{k})_{x}=0,\qquad k\geqslant 2\ \text{an integer}. \tag{2.5}\]
For \(k=2\) (the KdV equation) and \(k=3\) (the mKdV equation), (2.5) is integrable via the inverse scattering transform, which offers a rigorous method for determining the stability and instability of periodic traveling waves. There has, of course, been a great deal of work aimed at understanding the stability of generalized KdV traveling waves, both solitary waves and periodic wavetrains: see for instance [10, 18, 33, 36, 37, 40].
A traveling wave solution of (2.5) takes the form \(u(x,t)=\phi(x-ct)\) for some nonzero \(c\in\mathbb{R}\), the wave speed, and it satisfies by quadrature
\[\phi_{x}^{2}=2E+2a\phi+c\phi^{2}-\frac{2}{k+1}\phi^{k+1}=:G(\phi;c,a,E),\]
where \(a\) and \(E\) are real constants. Here we assume that \(\phi\) is periodic. Note that \(G(\phi;c,a,E)\) is a polynomial in \(\phi\); the quadrature is elliptic for \(k=2\) and \(3\), while it becomes hyperelliptic in general for \(k\geqslant 4\), an integer.
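For \(k\geqslant 4\) the profile is no longer expressible via elliptic functions and must be computed numerically, as described in Section 1.4. A sketch of one way to do this for the parameters quoted later for Figure 6 (\(k=4\), \(c=1\), \(a=2\), \(E=0\)); the root brackets and tolerances are choices made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k, c, a, E = 4, 1.0, 2.0, 0.0
G  = lambda p: 2*E + 2*a*p + c*p**2 - 2.0/(k + 1) * p**(k + 1)
Gp = lambda p: 2*a + 2*c*p - 2*p**k          # G'(phi)

# turning points: adjacent simple roots of G bracketing the oscillation
p_lo = brentq(G, -0.5, 0.5)                  # brackets found by inspecting G
p_hi = brentq(G,  1.5, 2.5)

# phi_x^2 = G(phi) implies phi_xx = G'(phi)/2; shoot from the lower turning point
rhs = lambda x, y: [y[1], 0.5 * Gp(y[0])]
hit = lambda x, y: y[1]                      # phi_x vanishes at the turning points
hit.terminal, hit.direction = True, -1
sol = solve_ivp(rhs, [0, 100], [p_lo, 0.0], events=hit, max_step=1e-2)
T = 2 * sol.t_events[0][0]                   # period of the traveling wave
```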
Linearizing (2.5) about a periodic traveling wave \(\phi(x;c,a,E)\) in the frame of reference moving at the speed \(c\), we arrive at
\[\lambda v-cv_{x}+v_{xxx}+(k\phi^{k-1}v)_{x}=0,\qquad\lambda\in\mathbb{C}. \tag{2.6}\]
At \(\lambda=0\), it is possible to explicitly calculate the associated monodromy matrix and the characteristic polynomial, and the following hold true:
* The Floquet discriminant \(\mathbf{f}(0)=\mathrm{tr}(\mathbf{M}(0))=3\), where \(\mathbf{M}(\lambda)\) denotes the monodromy matrix. Specifically, \(1\) is an eigenvalue of \(\mathbf{M}(0)\) with an algebraic multiplicity three and a geometric multiplicity two.
* The discriminant of the characteristic polynomial \(\Delta_{3}(0)=0\), where \(\Delta_{3}(\lambda)\) is defined in (1.13).
* The bifurcation index \(\Phi_{3}(0)=0\), where \(\Phi_{3}(\lambda)\) is in (1.21).
Figure 3: The blue curves represent the numerically computed spectrum of (2.4) for \(\lambda\in(-0.5i,0.5i)\) (left) and \((5.95i,6.12i)\) (right). The magenta lines indicate intervals of algebraic multiplicity three, and the dashed red curves intersect the imaginary axis where the bifurcation index is zero.

As a result, Theorem 4 becomes inconclusive and one must conduct a detailed modulation calculation to determine whether the \(L^{2}(\mathbb{R})\) essential spectrum of (2.6) bifurcates at \(0\in\mathbb{C}\) away from the imaginary axis. Interested readers can refer to [19, 3, 4, 23, 1], among many others, for further elaboration. Our emphasis here is on bifurcations of the spectrum at nonzero \(\lambda\in i\mathbb{R}\).
For example, Figure 4 depicts the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (2.6) for \(k=3\)--namely, the linearized mKdV equation--about a periodic traveling wave with parameters \(c=1\), \(a=\frac{1}{4}\), and \(E=0\). Recall [4] that if \(G(\phi;c,a,E)=2E+2a\phi+c\phi^{2}-\frac{1}{2}\phi^{4}\) has four real roots then the corresponding periodic traveling wave is modulationally stable and, correspondingly, the spectrum lies along the imaginary axis with an algebraic multiplicity three near \(0\in\mathbb{C}\). We verify that \(G(\phi;1,\frac{1}{4},0)\) indeed has four real roots. Our numerical findings corroborate this, revealing that for \(\lambda\in i\mathbb{R}\) in the interval \((-\lambda_{0}i,\lambda_{0}i)\), where \(\lambda_{0}\) is approximately \(0.354\), the algebraic multiplicity is three. In the remaining intervals of the imaginary axis, the multiplicity appears to be one. Additionally, apart from the zero at \(0\in i\mathbb{R}\), no other zeros of the bifurcation index are detected. We numerically observe no bifurcations of the spectrum near \(0\in\mathbb{C}\) away from the imaginary axis, implying the spectral stability of the underlying periodic traveling wave. Indeed, all numerically computed eigenvalues have a maximal real part \(<8\times 10^{-10}\), well within the numerical accuracy.
Figure 5 shows the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (2.6) for \(k=3\) about a periodic traveling wave for \(c=1\), \(a=\frac{1}{5}\), and \(E=\frac{2}{5}\). A direct calculation reveals that \(G(\phi;1,\frac{1}{5},\frac{2}{5})\) has two real roots and two complex roots, whereby it follows from [4] that the underlying periodic traveling wave is modulationally unstable. Indeed, we numerically observe that a spectral curve emerges from \(0\in\mathbb{C}\) along the imaginary axis, accompanied by two additional spectral curves emerging in a non-tangential manner, consistent with underlying Hamiltonian symmetry. Additionally, we numerically observe that the algebraic multiplicity is one for the entire spectrum. The dashed red curves represent the graph of the bifurcation index rotated by \(90^{\circ}\). The intersections of these curves with the imaginary axis identify the points where bifurcations of the spectrum away from the imaginary axis are anticipated. The agreement between the numerical findings and analytical predictions is strong.

Figure 4: The numerically computed spectrum of (2.6) for \(k=3\) (the mKdV equation) with \(c=1\), \(a=\frac{1}{4}\), and \(E=0\), suggesting spectral stability. The spectrum has an algebraic multiplicity three in the interval \(\approx(-0.354i,0.354i)\), while the multiplicity is one elsewhere along \(i\mathbb{R}\). The only zero of the bifurcation index is at \(0\), where the spectrum does not seem to bifurcate away from the imaginary axis.

Figure 5: The numerically computed spectrum of (2.6) for \(k=3\) with \(c=1\), \(a=\frac{1}{5}\), and \(E=\frac{2}{5}\), suggesting spectral instability. The spectrum appears to bifurcate away from the imaginary axis at \(0\in\mathbb{C}\) as well as \(\approx\pm 0.4i\). The dashed red curves represent the graph of the bifurcation index rotated by \(90^{\circ}\), with intersections along the imaginary axis indicating bifurcation points.
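The root counts invoked in the two examples above are quick to check; a minimal numpy sketch:

```python
import numpy as np

def roots_of_G(k, c, a, E):
    """Roots of G(phi; c, a, E) = 2E + 2a*phi + c*phi^2 - (2/(k+1))*phi^(k+1)."""
    coeffs = np.zeros(k + 2)
    coeffs[0] = -2.0 / (k + 1)  # leading phi^(k+1) term
    coeffs[-3] = c              # phi^2
    coeffs[-2] = 2.0 * a        # phi
    coeffs[-1] = 2.0 * E        # constant
    return np.roots(coeffs)

for a, E in [(0.25, 0.0), (0.2, 0.4)]:
    r = roots_of_G(3, 1.0, a, E)
    n_real = int(np.sum(np.abs(r.imag) < 1e-9))
    print(f"a={a}, E={E}: {n_real} real roots:", np.round(r, 4))
# a=1/4, E=0   -> four real roots (modulational stability);
# a=1/5, E=2/5 -> two real and two complex roots (instability).
```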
Lastly, Figure 6 shows the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (2.6) for \(k=4\) about a periodic traveling wave for \(c=1\), \(a=2\), and \(E=0\), suggesting modulational instability. Indeed, our numerical observation reveals that bands of unstable spectrum bifurcate from \(0\in\mathbb{C}\), exhibiting a characteristic "figure eight" curve that is consistent with underlying Hamiltonian symmetry. These bands of unstable spectrum then rejoin the imaginary axis at approximately \(\pm 7.6i\), the zeros of the bifurcation index.
### The generalized BBM equation
Another illustrative example is the generalized BBM equation
\[u_{t}-u_{xxt}+u_{x}+g(u)_{x}=0 \tag{2.7}\]
for some nonlinearity \(g\) or, equivalently,
\[u_{t}-u_{xxt}+cu_{xxx}+(1-c)u_{x}+g(u)_{x}=0 \tag{2.8}\]
in the frame of reference moving at some nonzero \(c\in\mathbb{R}\), the wave speed. A traveling wave solution of (2.7) corresponds to a stationary solution of (2.8), and it satisfies
\[c\phi_{xxx}+(1-c)\phi_{x}+(g(\phi))_{x}=0.\]
Here we assume that \(\phi(x+T)=\phi(x)\) for some \(T>0\), the period.
Linearizing (2.8) about a stationary solution \(\phi(x)\), we arrive at
\[\lambda v-\lambda v_{xx}+cv_{xxx}+(1-c)v_{x}+(g^{\prime}(\phi)v)_{x}=0,\qquad \lambda\in\mathbb{C}, \tag{2.9}\]
where the prime means ordinary differentiation. Introducing
\[v_{2}=-\lambda v_{x}+cv_{xx}+(1-c)v+g^{\prime}(\phi)v\quad\text{and}\quad v_{3}=-\lambda v+cv_{x},\]

we can reformulate (2.9) as

\[\begin{pmatrix}v\\ v_{2}\\ v_{3}\end{pmatrix}_{x}=\begin{pmatrix}\frac{\lambda}{c}&0&\frac{1}{c}\\ -\lambda&0&0\\ -Q(x)&1&0\end{pmatrix}\begin{pmatrix}v\\ v_{2}\\ v_{3}\end{pmatrix}=:\mathbf{A}(x;\lambda)\mathbf{v}, \tag{2.10}\]

where \(Q(x)=g^{\prime}(\phi(x))+1-c\), and we verify that (A1) and (A2) hold for

\[\mathbf{B}(\lambda)=\begin{pmatrix}-\lambda&0&1\\ 0&\frac{1}{\lambda}&0\\ -1&0&0\end{pmatrix}, \tag{2.11}\]

provided that \(\lambda\neq 0\). However, it is important to note that \(\operatorname{tr}(\mathbf{A}(x,\lambda))\neq 0\) for \(\lambda\neq 0\). A spectral analysis for (2.9) follows a similar approach to that in Section 2.1 for the generalized KdV equation. Further details on a modulational calculation can be found in [20] and [19], among others.

Figure 6: The numerically computed spectrum of (2.6) for \(k=4\) with \(c=1\), \(a=2\), and \(E=0\). The spectrum appears to bifurcate away from the imaginary axis at \(0\in\mathbb{C}\) and \(\approx\pm 7.6i\). The dashed red curves represent the graph of the bifurcation index rotated by \(90^{\circ}\), with intersections along the imaginary axis indicating bifurcation points.
Let \(\mathbf{M}(\lambda)\) denote the monodromy matrix of (2.10). The characteristic polynomial of the monodromy matrix (see Johnson [20]) takes the form
\[p(\mu,\lambda)=\det(\mathbf{M}(\lambda)-\mu\mathbf{I}_{3})=-\mu^{3}+\mathbf{f}( \lambda)\mu^{2}-\mathbf{f}(-\lambda)e^{\frac{\lambda T}{c}}\mu+e^{\frac{\lambda T} {c}},\qquad\lambda\in\mathbb{C},\]
where \(\mathbf{f}(\lambda)=\operatorname{tr}(\mathbf{M}(\lambda))\). The discriminant of the characteristic polynomial is no longer real for \(\lambda\in i\mathbb{R}\). Instead,
\[\Delta_{3}(\lambda):= e^{-\frac{2\lambda T}{c}}\operatorname{disc}_{\mu}p(\mu,\lambda)\] \[= \mathbf{f}(\lambda)^{2}\mathbf{f}(-\lambda)^{2}-4\mathbf{f}(\lambda)^{3}e^{- \frac{\lambda T}{c}}-4\mathbf{f}(-\lambda)^{3}e^{\frac{\lambda T}{c}}+18\mathbf{f}( \lambda)\mathbf{f}(-\lambda)-27\]
is real for \(\lambda\in i\mathbb{R}\). We observe that \(\Delta_{3}(\lambda)=0\), \(\lambda\in i\mathbb{R}\), when \(\mathbf{f}(\lambda)\) lies on the deltoid curve, defined in (1.14), rotated through an angle of \(\frac{\lambda T}{3c}\), whereby \(\Delta_{3}\) can be used to determine the algebraic multiplicity of the \(L^{2}(\mathbb{R})\) essential spectrum of (2.9) along the imaginary axis. The resultant of \(p(\mu,\lambda)\) and \(p_{\lambda}(\mu,\lambda)\) is likewise no longer real for \(\lambda\in i\mathbb{R}\). Instead,
\[\varPhi_{3}(\lambda):= e^{-\frac{\lambda T}{c}}\operatorname{res}_{\mu}(p(\mu,\lambda),p_{ \lambda}(\mu,\lambda))\] \[= e^{-\frac{\lambda T}{c}}\mathbf{f}^{\prime}(\lambda)^{3}+e^{\frac{ \lambda T}{c}}\mathbf{f}^{\prime}(-\lambda)^{3}+\mathbf{f}(\lambda)\mathbf{f}^{\prime}(- \lambda)^{2}\mathbf{f}^{\prime}(\lambda)+\mathbf{f}(-\lambda)\mathbf{f}^{\prime}(\lambda) ^{2}\mathbf{f}^{\prime}(-\lambda)\] \[+e^{\frac{\lambda T}{c}}\frac{T^{2}}{c^{2}}(\mathbf{f}(\lambda)^{2} \mathbf{f}^{\prime}(\lambda)+\mathbf{f}(-\lambda)^{2}\mathbf{f}^{\prime}(-\lambda))\] \[-e^{\frac{\lambda T}{c}}\frac{2T}{c}(\mathbf{f}(\lambda)\mathbf{f}^{ \prime}(\lambda)^{2}+\mathbf{f}(-\lambda)\mathbf{f}^{\prime}(-\lambda)^{2})\] \[+\frac{T^{3}}{c^{3}}(\mathbf{f}(\lambda)\mathbf{f}(-\lambda)+1)+\frac{T^ {2}}{c^{2}}(\mathbf{f}(\lambda)\mathbf{f}^{\prime}(-\lambda)+\mathbf{f}^{\prime}(\lambda) \mathbf{f}(-\lambda))\] \[-\frac{T}{c}(\mathbf{f}(\lambda)\mathbf{f}^{\prime}(\lambda)\mathbf{f}(- \lambda)\mathbf{f}^{\prime}(-\lambda)+3\mathbf{f}^{\prime}(\lambda)\mathbf{f}^{\prime}(- \lambda))\]
is real for \(\lambda\in i\mathbb{R}\), and if the spectrum of (2.9) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis then \(\varPhi_{3}(\lambda)=0\).
We remark that the exponential prefactor in the definitions of \(\Delta_{3}\) and \(\varPhi_{3}\) does not affect the roots of \(\operatorname{disc}_{\mu}(p(\mu,\lambda))\) and \(\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))\), and it is included because, in practice, working with real-valued polynomials aids in root finding, compared with dealing with complex-valued polynomials. Additionally, as was the case for the generalized KdV equation, under some additional assumptions, \(\varPhi_{3}(\lambda)=0\) also serves as a sufficient condition for the spectrum to bifurcate away from the imaginary axis.
Alternatively, let
\[\mathbf{v}_{0}=e^{-\frac{\lambda x}{3c}}\mathbf{v},\]
and we can demonstrate that if \(\mathbf{v}\) satisfies (2.10) then \(\mathbf{v}_{0}\) satisfies
\[\mathbf{v}_{0}{}_{x}=\left(\mathbf{A}(x,\lambda)-\frac{\lambda}{3c}\mathbf{I}_{ 3}\right)\mathbf{v}_{0}=:\mathbf{A}_{0}(x,\lambda)\mathbf{v}_{0}. \tag{2.12}\]
We verify that (A1) and (A2) remain valid for (2.11). An important difference here is that \(\operatorname{tr}(\mathbf{A}_{0}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) and all \(\lambda\in\mathbb{C}\). Let \(\mathbf{M}_{0}(\lambda)\) denote the monodromy matrix of (2.12); its characteristic polynomial is
\[p_{0}(\mu,\lambda)=-\mu^{3}+\boldsymbol{f}_{0}(\lambda)\mu^{2}-\boldsymbol{f}_{0}(-\lambda)\mu+1\quad\text{for }\lambda\in\mathbb{C},\]
where \(\boldsymbol{f}_{0}(\lambda)=\operatorname{tr}(\mathbf{M}_{0}(\lambda))\), and
\[p_{0}(\mu,\lambda)=-\mu^{3}+\boldsymbol{f}_{0}(\lambda)\mu^{2}-\overline{ \boldsymbol{f}_{0}(\lambda)}\mu+1\quad\text{for }\lambda\in i\mathbb{R}.\]
Note that if \(p(\mu,\lambda)=0\) then \(p_{0}\big{(}e^{-\frac{\lambda T}{3c}}\mu,\lambda\big{)}=0\). In other words, a root of \(p_{0}(\cdot,\lambda)\) for \(\lambda\in i\mathbb{R}\) is a rotation of a root of \(p(\cdot,\lambda)\). Consequently, the number of eigenvalues of \(\mathbf{M}(\lambda)\) on the unit circle is the same as the number of eigenvalues of \(\mathbf{M}_{0}(\lambda)\) on the unit circle. Note that
\[\boldsymbol{f}_{0}(\lambda)=\operatorname{tr}(\mathbf{M}_{0}(\lambda))=e^{-\frac{\lambda T}{3c}}\operatorname{tr}(\mathbf{M}(\lambda))=e^{-\frac{\lambda T}{3c}}\boldsymbol{f}(\lambda),\]
and we calculate
\[\operatorname{res}_{\mu} (p_{0}(\mu,\lambda),p_{0\lambda}(\mu,\lambda))\] \[= e^{\frac{\lambda T}{c}}\left(\frac{T}{3c}\boldsymbol{f}(\lambda )+\boldsymbol{f}^{\prime}(\lambda)\right)^{3}-e^{-\frac{\lambda T}{c}}\left( \frac{T}{3c}\boldsymbol{f}(-\lambda)+\boldsymbol{f}^{\prime}(-\lambda) \right)^{3}\] \[+\boldsymbol{f}(\lambda)\left(\frac{T}{3c}\boldsymbol{f}(- \lambda)+\boldsymbol{f}^{\prime}(-\lambda)\right)^{2}\left(\frac{T}{3c} \boldsymbol{f}(\lambda)+\boldsymbol{f}^{\prime}(\lambda)\right)\] \[-\boldsymbol{f}(-\lambda)\left(\frac{T}{3c}\boldsymbol{f}( \lambda)+\boldsymbol{f}^{\prime}(\lambda)\right)^{2}\left(\frac{T}{3c} \boldsymbol{f}(-\lambda)+\boldsymbol{f}^{\prime}(-\lambda)\right).\]
We deduce that \(\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))=0\) for \(\lambda\in i\mathbb{R}\) if and only if \(\operatorname{res}_{\mu}(p_{0}(\mu,\lambda),p_{0\lambda}(\mu,\lambda))=0\).
Therefore, Theorem 2 establishes that the monodromy matrix of (2.10) at \(\lambda\in i\mathbb{R}\) has three simple eigenvalues on the unit circle, that is, \(\lambda\) belongs to the \(L^{2}(\mathbb{R})\) essential spectrum with an algebraic multiplicity three, if and only if \(\Delta_{3}(\lambda)=e^{-\frac{2\lambda T}{c}}\operatorname{disc}_{\mu}p(\mu,\lambda)<0\) or, equivalently, \(\operatorname{disc}_{\mu}(p_{0}(\mu,\lambda))<0\). The latter holds when the Floquet discriminant for (2.12) lies within the deltoidal region bounded by the deltoid curve in (1.14). Furthermore, Theorem 4 demonstrates that if the spectrum of (2.2) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis in a transversal manner then the bifurcation index \(\Phi_{3}(\lambda)=e^{-\frac{\lambda T}{c}}\operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))\) or, equivalently, \(\operatorname{res}_{\mu}(p_{0}(\mu,\lambda),p_{0\lambda}(\mu,\lambda))\) becomes zero. We will present numerical experiments to validate these analytical findings.
**Remark**.: When \(\lambda=0\), we may assume without loss of generality that \(c=1\), and (2.9) simplifies by quadrature into Hill's equation
\[v_{xx}+Q(x)v=v_{0}(x),\]
where \(v_{0}(x)\) represents the (time-independent) initial value of \(v\).
### Numerical experiments for equations of BBM type
We begin our numerical experiments with
\[\lambda(v-v_{xx})+v_{xxx}+((4+5\cos(x))v)_{x}=0,\qquad\lambda\in\mathbb{C}. \tag{2.13}\]
Similar to the Mathieu equation, (2.13) does not, as far as we are aware, correspond to a stability problem for a periodic traveling wave of the generalized BBM equation, but it serves as a valuable test case.
In Figure 7, the blue curves are the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (2.13). It is noteworthy that the imaginary axis is included in the spectrum. Additionally, the magenta lines, running parallel to the imaginary axis, indicate intervals where the spectrum has an algebraic multiplicity three. We numerically observe that these intervals are approximately \((-2.25i,-2.13i)\), \((-0.27i,0.27i)\) and \((2.13i,2.25i)\). The multiplicity appears to be one in the remaining intervals of the imaginary axis. The dashed red lines, representing the graph of the bifurcation index rotated by \(90^{\circ}\), intersect the imaginary axis, identifying the zeros of the bifurcation index where bifurcations of the spectrum away from the imaginary axis are anticipated. Our numerical findings corroborate this, revealing two isolas in the upper half-plane, approximately within the ranges of \(0.077i\) to \(0.128i\) as well as \(2.225i\) to \(2.285i\).
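The computation behind Figure 7 can be reproduced schematically as follows (a minimal scipy sketch, not the code used for the figures; the tolerances are illustrative). It integrates the fundamental matrix of (2.13), written in the variables \((v,v_{x},v_{xx})\), over one period and counts Floquet multipliers on the unit circle:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # period of the coefficient 4 + 5*cos(x)

def monodromy(lam):
    """Fundamental matrix of (2.13) integrated over one period from the identity."""
    def rhs(x, y):
        V = y.reshape(3, 3)
        A = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [5 * np.sin(x) - lam, -(4 + 5 * np.cos(x)), lam]],
                     dtype=complex)
        return (A @ V).ravel()
    sol = solve_ivp(rhs, [0, T], np.eye(3, dtype=complex).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)

for nu in [0.1, 1.0, 2.2]:
    mu = np.linalg.eigvals(monodromy(1j * nu))
    n = int(np.sum(np.abs(np.abs(mu) - 1) < 1e-6))
    print(f"lambda = {nu}i: {n} Floquet multiplier(s) on the unit circle")
# cf. Figure 7: three multipliers for 0.1i and 2.2i, one multiplier for 1.0i.
```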
We turn our attention to the BBM equation
\[u_{t}-u_{xxt}+u_{x}+uu_{x}=0. \tag{2.14}\]
A traveling wave solution of (2.14) takes the form \(u(x,t)=\phi(x-ct)\) for some nonzero \(c\in\mathbb{R}\), the wave speed, and it satisfies by quadrature
\[c\phi_{x}^{2}=2E+2a\phi+(1-c)\phi^{2}-\tfrac{1}{3}\phi^{3},\]
where \(a\) and \(E\) are real constants. Here we assume that \(\phi\) is periodic.
For example, Figure 8 depicts the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearized operator of (2.14) about a periodic traveling wave for \(c=1\), \(a=\tfrac{7}{6}\) and \(E=-1\), which can be expressed in closed form as
\[\phi(x)=1+\mathrm{cn}^{2}\left(\sqrt{\tfrac{5}{12}}x;\tfrac{1}{5}\right). \tag{2.15}\]
Here "\(\mathrm{cn}\)" represents a Jacobi elliptic function, the elliptic cosine function. It is worth noting that \(\phi\) oscillates between a minimum of \(\phi=1\) and a maximum of \(\phi=2\) with a period \(2\sqrt{\tfrac{12}{5}}K\left(\tfrac{1}{5}\right)\approx 5.142\), where \(K\) is the complete elliptic integral of the first kind. We numerically observe that for \(\lambda\in i\mathbb{R}\) in the interval \((-\lambda_{0}i,\lambda_{0}i)\), where \(\lambda_{0}\) is approximately \(0.5\), the algebraic multiplicity
Figure 7: The numerically computed spectrum for (2.13). The magenta lines indicate intervals of algebraic multiplicity three, and the dashed red lines intersect the imaginary axis where the bifurcation index becomes zero.
is three. Elsewhere along the imaginary axis, the multiplicity appears to be one. Additionally, apart from the zero at \(0\in i\mathbb{R}\), no other zeros of the bifurcation index are found. We numerically observe no bifurcations of the spectrum near \(0\in\mathbb{C}\) away from the imaginary axis, implying the spectral stability of the underlying periodic traveling wave. Indeed, all numerically computed eigenvalues have a maximal real part \(<3.2\times 10^{-8}\).
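The closed form (2.15) is easy to evaluate; a short check of its stated period and range (a sketch assuming scipy's parameter convention for the modulus \(\tfrac{1}{5}\), which reproduces the quoted numbers):

```python
import numpy as np
from scipy.special import ellipj, ellipk

m = 0.2  # the parameter 1/5
period = 2 * np.sqrt(12 / 5) * ellipk(m)
print(f"period = {period:.4f}")  # ~5.1422

x = np.linspace(0, period, 2001)
sn, cn, dn, ph = ellipj(np.sqrt(5 / 12) * x, m)
phi = 1 + cn**2
print(f"min = {phi.min():.4f}, max = {phi.max():.4f}")  # oscillates between 1 and 2
```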
Lastly, let's consider the (focusing) mBBM equation
\[u_{t}-u_{xxt}+(u^{3})_{x}=0 \tag{2.16}\]
and a cnoidal solution
\[u(x,t)=\sqrt{\frac{2m}{2m-1}}\operatorname{cn}\left(\frac{x-t}{\sqrt{2m-1}};m \right),\qquad m>1/2. \tag{2.17}\]
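Substituting \(u=F(x-t)\) into (2.16) and integrating once (with zero constant of integration for the cnoidal profile) gives \(F^{\prime\prime}+F^{3}-F=0\); a finite-difference spot check of (2.17), again assuming scipy's parameter convention for \(m\):

```python
import numpy as np
from scipy.special import ellipj

m = 0.5 + np.sqrt(5) / 10          # elliptic parameter, > 1/2
A = np.sqrt(2 * m / (2 * m - 1))   # amplitude of the cnoidal profile
b = 1 / np.sqrt(2 * m - 1)         # argument scaling

s = np.linspace(-5, 5, 4001)
h = s[1] - s[0]
F = A * ellipj(b * s, m)[1]        # cn is the second output of ellipj

Fpp = (F[2:] - 2 * F[1:-1] + F[:-2]) / h**2   # centered second difference
residual = Fpp + F[1:-1]**3 - F[1:-1]
print(f"max |residual| = {np.abs(residual).max():.2e}")  # small (FD truncation error)
```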
Although our numerical investigation is not exhaustive, the stability and instability of periodic traveling waves of (2.16) seem to resemble those of the mKdV equation. Specifically, the snoidal solutions (for the defocusing equation) and dnoidal solutions (for the focusing equation) appear stable while the cnoidal solutions seem to exhibit instability.
Figure 9 shows the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearized operator of (2.16) about a cnoidal solution (2.17) for \(m=\frac{1}{2}+\frac{\sqrt{5}}{10}\approx 0.7236\). Modulational instability is evident, with the spectrum bifurcating away from the imaginary axis at \(0\in\mathbb{C}\) and approximately \(\pm 0.3i\). These bifurcation points align with the three zeros of the bifurcation index.
### Spectrum along the imaginary axis towards \(\pm i\infty\)
Applying Theorem 6 to (2.6), where \(\lambda=i\nu^{3}\) and \(|\nu|\gg 1\), we obtain
\[\Delta_{3}(i\nu^{3})\sim 16\sinh^{2}\left(\tfrac{\sqrt{3}}{2}\nu T\right) \left(\cos\left(\tfrac{3}{2}\nu T\right)-\cosh\left(\tfrac{\sqrt{3}}{2}\nu T \right)\right)^{2}>0,\]

provided that \(\nu\neq 0\).

Figure 8: The numerically computed spectrum of the linearized operator of (2.14) about (2.15), suggesting spectral stability. The spectrum has an algebraic multiplicity three in the interval \(\approx(-0.5i,0.5i)\), while the multiplicity appears to be one elsewhere along \(i\mathbb{R}\). The only zero of the bifurcation index occurs at \(\lambda=0\), where bifurcation of the spectrum away from the imaginary axis does not seem to occur.

Consequently, we deduce from Theorem 2 that \(\lambda\in i\mathbb{R}\), \(|\lambda|\gg 1\), is in the \(L^{2}(\mathbb{R})\) essential spectrum of (2.6) with an algebraic multiplicity one. Furthermore,
\[\varPhi_{3}(i\nu^{3}) \sim-\frac{1}{27}\frac{T^{3}}{\nu^{6}}\Delta_{3}(i\nu^{3})\] \[=-\frac{16}{27}\frac{T^{3}}{\nu^{6}}\sinh^{2}\left(\tfrac{\sqrt{3 }}{2}\nu T\right)\left(\cos\left(\tfrac{3}{2}\nu T\right)-\cosh\left(\tfrac{ \sqrt{3}}{2}\nu T\right)\right)^{2}<0.\]
Consequently, Theorem 4 implies that only a finite number of points along the imaginary axis can exhibit transversal bifurcation of the spectrum away from the axis.
Note that the linearization of (2.7) does not adhere to the form (1.25), which prevents us from directly applying Theorem 6. Had it been possible to construct a full-dimensional and linearly independent set of asymptotic solutions, either through the WKB method or alternative means, we could have followed the argument in a similar manner to the proof of Theorem 6 to derive leading-order expressions for the discriminant of the characteristic polynomial and the bifurcation index. However, our attempts to identify such a set of solutions were unsuccessful, leaving us unable to generate the required formulae.
## 3 Fourth order equations
### The nonlinear Schrodinger equation
We turn our attention to \(n=4\) and the spectral problem for the nonlinear Schrodinger equation
\[\lambda\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\begin{pmatrix}\mathbf{\mathcal{K}}&\mathbf{\mathcal{L}}_{+}\\ -\mathbf{\mathcal{L}}_{-}&\mathbf{\mathcal{K}}\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix},\] (3.1a) where \[\mathbf{\mathcal{L}}_{+}=\mathbf{\mathcal{L}}_{+}^{\dagger}=\partial_{xx}+Q_{+}(x), \qquad\mathbf{\mathcal{L}}_{-}=\mathbf{\mathcal{L}}_{-}^{\dagger}=\partial_{xx}+Q_{-}( x),\] (3.1b) and \[\mathbf{\mathcal{K}}=-\mathbf{\mathcal{K}}^{\dagger}=\tfrac{1}{2}R^{\prime}(x)+R(x) \partial_{x}. \tag{3.1c}\]
Here \(Q_{\pm}(x)\) and \(R(x)\) are real-valued, smooth, and \(T\)-periodic functions for some \(T>0\).

Figure 9: The numerically computed spectrum of the linearized operator of (2.16) about (2.17) for \(m=\tfrac{1}{2}+\tfrac{\sqrt{5}}{10}\). The spectrum has an algebraic multiplicity one except at \(0\in\mathbb{C}\). The bifurcation index has zeros at \(0\) as well as \(\approx\pm 0.3i\).

We introduce
\[w_{1}=v_{1x}-\tfrac{1}{2}R(x)v_{2}\quad\text{and}\quad w_{2}=-v_{2x}-\tfrac{1}{2}R(x)v_{1}, \tag{3.2}\]
which is essentially the nonzero phase analogue of what is presented in [21], and we can reformulate (3.1) as
\[\begin{pmatrix}v_{1}\\ v_{2}\\ w_{1}\\ w_{2}\end{pmatrix}_{x}=\begin{pmatrix}0&\tfrac{1}{2}R(x)&1&0\\ -\tfrac{1}{2}R(x)&0&0&-1\\ -Q_{-}(x)+\tfrac{1}{4}R(x)^{2}&-\lambda&0&-\tfrac{1}{2}R(x)\\ -\lambda&Q_{+}(x)-\tfrac{1}{4}R(x)^{2}&\tfrac{1}{2}R(x)&0\end{pmatrix}\begin{pmatrix} v_{1}\\ v_{2}\\ w_{1}\\ w_{2}\end{pmatrix}. \tag{3.3}\]
In short,
\[\mathbf{v}_{x}=\mathbf{A}(x,\lambda)\mathbf{v}.\]
We verify that (A1) and (A2) hold for
\[\mathbf{B}=-\mathbf{B}^{-1}=\begin{pmatrix}0&0&-1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&-1&0&0\end{pmatrix}.\]
Additionally, we verify that \(\operatorname{tr}(\mathbf{A}(x,\lambda))=0\) for all \(x\in\mathbb{R}\) for all \(\lambda\in\mathbb{C}\).
We define the monodromy matrix of (3.3) as
\[\mathbf{M}(\lambda)=\mathbf{V}(T,\lambda),\quad\text{where}\quad\mathbf{V}_{x} =\mathbf{A}(x,\lambda)\mathbf{V}\quad\text{and}\quad\mathbf{V}(0,\lambda)= \mathbf{I}_{4},\]
\(\mathbf{I}_{4}\) is the \(4\times 4\) identity matrix. We define the characteristic polynomial of the monodromy matrix of (3.3) as
\[p(\mu,\lambda)=\det(\mathbf{M}(\lambda)-\mu\mathbf{I}_{4}).\]
We deduce from Lemma 1 that
\[p(\mu,\lambda)=\mu^{4}-\mathbf{f}(\lambda)\mu^{3}+\mathbf{g}(\lambda)\mu^{2}-\mathbf{f}(- \lambda)\mu+1,\qquad\lambda\in\mathbb{C},\]
where \(\mathbf{f}(\lambda)\) and \(\mathbf{g}(\lambda)\) are defined in (1.16). Importantly, \(\mathbf{g}(\lambda)\) is real for \(\lambda\in i\mathbb{R}\).
Our interest lies in the \(L^{2}(\mathbb{R})\) essential spectrum of (3.1) along the imaginary axis, which holds significant importance when investigating the stability and instability of periodic traveling waves of the nonlinear Schrodinger equation and related equations. Theorem 3 establishes distinct regions in \(\mathbb{R}^{3}\) corresponding to varying algebraic multiplicities, by utilizing the Floquet discriminant defined in (1.16). Theorem 5 demonstrates that if the spectrum of (3.1) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis in a transversal manner then the bifurcation index \(\Phi_{4}(\lambda)\), defined in (1.24), reaches zero.
### Trivial phase solutions
Suppose for simplicity that \(\mathbf{\mathcal{K}}=0\), and (3.1) becomes
\[\begin{split}&\lambda v_{1}=\mathbf{\mathcal{L}}_{+}v_{2}=v_{2xx}+Q_ {+}(x)v_{2},\\ &\lambda v_{2}=-\mathbf{\mathcal{L}}_{-}v_{1}=-v_{1xx}-Q_{-}(x)v_{1} \end{split} \tag{3.4}\]
for \(\lambda\in\mathbb{C}\), where \(Q_{\pm}(x)\) are real-valued smooth functions satisfying \(Q_{\pm}(x+T)=Q_{\pm}(x)\) for some \(T>0\), the period. This arises in the stability problem for trivial phase solutions of the nonlinear Schrodinger equation, among other applications. The first author and Rapti [5] have determined the conditions under which the eigenvalues of the monodromy matrix associated with (3.4) lie on
the unit circle. It is noteworthy that the monodromy matrix and the characteristic polynomial exhibit additional symmetry. Specifically,
\[\mathbf{M}^{\top}(x,\lambda)\begin{pmatrix}\mathbf{0}&-\mathbf{I}_{2}\\ \mathbf{I}_{2}&\mathbf{0}\end{pmatrix}=-\begin{pmatrix}\mathbf{0}&-\mathbf{I}_ {2}\\ \mathbf{I}_{2}&\mathbf{0}\end{pmatrix}\mathbf{M}(x,\lambda)\]
for all \(x\in\mathbb{R}\) for all \(\lambda\in\mathbb{C}\). Importantly, the characteristic polynomial of the monodromy matrix takes the form
\[p(\mu,\lambda)=\mu^{4}-f(\lambda)\mu^{3}+g(\lambda)\mu^{2}-f(\lambda)\mu+1,\]
where both \(f(\lambda)\) and \(g(\lambda)\) are real-valued and even for \(\lambda\in i\mathbb{R}\).
A direct calculation shows that (1.9) becomes
\[p^{\sharp}(\nu,\lambda)=(2+2f(\lambda)+g(\lambda))\nu^{4}+(-12+2g(\lambda))\nu ^{2}+2-2f(\lambda)+g(\lambda).\]
Let
\[\Delta_{4}(\lambda):= \operatorname{disc}_{\nu}p^{\sharp}(\nu,\lambda)\] \[= -4096(8+f(\lambda)^{2}-4g(\lambda))^{2}(2+2f(\lambda)+g(\lambda) )(-2+2f(\lambda)-g(\lambda)),\]
and
\[P_{4}(\lambda)=16(-6+g(\lambda))(2-2f(\lambda)+g(\lambda)),\] \[D_{4}(\lambda)=-256(8+f(\lambda)^{2}-4g(\lambda))(2-2f(\lambda)+ g(\lambda))^{2}.\]
Suppose that \(\lambda\in i\mathbb{R}\) and \(\Delta_{4}(\lambda),P_{4}(\lambda),D_{4}(\lambda)\neq 0\). Following the proof of Theorem 3, we can establish that the number of distinct eigenvalues of the monodromy matrix for (3.4) on the unit circle or, equivalently, the algebraic multiplicity of the \(L^{2}(\mathbb{R})\) essential spectrum of (3.4) is (see the sketch after this list):
* four if \(\Delta_{4}(\lambda)>0\) and \(P_{4}(\lambda),D_{4}(\lambda)<0\),
* two if \(\Delta_{4}(\lambda)<0\),
* zero if \(\Delta_{4}(\lambda)>0\) and either \(P_{4}(\lambda)\) or \(D_{4}(\lambda)\) is positive.
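In code, the classification reads as follows (a direct transcription of the three cases; the inputs are the real values \(f(\lambda)\) and \(g(\lambda)\) for \(\lambda\in i\mathbb{R}\)):

```python
def multiplicity_of_spectrum(f, g):
    """Algebraic multiplicity of lambda in i*R for (3.4), from the real
    Floquet discriminant (f(lambda), g(lambda)); nondegenerate cases only."""
    Delta4 = -4096 * (8 + f**2 - 4*g)**2 * (2 + 2*f + g) * (-2 + 2*f - g)
    P4 = 16 * (-6 + g) * (2 - 2*f + g)
    D4 = -256 * (8 + f**2 - 4*g) * (2 - 2*f + g)**2
    if Delta4 > 0 and P4 < 0 and D4 < 0:
        return 4
    if Delta4 < 0:
        return 2
    return 0  # Delta4 > 0 and (P4 > 0 or D4 > 0)

print(multiplicity_of_spectrum(0.0, 0.0))  # 4: p(mu) = mu^4 + 1, all roots on the circle
```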
Figure 10 depicts the regions in the \((f,g)\) plane, corresponding to different numbers of eigenvalues of the monodromy matrix for (3.4) on the unit circle or, equivalently, varying algebraic multiplicities of the \(L^{2}(\mathbb{R})\) essential spectrum along the imaginary axis. The blue region corresponds to a multiplicity four, the red region corresponds to a multiplicity zero, and the remaining regions of \(\mathbb{R}^{2}\) correspond to a multiplicity two. It is important to note that these regions are obtained by restricting the regions in the right panel of Figure 1, in the \((f_{1},f_{2},f_{3})\) coordinates, to the \(f_{2}=0\) plane.

Figure 10: The regions in \(\mathbb{R}^{2}\) for different algebraic multiplicities of the \(L^{2}(\mathbb{R})\) essential spectrum of (3.4) along the imaginary axis.
These regions in Figure 10 effectively determine the algebraic multiplicity of the spectrum over a dense open subset of the \((f,g)\) plane. Further investigation would be required to classify the borderline cases where one or more of \(\Delta_{4},P_{4},D_{4}\) vanishes. Details regarding these cases are not provided here.
Additionally, let
\[\varPhi_{4}(\lambda):= \operatorname{res}_{\mu}(p(\mu,\lambda),p_{\lambda}(\mu,\lambda))\] \[= (g^{\prime}(\lambda)^{2}+(g(\lambda)-2)f^{\prime}(\lambda)^{2}-f( \lambda)f^{\prime}(\lambda)g^{\prime}(\lambda))^{2},\]
and Theorem 5 demonstrates that if the \(L^{2}(\mathbb{R})\) essential spectrum of (3.4) bifurcates at \(\lambda\in i\mathbb{R}\) away from the imaginary axis then \(\varPhi_{4}(\lambda)=0\).
As an alternative approach, we introduce \(\omega=\mu+\frac{1}{\mu}\), which is well-defined because \(p(0,\lambda)=1\) for all \(\lambda\in\mathbb{C}\). This conformally maps the unit circle to the interval \([-2,2]\). A straightforward calculation shows that the characteristic equation for the monodromy matrix associated with (3.4) becomes
\[\omega^{2}-f(\lambda)\omega+g(\lambda)-2=0,\]
and the roots are given by
\[\omega_{\pm}(\lambda)=\frac{f(\lambda)\pm\sqrt{f(\lambda)^{2}-4g(\lambda)+8}} {2}.\]
We can demonstrate that the monodromy matrix for (3.4) has two eigenvalues on the unit circle provided that \(\omega_{+}(\lambda)\in[-2,2]\), and it has an additional two eigenvalues on the unit circle provided that \(\omega_{-}(\lambda)\in[-2,2]\). Furthermore, we can show that \(\varPhi_{4}(\lambda)=0\) if and only if either \(\omega_{+}^{\prime}(\lambda)=0\) or \(\omega_{-}^{\prime}(\lambda)=0\). The \(L^{2}(\mathbb{R})\) essential spectrum of (3.4) bifurcates at a critical point of \(\omega_{\pm}\) away from the imaginary axis if and only if \(\omega_{+}(\lambda)\) or \(\omega_{-}(\lambda)\) lies within the interval \((-2,2)\), respectively. The zeros of the bifurcation index do not lead to bifurcation when they are critical points of \(\omega_{\pm}\) but \(\omega_{\pm}(\lambda)\) lies outside the interval \((-2,2)\).
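The same count expressed through \(\omega_{\pm}\) (a minimal sketch, consistent with the classifier above):

```python
import numpy as np

def count_multipliers_on_circle(f, g):
    """Count Floquet multipliers of (3.4) on the unit circle via
    omega = mu + 1/mu: each real root of omega^2 - f*omega + (g - 2)
    lying in [-2, 2] contributes a pair of multipliers on the circle."""
    disc = f**2 - 4 * g + 8
    if disc < 0:
        return 0  # omega_+ and omega_- form a complex-conjugate pair
    w = (f + np.array([1.0, -1.0]) * np.sqrt(disc)) / 2
    return 2 * int(np.sum(np.abs(w) <= 2))

print(count_multipliers_on_circle(0.0, 0.0))  # 4, matching the classification above
```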
For example, Figure 11 presents the result from a numerical experiment with
\[\begin{split}\lambda v_{1}&=v_{2xx}+(6-\cos(x)+3 \sin(2x))v_{2},\\ \lambda v_{2}&=-v_{1xx}-(4-3\cos(x)+2\sin(2x))v_{1}. \end{split} \tag{3.5}\]
In the left panel, the blue curves represent the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum for 2000 values of the Floquet exponent for each \(v_{1}\) and \(v_{2}\). Similar to Sections 2.2 and 2.4, the magenta lines, running parallel to the imaginary axis, indicate intervals of algebraic multiplicity four. The numerical result supports this, revealing that for \(\lambda\in i\mathbb{R}\) in the interval \((-\lambda_{0}i,\lambda_{0}i)\), where \(\lambda_{0}\) is approximately \(2\), the algebraic multiplicity is four. Notably, the spectrum includes the interval approximately \((-1.58,1.58)\) on the real axis.
The dashed red curves, representing the graph of the bifurcation index, intersect the imaginary axis, identifying potential bifurcation points where the spectrum might move away from the imaginary axis. It is evident that the bifurcation index \(\varPhi_{4}(\lambda)=0\) at each of these bifurcation points along the imaginary axis. Unlike third-order equations, however, \(\varPhi_{4}(\lambda)=0\) is a necessary condition but not a sufficient condition for such bifurcations. The left panel displays 10 points where the bifurcation index becomes zero, yet only seven of these points result in actual bifurcations, including the interval along the real axis. The three points where the bifurcation index vanishes but does not lead to spectral bifurcations are approximately at \(3.6i\), \(3.75i\), and \(5.7i\).
The right panel of Figure 11 shows the spectrum in green, rotated by \(90^{\circ}\), alongside the graphs of \(\omega_{\pm}\) in blue and red, respectively. We numerically observe that the algebraic multiplicity of \(\lambda\in i\mathbb{R}\) changes when \(\omega_{\pm}(\lambda)\) exit the interval \([-2,2]\), either by remaining real and exiting the interval or by colliding and becoming complex. Additionally, we numerically observe that the
critical points of \(\omega_{\pm}\) within the interval \((-2,2)\) correspond to spectral bifurcations away from the imaginary axis. Three critical points of \(\omega_{\pm}\) occur when \(\omega_{\pm}\) is outside of the interval \((-2,2)\), approximately at \(3.6i\), \(3.75i\), and \(5.7i\), but they do not lead to bifurcations. The critical point of \(\omega_{-}\) at around \(3.6i\) is visible, while the other two critical points outside of \((-2,2)\) are not visible at this scale.
### Nontrivial phase solutions
We turn our attention to the nonlinear Schrodinger equation
\[i\psi_{t}=\psi_{xx}+g(|\psi|^{2})\psi \tag{3.6}\]
for some nonlinearity \(g\), and nontrivial phase solutions of the form
\[\psi(x,t)=A(x)e^{iK(x)+i\omega t}\quad\text{for some functions $A(x)$ and $K(x)$.}\]
As is the case for the KdV equation, the stability of the traveling wave solutions to the NLS equation has been extensively studied: see [7, 8, 13, 14, 45] for some examples.
The spectral problem for (3.6) concerning a nontrivial phase solution does not possess the additional symmetry present in the spectral problem for a trivial phase solution. Consequently, the trace of the monodromy matrix \(\operatorname{tr}(\mathbf{M}(\lambda))\) takes complex values. However, it is noteworthy that \(\frac{1}{2}(\operatorname{tr}(\mathbf{M}(\lambda))^{2}-\operatorname{tr} \bigl{(}\mathbf{M}^{2}(\lambda)\bigr{)})\) is necessarily real for \(\lambda\in i\mathbb{R}\).
In what follows, we select \(g(x)=3x^{2}\)--namely, the quintic focusing nonlinearity. A direct calculation leads to
\[A_{xx}-\frac{\kappa^{2}}{A^{3}}+\omega A+3A^{5}=0\quad\text{and}\quad K_{x}= \frac{\kappa}{A^{2}}.\]
Here we select \(\kappa=\frac{3\sqrt{2}}{8}\) and \(\omega=\frac{31}{16}\). After integration, we obtain
\[A_{x}^{2}=-\frac{21}{32}-\frac{9}{32A^{2}}+\frac{31}{16}A^{2}-A^{6}.\]
The turning points, which are the zeros of the right side, include \(A=\pm 1\), \(\pm\frac{i}{2}\), \(\pm\frac{\sqrt{3}}{2}\), and \(\pm i\frac{\sqrt{6}}{2}\), with a period approximately \(1.93\).

Figure 11: On the left, the numerically computed “frog” spectrum of (3.5), alongside the intervals of algebraic multiplicity four and the graph of the bifurcation index. On the right, the spectrum, rotated by \(90^{\circ}\), together with the graphs of \(\omega_{\pm}\).
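These zeros are quickly confirmed by multiplying the right side by \(A^{2}\) and substituting \(u=A^{2}\), which leaves a quartic in \(u\); a numpy check:

```python
import numpy as np

# A^2 * A_x^2 = -(A^8 - (31/16) A^4 + (21/32) A^2 + 9/32); set u = A^2.
u = np.roots([1, 0, -31 / 16, 21 / 32, 9 / 32])
print(np.sort(u.real))             # [-1.5, -0.25, 0.75, 1.0]
print(np.sqrt(u.astype(complex)))  # A = 1, sqrt(3)/2, i/2, i*sqrt(6)/2 (up to sign/order)
```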
Figure 12 depicts the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearization of (3.6) with the quintic focusing nonlinearity about a nontrivial phase solution. The numerical result shows that in the depicted range, the imaginary axis is included in the spectrum with an algebraic multiplicity two. The dashed red curves represent the graph of the bifurcation index, and their intersections with the imaginary axis identify potential bifurcation points. The numerical result confirms that the spectrum bifurcates from the imaginary axis at \(0\) and approximately \(\pm 3.72i\).
### The Boussinesq equation
Another illustrative example is the Boussinesq equation
\[u_{tt}-u_{xx}+u_{xxxx}-g(u)_{xx}=0 \tag{3.7}\]
for some appropriate nonlinearity \(g\) or, alternatively,
\[u_{tt}-2cu_{xt}+(c^{2}-1)u_{xx}+u_{xxxx}-g(u)_{xx}=0 \tag{3.8}\]
in the frame of reference moving at some nonzero \(c\in\mathbb{R}\), the wave speed. Typically, the nonlinearity \(g(u)\) is chosen to be \(u^{2}\), but other choices are possible. Let \(\phi(x)\) denote a standing wave solution of (3.8) or, equivalently, \(\phi(x-ct)\) represents a traveling wave solution of (3.7). We assume that \(\phi(x+T)=\phi(x)\) for some \(T>0\), the period.
Linearizing (3.8) about a standing wave solution \(\phi(x)\) leads to [39, 16]
\[\lambda^{2}v-2c\lambda v_{x}+v_{xxxx}+(c^{2}-1)v_{xx}-(g^{\prime}(\phi)v)_{xx}=0,\qquad \lambda\in\mathbb{C}. \tag{3.9}\]
Introducing
\[w=v_{xx}-(g^{\prime}(\phi)+1-c^{2})v=:v_{xx}-Q(x)v,\] \[v_{1}=v_{x}\quad\text{and}\quad-\lambda^{2}w_{1}=w_{x},\]

and we can reformulate (3.9) as

\[\begin{pmatrix}v\\ w\\ v_{1}\\ w_{1}\end{pmatrix}_{x}=\begin{pmatrix}0&0&1&0\\ 0&0&0&-\lambda^{2}\\ Q(x)&1&0&0\\ 1&0&-\frac{2c}{\lambda}&0\end{pmatrix}\begin{pmatrix}v\\ w\\ v_{1}\\ w_{1}\end{pmatrix}. \tag{3.10}\]

Figure 12: The numerically computed spectrum of the linearized operator of the focusing quintic Schrödinger equation about a nontrivial phase solution.
In short,
\[\mathbf{v}_{x}=\mathbf{A}(x,\lambda)\mathbf{v}.\]
We verify that (A1) and (A2) hold for
\[\mathbf{B}(\lambda)=\begin{pmatrix}8c^{3}&-2c&\lambda&-4c^{2}\lambda\\ -2c&0&0&\lambda\\ -\lambda&0&0&0\\ 4c^{2}\lambda&-\lambda&0&2c\lambda^{2}\end{pmatrix},\quad\text{where}\quad \mathbf{B}(\lambda)^{-1}=\begin{pmatrix}0&0&-\frac{1}{\lambda}&0\\ 0&-2c&0&-\frac{1}{\lambda^{2}}\\ \frac{1}{\lambda}&0&0&-\frac{2c}{\lambda^{2}}\\ 0&\frac{1}{\lambda}&-\frac{2c}{\lambda^{2}}&0\end{pmatrix}.\]
Note that (3.10) is infinitesimally symplectic when \(c=0\). Indeed, \(\mathbf{A}(x,-\lambda)=\mathbf{A}(x,\lambda)\) and \(\mathbf{B}(\lambda)=\begin{pmatrix}\mathbf{0}&\lambda\mathbf{I}_{2}\\ -\lambda\mathbf{I}_{2}&\mathbf{0}\end{pmatrix}\) when \(c=0\). However, rather than having additional symmetry as seen in the case of the nonlinear Schrodinger equation, it appears that the symmetry reduces to the symplectic one.
Figure 13 presents the result from a numerical experiment with
\[Q(x)=5\cos(x)+\sin(2x)\quad\text{and}\quad c=1. \tag{3.11}\]
We pause to remark that this potential function does not arise in the stability problem for periodic traveling waves of the Boussinesq equation, as far as we are aware. The blue curves are the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of (3.10) for (3.11). We numerically observe that within the depicted range, the imaginary axis is included in the spectrum, and the algebraic multiplicity is zero or two. In fact, no portion of the spectrum has an algebraic multiplicity four. The dashed red lines represent the graph of the bifurcation index, and their intersections with the imaginary axis indicate potential bifurcation of the spectrum away from the imaginary axis.
Figure 13: The numerically computed “maple seed” spectrum for (3.10) for (3.11). The spectrum is depicted in blue, and the bifurcation index in red dashed.
Let's consider the Boussinesq equation with a cubic power-law nonlinearity
\[u_{tt}-u_{xx}+u_{xxxx}-(u^{3}-u)_{xx}=0 \tag{3.12}\]
and a periodic traveling wave solution in closed form
\[u(x,t)=\tfrac{\sqrt{7}}{2}\operatorname{dn}\Big{(}\sqrt{\tfrac{7}{8}}(x-t), \tfrac{6}{7}\Big{)}. \tag{3.13}\]
Here "dn" represents a Jacobi elliptic function, the delta amplitude. Note that (3.13) oscillates between a minimum of \(\tfrac{1}{2}\) and a maximum of \(\tfrac{\sqrt{7}}{2}\) with a period \(2\sqrt{\tfrac{8}{7}}\operatorname{K}(\tfrac{6}{7})\approx 5.156\), where K is the complete elliptic integral of the first kind.
Figure 14 shows the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearization operator of (3.12) about (3.13). The magenta curves represent the sign of the discriminant of the characteristic polynomial. The numerical result reveals that the algebraic multiplicity is either zero or four when the discriminant is positive, while the multiplicity is two when the discriminant is negative. The transition between multiplicities zero and two is detected by the change in the sign of the discriminant. There do not appear to be any intervals where the multiplicity is four, and the bifurcation index seems to have a simple zero at \(0\in\mathbb{C}\).
### Spectrum along the imaginary axis towards \(\pm i\infty\)
When applying (1.26) to a fourth-order equation, we obtain
\[\operatorname{res}_{4}(\lambda)\sim\frac{T^{4}\sin^{2}(\sqrt[4]{\lambda}T) \sinh^{2}(\sqrt[4]{\lambda}T)(\cos(\sqrt[4]{\lambda}T)-\cosh(\sqrt[4]{\lambda} T))^{4}}{\lambda^{3}},\]
which is generally not real even if \(\lambda\) belongs to the imaginary axis. This reflects the fact that the limiting eigenvalue problem, which is the ODE with \(\lambda\) as a parameter solved by the leading-order WKB approximations, is generally not Hamiltonian for a spectral problem of even order. Consequently, we do not expect to find information about spectrum and, hence, bifurcations along the imaginary axis. However, for the even-order equations discussed here, such as the Boussinesq equation and the nonlinear Schrodinger equation both with and without a phase, the WKB approximation gives a leading-order approximation in the spectral parameter that corresponds to the solutions of a Hamiltonian ODE. Therefore, in theory, we can produce leading-order approximations of the discriminant of the characteristic polynomial as well as the bifurcation index for such equations.

Figure 14: The numerically computed spectrum of the linearized operator of (3.12) about (3.13). The magenta curves represent the sign of the discriminant of the characteristic polynomial.
For the case of the nonlinear Schrodinger equation, both with and without a phase, we set \(\lambda=i\nu^{2}\) for \(\nu\gg 1\). Plugging this into the formula, we obtain the following leading-order expression for the discriminant of the characteristic polynomial:
\[\Delta_{4}(i\nu^{2})\sim-256\sin^{2}(\nu T)\sinh^{2}(\nu T)(\cos(\nu T)-\cosh( \nu T))^{4}\leqslant 0.\]
Consequently, there are two eigenvalues of the monodromy matrix on the unit circle for \(\lambda\in i\mathbb{R}\), \(|\lambda|\gg 1\), outside a discrete set. Furthermore, the leading-order expression for the bifurcation index is
\[\Phi_{4}(i\nu^{2})\sim\frac{16T^{4}\sin^{2}(\nu T)\sinh^{2}(\nu T)(\cos(\nu T)- \cosh(\nu T))^{4}}{\nu^{4}}\geqslant 0.\]
However, this vanishes to an even order when \(\nu\) is an integer multiple of \(\frac{2\pi}{T}\), suggesting that bifurcations of isolas are possible near these points, depending on how the bifurcation index perturbs. Determining whether there is an infinite number of bifurcations along the imaginary axis would require higher-order WKB approximations. For the nonlinear Schrodinger equation, with or without a phase, the next-order non-vanishing WKB approximation depends on the potential, and it can be numerically determined in any particular case. However, to the best of our knowledge, it is not particularly amenable to general analysis.
For the Boussinesq equation, the leading-order discriminant and bifurcation index are the same as those for the nonlinear Schrodinger equation. Therefore, higher-order WKB approximations, which depend on the potential, would be required to analyze the possibility of an infinite number of bifurcations.
## 4 The Kawahara equation
We conclude by considering a fifth-order KdV equation known as the Kawahara equation
\[u_{t}-u_{xxxxx}+\alpha u_{xxx}+uu_{x}=0,\qquad\alpha\in\mathbb{R}. \tag{4.1}\]
As with the previous equations, a number of papers have considered the stability of traveling wave solutions to the Kawahara equation [6, 17, 31, 38, 42, 43].
We will begin by providing a comprehensive discussion of periodic traveling waves of (4.1), given explicitly in terms of a Jacobi elliptic function. This expands on the treatment in [27]. In the case of \(\alpha=0\), (4.1) admits a one-parameter family of periodic stationary wave solutions given explicitly as
\[\phi(x)=420\sigma^{4}\operatorname{cn}^{4}\left(\sigma x,\tfrac{1}{2}\right)- 168\sigma^{4},\qquad\sigma\in\mathbb{R}. \tag{4.2}\]
Since (4.1) enjoys the same Galilean invariance as the generalized KdV equation, we can boost and translate the solution to obtain periodic traveling wave solutions. However, since these transformations do not alter the stability or instability of the solution, we will limit our discussion to the form (4.2).
For \(\alpha\neq 0\), periodic traveling wave solutions of (4.1) become more intricate due to the loss of scaling invariance present in the \(\alpha=0\) case. Nevertheless, we obtain a one-parameter family of periodic stationary wave solutions, represented as
\[\phi(x)=C_{1}+C_{2}\operatorname{cn}^{2}(\sigma x,m^{*})+C_{3}\operatorname{ cn}^{4}(\sigma x,m^{*}), \tag{4.3a}\]
where
\[C_{1} =\frac{-31\alpha^{2}+264992\sigma^{4}{m^{*}}^{2}-264992\sigma^{4}{m^{ *}}-18928\sigma^{4}+3640\alpha\sigma^{2}-7280\alpha\sigma^{2}{m^{*}}}{507},\] \[C_{2} =-\frac{280}{13}\sigma^{2}{m^{*}}(-\alpha+104\sigma^{2}{m^{*}}-52 \sigma^{2}), \tag{4.3b}\] \[C_{3} =1680\sigma^{4}{m^{*}}^{2}.\]
This family of traveling wave solutions is parameterized by \(\sigma\) and, additionally, by Galilean and translational invariance, which we do not include here. It is worth noting that, unlike the \(\alpha=0\) case, where \(\sigma\) can take any real value, for \(\alpha\neq 0\), \(\sigma\) must satisfy \(|\frac{\alpha}{\sigma^{2}}|<52\). Interestingly, the number \(52\) is not a numerical approximation but the exact value, corresponding to the unique real root of
\[31x^{3}-56784x-1406080=0.\]
When \(|\frac{\alpha}{\sigma^{2}}|<52\), (4.3) defines periodic traveling waves of (4.1) and they are characterized by an elliptic parameter \(m^{*}\) determined by the unique root of
\[-703040\left(m-2\right)(m+1)(2m-1)+56784\frac{\alpha}{\sigma^{2}}(m^{2}-m+1)- 31\left(\frac{\alpha}{\sigma^{2}}\right)^{3}=0 \tag{4.4}\]
within the interval \(m\in(0,1)\). It is important to note that changing the sign of \(\alpha\) results in exchanging the parameter \(m\) with its complementary parameter \(m^{\prime}=1-m\). This arises from the imaginary Jacobi transformation, which relates elliptic functions with argument \(x\) and parameter \(m\) to those with argument \(ix\) and parameter \(1-m\), together with the fact that under a complex rotation \(x\mapsto ix\), we have \(\partial_{xx}\mapsto-\partial_{xx}\) and \(\partial_{xxxx}\mapsto\partial_{xxxx}\).
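For instance, (4.4) is easily solved numerically for \(m^{*}\) (a brentq sketch; the case \(r=-32\) corresponds to \(\alpha=-2\), \(\sigma=\frac{1}{4}\) used in the numerical experiments below):

```python
import numpy as np
from scipy.optimize import brentq

def elliptic_parameter(r):
    """Solve (4.4) for m* in (0, 1), where r = alpha / sigma^2 with |r| < 52."""
    def lhs(m):
        return (-703040 * (m - 2) * (m + 1) * (2 * m - 1)
                + 56784 * r * (m**2 - m + 1) - 31 * r**3)
    return brentq(lhs, 1e-9, 1 - 1e-9)

print(f"m*(  0) = {elliptic_parameter(0.0):.4f}")    # 0.5000, consistent with (4.2)
print(f"m*(-32) = {elliptic_parameter(-32.0):.4f}")  # ~0.6185 (alpha=-2, sigma=1/4)
```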
Figure 15 depicts the elliptic parameter \(m^{*}\) as a function of \(\frac{\alpha}{\sigma^{2}}\) within the range \((-52,52)\). The curve exhibits critical points \(m^{*}\approx 0.285665\) and \(m^{*}\approx 0.714335\), corresponding to the two roots of the polynomial
\[64-192m-391m^{2}+1102m^{3}-391m^{4}-192m^{5}+64m^{6},\]
which is the discriminant of (4.4) with respect to \(\frac{\alpha}{\sigma^{2}}\). These critical points reside in the interval \((0,1)\).
Additionally, we will rely on results regarding the real roots of a real quintic polynomial, following the notation and methodology in [9]. To summarize, a real quintic polynomial
\[p(x)=a_{0}x^{5}+5a_{1}x^{4}+10a_{2}x^{3}+10a_{3}x^{2}+5a_{4}x+a_{5},\]
\(a_{k}\in\mathbb{R}\), can be transformed into a monic depressed quintic polynomial
\[p_{0}(x)=x^{5}+10A_{2}x^{3}+10A_{3}x^{2}+5A_{4}x+A_{5},\]

using the change of variables \(x\mapsto\frac{x-a_{1}}{a_{0}}\). Let

\[\Delta_{5}=\operatorname{disc}(p_{0}),\]
\[P_{5}=4A_{2}^{3}+A_{3}^{2},\]
\[D_{5}=A_{5}^{2}+16A_{2}A_{4}^{2}-76A_{2}A_{3}A_{5}-(272A_{2}^{3}-108A_{3}^{2})A_{4}+24A_{2}^{2}(40A_{2}^{3}+27A_{3}^{2}).\]

Suppose \(\Delta_{5},P_{5},D_{5}\neq 0\). Similar to quartic polynomials, \(p_{0}\) and, hence, \(p\) have (see the sketch after this list):

* five real roots if \(\Delta_{5}>0\) and \(P_{5}<0\), \(D_{5}<0\),
* three real roots if \(\Delta_{5}<0\),
* one real root if \(\Delta_{5}>0\) and either \(P_{5}>0\) or \(D_{5}>0\).

For simplicity, we omit discussion of various higher co-dimension possibilities, where one or more of \(\Delta_{5},P_{5},D_{5}\) vanishes. It is worth noting that these quantities can be expressed entirely in terms of the Floquet discriminant for (4.1), but the resulting analytical expressions are quite extensive, and we do not reproduce them here.
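A self-contained sketch of this classification (the depressed coefficients are obtained numerically by polynomial composition, and \(\Delta_{5}\) is evaluated from the roots purely for illustration):

```python
import numpy as np

def classify_quintic(a):
    """Number of real roots of p(x) = a0 x^5 + 5 a1 x^4 + 10 a2 x^3
    + 10 a3 x^2 + 5 a4 x + a5, from the signs of Delta5, P5, D5
    (nondegenerate cases only)."""
    a0, a1 = a[0], a[1]
    p = np.poly1d([a[0], 5 * a[1], 10 * a[2], 10 * a[3], 5 * a[4], a[5]])
    # Depress: q(y) = a0^4 p((y - a1)/a0) = y^5 + 10 A2 y^3 + 10 A3 y^2 + 5 A4 y + A5.
    q = a0**4 * p(np.poly1d([1 / a0, -a1 / a0]))
    A2, A3, A4, A5 = q.c[2] / 10, q.c[3] / 10, q.c[4] / 5, q.c[5]
    r = q.roots
    Delta5 = np.prod([(r[i] - r[j])**2
                      for i in range(5) for j in range(i)]).real
    P5 = 4 * A2**3 + A3**2
    D5 = (A5**2 + 16 * A2 * A4**2 - 76 * A2 * A3 * A5
          - (272 * A2**3 - 108 * A3**2) * A4
          + 24 * A2**2 * (40 * A2**3 + 27 * A3**2))
    if Delta5 > 0 and P5 < 0 and D5 < 0:
        return 5
    return 3 if Delta5 < 0 else 1

for coeffs, label in [([1, 0, -0.5, 0, 0.8, 0], "x^5 - 5x^3 + 4x"),
                      ([1, 0, 0, 0, -0.2, 0], "x^5 - x"),
                      ([1, 0, 0.1, 0, 0.2, 0], "x^5 + x^3 + x")]:
    print(label, "->", classify_quintic(coeffs), "real roots")  # 5, 3, 1
```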
### Numerical experiments
We begin our numerical experiment by setting \(\alpha=0\) and focusing on the periodic standing wave solutions in (4.2). Note that neither the scaling invariance transformation \(u(x,t)\mapsto\sigma^{4}u(\sigma x,\sigma^{5}t)\) nor the Galilean invariance transformation \(u(x,t)\mapsto c+u(x-ct,t)\) affects the stability or instability of the underlying wave. Consequently, the results obtained in our numerics apply to the entire family of traveling waves under consideration.
For our numerical computation, we have chosen \(\sigma=\frac{1}{4}\). From general modulation theoretic considerations, we expect that at \(0\in\mathbb{C}\), the associated monodromy matrix should have an eigenvalue \(\mu=1\) with an algebraic multiplicity three. Our numerical findings indeed confirm this; we note that the spectral problem is numerically stiff. Specifically, at \(\lambda=0\), the eigenvalues of the monodromy matrix include \(\mu=1\) with a multiplicity three, \(\mu\approx 28284.5\) and \(\mu\approx 3.536\times 10^{-5}\).
In Figure 16, the blue curves represent the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearized operator of (4.1), without third-order dispersion, about (4.2) for \(\sigma=\frac{1}{4}\). On the left, the spectrum is observed to be entirely aligned with the imaginary axis, and all numerically computed eigenvalues have a maximum real part approximately \(3.4\times 10^{-10}\). The magenta lines, running parallel to the imaginary axis, indicate intervals where the spectrum has an algebraic multiplicity three. This interval is approximately \((-0.015i,0.015i)\). To validate the determination of the region with multiplicity three, we compared it with the results obtained from the spectral computation. As the Floquet exponent \(\mu\) varies over a range, it becomes evident that the interval \((-0.015i,0.015i)\) is triply covered. The right panel shows the imaginary parts of the triple covering \(\lambda_{-1}\), \(\lambda_{0}\), and \(\lambda_{1}\) as functions of \(\mu\). The independently computed region of multiplicity three, utilizing the Floquet discriminant, is in magenta, and the two are in excellent agreement.

Figure 16: (Left) The numerically computed spectrum of the linearization of (4.1) about (4.2) for \(\alpha=0\) and \(\sigma=\frac{1}{4}\). (Right) A close-up near the interval of multiplicity three, alongside the graphs of the triple covering \(\lambda_{-1}\), \(\lambda_{0}\) and \(\lambda_{1}\).
Let's examine the spectral problem for (4.1) and the periodic traveling wave solution in (4.3) for \(\alpha=-2\) and \(\sigma=\frac{1}{4}\). This gives an elliptic parameter \(m\approx 0.6185\). It is important to note that for this particular sign of \(\alpha\), the two kinetic energy terms in the Hamiltonian, \(\int u_{x}^{2}\ dx\) and \(\int u_{xx}^{2}\ dx\), exhibit opposite signs. Based on physical intuition, this kind of competition between the two terms might lead to instabilities. Our numerical results indeed support this intuition.
In the left panel of Figure 17, the blue curves represent the numerically computed \(L^{2}(\mathbb{R})\) essential spectrum of the linearization of (4.1) about (4.3) for \(\alpha=-2\) and \(\sigma=\frac{1}{4}\). We numerically observe that the spectrum bifurcates from the imaginary axis at the origin and at six other points along the imaginary axis. Notably, the dashed red curves intersect the imaginary axis precisely at these bifurcation points. Within an interval along the imaginary axis around \(0\in\mathbb{C}\), excluding the origin itself, the algebraic multiplicity is one. On the other hand, there exist two short intervals, in magenta, on the imaginary axis with a multiplicity three, away from \(0\in\mathbb{C}\).
On the right, the blue curves show the imaginary parts of three purely imaginary eigenvalues as functions of the Floquet exponent \(\mu\), accompanied by subintervals on the imaginary axis where the multiplicity is three, in magenta. It is evident that the two are in very close agreement. It is worth emphasizing that these graphs were computed using different computer codes. The magenta intervals were determined using an ODE solver along the imaginary axis to compute the Floquet discriminant, while the blue curves were obtained through spectral decomposition together with an eigenvalue solver.
### Spectrum along the imaginary axis towards \(\pm i\infty\)
Figure 17: (Left) The numerically computed spectrum of the linearization of (4.1) about (4.3) for \(\alpha=-2\) and \(\sigma=\frac{1}{4}\). (Middle) A close-up of the bifurcation index near \(0\in\mathbb{C}\). (Right) The graphs of triple covering \(\lambda_{0}\) and \(\lambda_{\pm 1}\) as functions of \(\mu\). The magenta lines are the intervals in \(i\mathbb{R}\) of multiplicity three.

Similar to the case of the generalized KdV equation, Theorem 6 applies to the Kawahara equation. Specifically, for \(\lambda=i\nu^{5}\) and \(\nu\gg 1\), the discriminant of the characteristic polynomial is approximated as
\[\Delta_{5}(i\nu^{5})\sim 4096\left(\cos\!\left(\tfrac{1}{2}\sqrt{5}\nu T\right)-\cosh\! \left(\tfrac{1}{2}\sqrt{5-2\sqrt{5}}\nu T\right)\right)^{2}\] \[\times\left(\cos\!\left(\tfrac{1}{4}(5+\sqrt{5})\nu T\right)- \cosh\!\left(\tfrac{1}{2}\sqrt{\tfrac{1}{2}(5-\sqrt{5})}\nu T\right)\right)^{2}\] \[\times\left(\cos\!\left(\tfrac{1}{4}(\sqrt{5}-5)\nu T\right)- \cosh\!\left(\tfrac{1}{2}\sqrt{\tfrac{1}{2}(5+\sqrt{5})}\nu T\right)\right)^{2}\] \[\times\left(\cos\!\left(\tfrac{1}{2}\sqrt{5}\nu T\right)-\cosh\! \left(\tfrac{1}{2}\sqrt{5+2\sqrt{5}}\nu T\right)\right)^{2}\] \[\times\sinh^{2}\left(\tfrac{1}{2}\sqrt{\tfrac{1}{2}(5-\sqrt{5})} \nu T\right)\sinh^{2}\left(\tfrac{1}{2}\sqrt{\tfrac{1}{2}(5+\sqrt{5})}\nu T \right).\]
While this expression may appear somewhat intricate, it is real and positive for all real values of \(\nu\) for all \(T>0\). Subsequently, (1.26) provides the formula for the bifurcation index. Our verification confirms, as with the generalized KdV equation, that only a finite number of isolas bifurcate away from the imaginary axis for the Kawahara equation.
|
2304.13541 | D-STACK: High Throughput DNN Inference by Effective Multiplexing and
Spatio-Temporal Scheduling of GPUs | Hardware accelerators such as GPUs are required for real-time, low-latency
inference with Deep Neural Networks (DNN). However, due to the inherent limits
to the parallelism they can exploit, DNNs often under-utilize the capacity of
today's high-end accelerators. Although spatial multiplexing of the GPU, leads
to higher GPU utilization and higher inference throughput, there remain a
number of challenges. Finding the GPU percentage for right-sizing the GPU for
each DNN through profiling, determining an optimal batching of requests to
balance throughput improvement while meeting application-specific deadlines and
service level objectives (SLOs), and maximizing throughput by appropriately
scheduling DNNs are still significant challenges. This paper introduces a
dynamic and fair spatio-temporal scheduler (D-STACK) that enables multiple DNNs
to run in the GPU concurrently. To help allocate the appropriate GPU percentage
(we call it the "Knee"), we develop and validate a model that estimates the
parallelism each DNN can utilize. We also develop a lightweight optimization
formulation to find an efficient batch size for each DNN operating with
D-STACK. We bring together our optimizations and our spatio-temporal scheduler
to provide a holistic inference framework. We demonstrate its ability to
provide high throughput while meeting application SLOs. We compare D-STACK with
an ideal scheduler that can allocate the right GPU percentage for every DNN
kernel. D-STACK gets higher than 90 percent throughput and GPU utilization
compared to the ideal scheduler. We also compare D-STACK with other GPU
multiplexing and scheduling methods (e.g., NVIDIA Triton, Clipper, Nexus),
using popular DNN models. Our controlled experiments with multiplexing several
popular DNN models achieve up to 1.6X improvement in GPU utilization and up to
4X improvement in inference throughput. | Aditya Dhakal, Sameer G. Kulkarni, K. K. Ramakrishnan | 2023-03-31T15:29:44Z | http://arxiv.org/abs/2304.13541v1 | D-STACK: High Throughput DNN Inference by Effective Multiplexing and Spatio-Temporal Scheduling of GPUs
###### Abstract.
Hardware accelerators such as GPUs are required for real-time, low latency inference with Deep Neural Networks (DNN). However, due to the inherent limits to the parallelism they can exploit, DNNs often under-utilize the capacity of today's high-end accelerators. Although spatial multiplexing of the GPU, while limiting the GPU resources (GPU%) to each DNN to the right amount, leads to higher GPU utilization and higher inference throughput, there remain a number of challenges. Finding the GPU% for right-sizing the GPU for each DNN through profiling, determining an optimal batching of requests to balance throughput improvement while meeting application-specific deadlines and service level objectives (SLOs), and maximizing throughput by appropriately scheduling DNNs are still significant challenges.
This paper introduces a dynamic and fair spatio-temporal scheduler (D-STACK) that enables multiple DNNs to run in the GPU concurrently. To help allocate the appropriate GPU% (we call it the "Knee"), we develop and validate a model that estimates the parallelism each DNN can utilize. We also develop a lightweight optimization formulation to find an efficient batch size for each DNN operating with D-STACK. We bring together our optimizations and our spatio-temporal scheduler to provide a holistic inference framework. We demonstrate its ability to provide high throughput while meeting application SLOs. We compare D-STACK with an ideal scheduler that can allocate the right GPU% for every DNN kernel. D-STACK gets higher than 90% throughput and GPU utilization compared to the ideal scheduler. We also compare D-STACK with other GPU multiplexing and scheduling methods (e.g., NVIDIA Triton, Clipper, Nexus), using popular DNN models. Our controlled experiments with multiplexing several popular DNN models achieve up to 1.6\(\times\) improvement in GPU utilization and up to 4\(\times\) improvement in inference throughput.
## 1. Introduction
Deep Neural Networks (DNNs) are widely used for many applications, including image recognition, natural language processing, _etc._ Accelerators have become indispensable for DNN learning and inference. Accelerators such as GPUs, TensorFlow Cores (Krizhevsky et al., 2014), and TPU (Krizhevsky et al., 2014) reduce the DNN inference times, often by 2-3 orders of magnitude compared to even using a high-end CPU cluster. These accelerators are widely used by cloud services as a part of their _inference-as-a-service_ (IaaS) offerings, where trained DNN models are hosted in a Cloud or an Edge Cloud (especially for low-latency operation). User requests are inferred using the GPUs deployed in the cloud.
Most DNN models running in inference frameworks (PyTorch (LeCun et al., 2015), TensorFlow Serving (Abadi et al., 2016), NVIDIA's Triton (Abadi et al., 2016), _etc._) often execute far fewer floating-point operations per second (FLOPS) than the capacity of these high-end GPUs (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014), TPUs (Peters et al., 2015) and other accelerators (Krizhevsky et al., 2014). We observed that DNN models, when performing inference even using a single GPU, do not significantly reduce the DNN's processing latency when provided with additional GPU resources (_i.e.,_ number of Streaming Multiprocessors (SMs) - GPU compute units analogous to CPU cores) beyond a certain point. We call this point the **"Knee"** for the DNN (expressed as a percentage of the total SMs available in the GPU, _e.g.,_ 50% of a V100 GPU (which has 80 SMs in total) is 40 SMs). Running applications with resources matching the Knee is desirable for a cloud operator providing Inference as a Service, since multiplexing a GPU (or similar accelerator) across as many applications as possible keeps costs low. Operating at the Knee also keeps the latency low for the user. When more GPU resources are provided for a DNN (e.g., by giving the full GPU to an application, possibly using temporal sharing), it is wasteful as the GPU is not fully utilized.
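For intuition only, a Knee of this kind can be read off a latency-versus-GPU% profile by thresholding the marginal latency improvement; the profile values and the 5% threshold in this sketch are hypothetical, not measurements from this work:

```python
# Hypothetical profile: median inference latency (ms) at each GPU% allocation.
profile = {10: 42.0, 20: 23.0, 30: 16.0, 40: 12.5, 50: 12.0,
           60: 11.8, 80: 11.6, 100: 11.5}

def find_knee(profile, min_gain=0.05):
    """Smallest GPU% beyond which extra SMs improve latency by less than min_gain."""
    pts = sorted(profile.items())
    for (g0, l0), (g1, l1) in zip(pts, pts[1:]):
        if (l0 - l1) / l0 < min_gain:
            return g0
    return pts[-1][0]

print(find_knee(profile))  # 40: past ~40% of the SMs, latency barely improves
```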
We see two fundamental reasons for this under-utilization of multi-core accelerators such as GPUs by DNNs when given more than the Knee's resources: i) the amount of parallelism over the entirety of a DNN's execution is not uniform, _i.e.,_ many DNN functions (_e.g.,_ convolution, ReLU, _etc._) are unable to fully utilize the parallelism offered by the accelerator; ii) DNN operations also involve other overheads (_e.g.,_ kernel launches, memory read-write, _etc._). We study the execution of a variety of DNN models to understand the root causes of under-utilization of such accelerators, particularly GPUs, and develop methods to improve the overall system utilization, thus improving throughput and reducing inference latency.
_Multiplexing GPUs in the Edge Cloud_:
DNN inference requests for applications such as autonomous driving, augmented reality, _etc._, have stringent deadlines (_e.g.,_ \(<\)100ms). A cloud providing IaaS also has to account for the network latency. Edge Clouds offer a sweet spot, reducing latency while offering the necessary processing resources, although more constrained than centralized cloud services. Multiplexing the expensive hardware accelerator is therefore very desirable. Current GPU virtualization and inference service frameworks such as Nexus (Nevas, 2018), NVIDIA's Triton Inference Server (Triton) (Titon, 2018), gPipe (Pipe, 2019), and PipeDream (Pipe, 2020) either use a 'single GPU per DNN' model or time-share the GPU across multiple DNN models. These current state-of-the-art frameworks for DNNs allocate the full GPU (_i.e.,_ 100% of the GPU) for the time quantum, as shown in Fig. 1 (left). However, dedicating an entire GPU to run a single DNN model at a time can be wasteful. Furthermore, interleaving the execution of tenant applications by temporal sharing increases inference latency for all of them, because of the significant cost of frequent switching between applications. Multiplexing several applications on the GPU to run concurrently, through spatial as well as temporal multiplexing, helps to better utilize the GPU and achieve much higher aggregate inference throughput.
Our approach utilizes the CUDA Multi-Process Service (MPS) (Srivastava et al., 2017) to spatially share the GPU across several applications, similar to GSLICE.
As a motivating experiment, we consider four DNN models being multiplexed on one V100 GPU, each concurrently inferring 10,000 images. The results in Table 1 show that the Triton server takes about 58 seconds to finish inference. The D-STACK scheduler completes inference on all requests more than 37% faster (only **36** seconds). D-STACK's spatial multiplexing, providing just the right amount of GPU%, and its dynamic spatio-temporal scheduling result in more effective use of the GPU, achieving higher DNN inference throughput than NVIDIA's Triton server while also lowering task completion time. Based on these experiments, we see that spatio-temporal scheduling can further enhance throughput when inferring with multiple different models concurrently.
_Contributions_: D-STACK improves GPU utilization by 60% and increases DNN inference throughput by 4\(\times\) compared to a pure temporal scheduler, while still avoiding any deadline (SLO) violations. Our key contributions are:
* We investigate the extent to which a DNN can exploit parallelism (§3), and devise an analytical model to demonstrate this limitation of typical DNNs when performing inference with GPUs (§4).
* We develop a spatio-temporal scheduler for DNNs, using the GPU% and batch size derived from our analytical models, to maximize inference throughput while allocating GPU resources fairly (§6).
* We develop an optimization framework to determine the optimal DNN batch size and GPU%, and evaluate the efficacy of GPU usage when choosing the optimal batch size and Knee GPU% (§5).
* We compare D-STACK's approach with the Triton server and other state-of-the-art scheduling algorithms.
## 2. Related Work
**GPU Multiplexing**: Multiplexing the GPU to increase GPU utilization and system throughput has been discussed in many studies. Proprietary products such as Nutanix [45] and vGPU [47] utilize GPU virtualization to multiplex the GPU across VMs. Many consider temporal multiplexing and seek increased GPU utilization through batching and better scheduling [7; 13; 20; 21; 22; 52; 61]. Gandiva [59] and Mystic [56] address multiplexing the GPU, observing but not solving the interference caused when multiplexing DNNs on the GPU. Unlike these, our work can concurrently run multiple applications on the GPU, improve GPU utilization, _and_ reduce or eliminate the interference through controlled spatial multiplexing.
**Spatial Multiplexing of GPU:** GSLICE [16] utilizes CUDA MPS to spatially share the GPU among multiple DNN applications. However, it partitions the GPU statically and does not schedule the execution of DNNs. With GSLICE, executing a large number of models potentially causes each model to get a small GPU slice (less than the Knee), leading to higher inference latency and lower throughput. Moreover, the lack of a scheduler means it is insufficient for deadline-driven inference scenarios. We compare D-STACK with GSLICE in §7.
Laius [67], G-Net [64], Gost [70] and Baymax [11] spatially multiplex GPU kernels. Unlike these works, our platform focuses on spatially multiplexing entire DNNs consisting of multiple kernels. Moreover, we run DNN applications in their native DNN framework (e.g., PyTorch, TensorFlow) without any algorithmic modifications, unlike the whitebox approach of Laius and Baymax. S3DNN [69] (using Streams), Prophet [10] (using MPS) and CuMAS [8] profile each kernel and use a shim to capture kernel launches and reorder kernel executions for proper spatial sharing. In contrast, our approach does not require a shim or reordering of kernels and works in a black-box manner, without requiring an application's individual kernel profile (which may not be available).
**DNN's limits on Utilizing GPUs:** Several works [62; 30; 63] have discussed the under-utilization of GPUs by DNNs and have proposed algorithmic optimizations that make DNN kernel computation more efficient [12; 17; 32; 54]. These solutions require whitebox models that can be changed. There have been works analyzing how DNNs exploit parallelism: [28; 29] show that DNNs attain a much smaller number of FLOPS than what a GPU can provide. Poise [18] and [34] show that the high data-load latency from the GPU memory to the processing unit is also a reason for the limit in parallelism. [39] creates an analytical model to predict the inference latency and mainly utilizes a temporal queuing solution to meet deadlines. [39]'s model uses default MPS and, because interference increases latency, limits the number of models spatially sharing the GPU at a time. In contrast, D-STACK provides fine-grained spatial and temporal control of the GPU's resources and is thus able to run far more models with larger batch sizes without interference. With a spatio-temporal scheduler, D-STACK utilizes resources both spatially and temporally to meet the inference deadline. [27] shows that a lack of CPU and GPU spatial resources will greatly slow down GPU execution. Our work complements [27] by demonstrating a method to find the Knee beyond which applications fail to utilize the GPU efficiently. We utilize the understanding from these related works to create an analytical DNN model that helps derive the Knee% necessary for inference without slowdowns. Furthermore, we evaluate our methods in a real system.
**Multi-Instance GPUs (MIGs)** such as the NVIDIA A100 are hardware-based approaches for coarser-grained spatial multiplexing. MIGs allow static partitioning of a GPU into multiple smaller GPU instances (up to 7 instances with the A100). However, MIGs require the GPU to be reset or VMs to be restarted to change the resource allocation. This causes significant downtime, as all the processing using the GPU has to also be restarted.
| | Triton Server | D-STACK | Latency Reduction (%) |
| --- | --- | --- | --- |
| Task completion (sec.) | 58.61 | 35.59 | 37% |

Table 1. Triton and D-STACK with 4 DNN models
D-STACK's spatio-temporal scheduling avoids the GPU reset and quickly allocates the desired GPU resources. Moreover, note that MIG GPUs are also able to run as a single GPU (similar to the V100). Thus, they can benefit from D-STACK without any modification.
## 3. Understanding DNN Parallelism through Measurement
**Experimental Setup and Testbed**: We used a Dell server with an Intel(R) Xeon(R) Gold 6148 CPU with 20 cores, 256 GB of system memory, one NVIDIA V100 GPU, and an Intel X710 10GbE NIC as our testbed. The V100 has 80 SMs and 16 GB of memory. Our workload for the vision-based DNNs (Alexnet (36), Mobilenet (25), ResNets (24), VGG (53), Inception (55), ResNeXt (60)) consists of color images of resolution 224\(\times\)224. This resolution choice is inspired by initial work (5; 37; 53). For BERT (15), a natural language processing DNN, we utilize sentences of 10 words.
We use OpenNetVM (68) to host our framework that runs multiple DNN models for inference. We use Moongen (19) to transmit \(\sim\)1920 images/sec. on a 10Gbps Ethernet link. Our platform can batch input data to the desired batch size. We primarily report the execution time for inference in the GPU for all our experiments and do not consider the additional latency contributed by network protocols. Therefore, our results are independent of the network transport protocol used. We utilize the CUDA Multi-Process Service (MPS) to spatially multiplex the GPU. We use the CUDA_MPS_ACTIVE_THREAD_PERCENTAGE environment variable to set the GPU%. Once set, the GPU% cannot be changed for a process.
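As a concrete illustration of the last point, the sketch below shows how a worker process can be pinned to a given GPU% at launch; `inference_worker.py` and its flags are hypothetical placeholders for the actual serving process:

```python
import os
import subprocess

def launch_inference_worker(model_name, gpu_percent):
    """Launch a DNN inference worker pinned to `gpu_percent` of the GPU's SMs.

    CUDA MPS reads CUDA_MPS_ACTIVE_THREAD_PERCENTAGE once at process start,
    so changing the GPU% later requires spinning up a new process.
    """
    env = os.environ.copy()
    env["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"] = str(gpu_percent)
    return subprocess.Popen(
        ["python", "inference_worker.py", "--model", model_name],
        env=env,
    )

# e.g., run Mobilenet at roughly its Knee on a V100
proc = launch_inference_worker("mobilenet", 20)
```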
### Measurement with ML Models
We now present measurements performed on our testbed with multiple DNNs to demonstrate the limits in the parallelism of those DNN models. We measured the latency for inferring a batch of 16 images/sentences at different GPU% for several popular DNN models using the PyTorch framework. We utilize models with different compute requirements.
From Fig. 2, we see that the inference latency remains unchanged above 30-50% of the GPU for most models (the Knee point). With a smaller batch size, the Knee% is lower (20%-35%). However, we also observe that using fewer than the necessary SMs (low GPU%) leads to an exponential increase in model latency (also observed in (27)). We observed a similar knee with other GPUs as well. We evaluated computationally light models, Alexnet (A-P100 and A-T4) and Squeezenet (Sq-P100 and Sq-T4), on both the P100 and T4 GPUs. The T4 GPU supports CSS, but the P100 only supports default MPS. We present their results in Fig. 3. Even with different GPUs, we see the knee behavior in Alexnet and Squeezenet. Only the computationally dense ResNet-50 (R-P100 and R-T4) does not show an obvious knee. Both the P100 and T4 GPUs have lower computational capacity than the V100; therefore, ResNet-50 can fully utilize those GPUs. Since the knee exists on these other GPUs as well, our platform can be used more generally beyond the V100.
### Dynamic GPU Resource Reconfiguration
Due to a limitation of CUDA MPS (48), any GPU resource readjustment requires us to spin up a new CPU process with an updated GPU%. This results in several seconds of downtime (depending on the ML framework initialization). We utilize the overlapped execution approach of GSLICE (16), which maintains an _active-standby_ pair of processes: the active process keeps processing incoming requests while a standby process loads the DNN model into the GPU with the updated GPU%. The standby takes over inference when ready, thus avoiding downtime.
While changing the GPU%, two instances of the same model, the original and the new one, occupy the GPU during the brief overlap time. This increases the GPU memory demand. We overcome this drawback through the DNN parameter sharing utilized in GSLICE (16). We use cudaIPC to share the weights and parameters loaded by the original model with the newly loading model, thus removing the need to load the weights again. Parameter sharing reduces the memory required by the newly loaded DNN model by up to 40%.
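A minimal sketch of the swap, assuming the `launch_inference_worker` helper sketched in §3 and hypothetical `wait_until_ready` and `hand_over_traffic` helpers for readiness polling and request re-routing:

```python
def reconfigure_gpu_percent(active_proc, model_name, new_pct):
    """Change a model's GPU% without downtime via an active-standby swap.

    CUDA MPS fixes the GPU% at process start, so a standby process is
    launched at the new GPU% while the active one keeps serving requests.
    """
    standby = launch_inference_worker(model_name, new_pct)
    wait_until_ready(standby)    # hypothetical: poll until the model is loaded
    hand_over_traffic(standby)   # hypothetical: re-route incoming requests
    active_proc.terminate()      # retire the old process
    return standby
```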
### Loading models without known Knee%
When a model that has not been profiled, and whose knee is therefore unknown, is started, our platform initially provides it a nominal 30% of the GPU. The GPU% is then readjusted using dynamic GPU resource reconfiguration to find the knee, based on the inference latency, using a simple binary search.
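A simplified sketch of this search, assuming a `measure_latency(gpu_pct)` helper that reconfigures the model (via the active-standby mechanism above) and returns its steady-state inference latency:

```python
def find_knee(measure_latency, lo=10, hi=100, tol=0.05):
    """Binary-search the smallest GPU% whose latency is within `tol`
    (e.g., 5%) of the latency observed with the full GPU."""
    target = measure_latency(100)   # latency floor at 100% GPU
    while hi - lo > 5:              # search at 5-GPU% granularity
        mid = (lo + hi) // 2
        if measure_latency(mid) <= target * (1 + tol):
            hi = mid                # mid is enough GPU; try smaller
        else:
            lo = mid                # too slow; need more GPU
    return hi
```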
## 4. Modeling DNN parallelism
### Compute Bound vs. Memory Bound Workloads
The latency of accessing parameters and weights of a DNN layer from the GPU DRAM can be significant. Many studies (65) have suggested that memory-bound DNN kernels may have a small amount of compute and are likely to be limited by GPU memory bandwidth.
Figure 2. V100 latency vs. GPU% (Batch = 16)
Figure 3. P100 and T4 GPU profiles
NVIDIA has proposed an _arithmetic intensity_ (A.int) metric (Srivastava et al., 2017) to estimate whether a kernel is memory- or compute-bound. The A.int of a kernel is computed as the ratio of the floating-point operations it performs to the memory (bytes) it fetches, _i.e.,_ \(A.int=\frac{operations}{bytes}\). NVIDIA reports the arithmetic intensity of the V100 GPU (in our testbed) as 139.8 _FLOPS/Byte_ (Srivastava et al., 2017). Any kernel with a lower A.int than the GPU's is memory-bound, while a kernel with a higher A.int is compute-bound.
We analyzed the most frequently occurring kernels of the CNNs Alexnet (LeCun et al., 2015), ResNet-50 (He et al., 2015), and VGG-19 (Vaswani et al., 2017), and an RNN, GNMT (Srivastava et al., 2017), to illustrate the behavior of compute- and memory-bound DNNs. We present the results in Table 2. Most convolution layers exceed the GPU's A.int and thus are compute-bound. These layers can reduce their runtimes if more compute is available. However, kernels like the LSTM in GNMT, which operate with large input and output features (1024 features in GNMT), require a lot of data but perform relatively little computation compared to convolution. Therefore, they score a very low A.int. We should note that DNNs are not entirely constructed of convolution or LSTM layers. However, CNNs, in general, have more convolution kernels.
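The classification rule is simple to apply; a small sketch using the V100's 139.8 FLOPS/Byte threshold and the VGG-19 and GNMT rows of Table 2:

```python
V100_AINT = 139.8  # FLOPS/Byte, arithmetic intensity of the V100

def kernel_bound(flops, bytes_moved):
    """Classify a kernel as compute- or memory-bound by arithmetic intensity."""
    a_int = flops / bytes_moved
    return "compute-bound" if a_int > V100_AINT else "memory-bound"

print(kernel_bound(3.7e9, 9.44e6))    # VGG-19 Conv.11: ~392 -> compute-bound
print(kernel_bound(0.016e9, 8.38e6))  # GNMT LSTM:      ~2   -> memory-bound
```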
### Memory Contention While Multiplexing
Studies (Srivastava et al., 2017; He et al., 2015) of scientific computation workloads have shown that the GPU cache size and occupancy are important factors influencing the latency of kernel execution. We also examine the effect of cache contention while running multiple DNN models. However, we observe with DNNs that the inference latency does not vary significantly _if_ SM isolation is maintained. Since we indeed maintain SM isolation with spatial multiplexing using CSS, the impact of contention in the GPU cache or other memory resources is minimal. We present the \(99^{th}\)-percentile inference latency (batch = 16) of DNN models running in isolation (Fig. 2) versus the same models multiplexed at their knee GPU% with 4 other models in Table 3. Inference latency varies by less than 3%, confirming this minimal impact. Thus, we do not utilize a separate variable for the delay caused by the GPU cache. Instead, in the model of a DNN that we discuss in the next subsection, we consider all the memory-related delays as a single variable.
### Modeling DNNs
We now construct an analytical DNN model that exhibits the characteristics of most actual DNN models in terms of the variation in the compute workload across their different kernels. We model the DNN as composed of multiple sequential _kernels_ executing on the GPU (and other accelerators), instead of the _layers_ often used in other ML studies. We have observed using NVPROF profiling that each layer (_e.g.,_ a convolution layer) is often implemented as a combination of multiple kernels on the GPU; thus, we use the kernel as the basic component of DNN execution in this model. The model guides the determination of the best operating point (Knee) GPU% for a DNN. In our model, we break down the DNN workload into parallelizable operations (compute tasks), memory reads/writes, and serialized (non-parallelizable) operations, and observe the effect of changing GPU resources. While our model is simple, it captures the system-level overheads that contribute to DNN latency and provides a good approximation of the Knee of each model. The simplicity of the model further aids in evaluating DNNs on different GPUs, with different numbers of SMs, as well as on other accelerator hardware.
Selected notation used in the analysis is shown in Table 4. As in typical GPUs, each of the \(S\) SMs allocated to a DNN will process one parallel operation per \(\mathbf{t_{p}}\) time. From a modeling perspective, we order the kernels by their amount of computation without loss of generality. DNNs have an arbitrary order of kernel execution; however, the knee of the model depends on the peak computation requirements of the kernels rather than their execution order.
We set the first kernel \(\mathbf{K_{1}}\) as that with the greatest amount of parallelizable operations \(\mathbf{N_{1}}\), which is selected as \(N_{1}=\mathbf{p}\) for modeling purposes. For subsequent kernels, the workload decreases by a fixed amount, so that \(\mathbf{N_{i}}>\mathbf{N_{i+1}}\). Eq. 1 specifies the amount of parallelizable operations for each kernel in the DNN. We decrease the amount of parallelizable tasks by a fixed amount, \(\frac{p\times b}{Kmax}\).
| Variable | Description |
| --- | --- |
| \(b\) | Batch size |
| \(p\) | 1st kernel's number of concurrent ops. (tasks) |
| \(Kmax\) | Maximum number of kernels |
| \(K_{i}\) | \(i^{th}\) kernel |
| \(N_{i}\) | Number of parallelizable operations for \(K_{i}\) |
| \(R_{i}\) | Number of repetitions of \(K_{i}\) in the DNN |
| \(M\) | Memory bandwidth per SM |
| \(d_{i}\) | Data for the \(i^{th}\) kernel (parameters & input) |
| \(S\) | Number of allocated SMs |

Table 4. Table of Notations for DNN Model
| Model | Layer | GFLOPs | Bytes (\(10^{6}\)) | Arit. Int. | Limit |
| --- | --- | --- | --- | --- | --- |
| Alexnet | Conv.2 | 0.30 | 0.22 | 182 | Compute |
| ResNet50 | Conv.2 | 0.103 | 0.121 | 393 | Compute |
| VGG-19 | Conv.11 | 3.7 | 9.44 | 391 | Compute |
| GNMT | LSTM | 0.016 | 8.38 | 2 | Memory |

Table 2. Compute & memory bound kernels
| Model | Knee% | Isolation | Multiplexed |
| --- | --- | --- | --- |
| Mobilenet | 20% | 9.8 (ms) | 9.9 |
| ResNet-18 | 30% | 12.4 | 12.4 |
| BERT | 30% | 9.3 | 9.3 |
| ResNet-50 | 40% | 28.9 | 28.5 |
| VGG-19 | 50% | 51.2 | 52.4 |

Table 3. Latency (ms) in isolation and multiplexed
\[N_{i}=\begin{cases}p\times b,&i=1\\ \left\lfloor N_{i-1}-\frac{p\times b}{Kmax}\right\rfloor,&i\geq 2\end{cases} \tag{1}\]
for each subsequent kernel. The number of concurrent operations decreases and reaches \(\sim 0\) for the last (\(K_{max}\)) kernel. Correspondingly, we define the total execution time for each kernel's parallelizable tasks as \(\mathbf{W_{i}}=N_{i}\times t_{p}\).
Note: Ideally, \(W_{i}\) can potentially be completed in \(t_{p}\) units of time when we allocate greater than or equal to \(N_{i}\) SMs to execute \(W_{i}\). If we consider that the GPU hardware is able to provide \(S\) SMs to execute \(K_{i}\), then, without loss of generality, we can show that the time taken to finish processing the kernel depends on the minimum of the inherent parallelism, as defined by \(N_{i}\), and the number of SMs allocated for executing the operation. Thus, the execution time for the parallelizable operations of each kernel of the DNN can be computed using Eq. 2.
\[E_{i}=\frac{W_{i}}{max(1,min(S,N_{i}))} \tag{2}\]
Individual kernels in the DNN often run repeatedly during a DNN inference. We define the number of repetitions of kernel \(K_{i}\) as \(\mathbf{R_{i}}\). We then factor in the time taken to run all the serialized operations, including kernel starting and the kernel waiting for data. The kernel starting time is considered a constant, \(\mathbf{t_{np}}\), per kernel. The kernel's time waiting for data, however, depends on the kernel's input and parameters. Each kernel of a DNN has a certain amount of data (model parameters, input data) that has to be fetched from GPU DRAM (the main/global memory of the GPU) to the CUDA cores in the SMs. We have observed that the total global memory read/write bandwidth increases in proportion to the number of SMs allocated. Other studies [43; 66] also point to a proportional increase. We define the latency per kernel, caused by the kernel waiting for parameters, input, and other data to be loaded, as Eq. 3. Thus, we can define the total time of non-parallelizable (sequential) operations \(\mathbf{W_{se}}\) as Eq. 4. We use Eqs. 2 and 4 to compute the DNN execution time \(\mathbf{E_{t}}\) as in Eq. 5.
\[E_{m}=\frac{d_{i}\times S}{M} \tag{3}\]
\[W_{se}=b\times\sum_{i=1}^{K_{max}}R_{i}\times\left(t_{np}+E_{m}\right) \tag{4}\]
\[E_{t}=W_{se}+\sum_{i=1}^{K_{max}}R_{i}E_{i} \tag{5}\]
We now simulate the total time to execute a DNN under varying conditions, _i.e._, by varying the amount of parallelizable and non-parallelizable operations at each kernel and the number of SMs in the GPU. As in typical GPUs, we assume the number of SMs allocated to a DNN remains static. Fig. 4(a) shows the impact on the DNN execution time when assigning different numbers of SMs. First, we created a DNN with 50 kernels, _i.e._, \(K_{max}=50\). We set the time taken for a parallel operation \(t_{p}\) to be 40 units and for serialized operations \(t_{np}\) to be 10 units. We repeat the simulation for 3 cases, varying the maximum amount of parallelization (concurrent operations at the first kernel) \(N_{1}\) as 60, 40, and 20.
For all three cases, the execution time is very high when the number of SMs is small (1 to 5 SMs), reflecting the penalty of insufficient resources for the inherent degree of parallelism while executing the DNN kernel. However, as the number of SMs increases, the execution latency decreases. Interestingly (see the zoomed part of Fig. 4(a)), there occurs a point in each of the scenarios beyond which giving more SMs does not improve latency further. When the number of SMs provisioned exceeds the amount of parallelism inherent in the DNN kernel, there is no further reduction in the latency. Even before reaching this point, the latency improvement from having an increased number of SMs reaches a point of diminishing returns\({}^{1}\). We seek to find the most efficient number of SMs (\(S\)) needed for executing a given DNN, so that the utilization of the allocated SMs is maximized. To compute this, we have to find the maximum of \(\frac{1}{E_{t}*S}\), which represents the DNN work processed per unit time per SM. For this, we differentiate \(\frac{1}{E_{t}*S}\) with respect to the time taken to execute the DNN.
Footnote 1: _i.e._, showing marginal improvements. The DNN execution latency is impacted by both the number of parallelizable and non-parallelizable operations, and it varies inversely with the number of allocated SMs, by Amdahl's law [6]. Batching increases parallelizable work [23].
\[\frac{d}{dE_{t}}\left(\frac{1}{E_{t}*S}\right)=-\frac{1}{\left(E_{t}\right)^{ 2}*S} \tag{6}\]
Fig. 4(b) shows this first-order derivative of the inverse of latency (Eq. 6), showing that for \(N_{1}=20\), 40 and 60 it reaches a maximum at 9, 24 and 31 SMs, respectively. Hence, operating at this derived 'maximum' point for a DNN guarantees that there is a sufficient number of SMs to provide low latency while achieving the most efficient use of the SMs. Moreover, we can see from this that the 'maximum' peaks at a much lower number of SMs than the corresponding value of \(N_{1}\). This is due to the impact of performing serialized tasks adjacent to the parallelizable tasks, which results in lower (or no) utilization of many of the allocated SMs during the serialized tasks. Thus, the further reduction in latency from increasing SMs is minimal.
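The analytical model of Eqs. 1-5 is straightforward to reproduce in code. The sketch below omits the memory term (\(d_i=0\), \(R_i=1\)) and, as one simple stand-in for the maximum-efficiency point above, operationalizes the knee as the smallest SM count whose latency is within 5% of the fully provisioned latency:

```python
def dnn_exec_time(S, p, b=1, Kmax=50, t_p=40, t_np=10):
    """Total execution time E_t (Eqs. 1-5) with S SMs; memory term omitted
    (d_i = 0) and R_i = 1, matching the compute-only simulation of Fig. 4a."""
    E_t = 0.0
    N = p * b                                 # Eq. 1: N_1 = p * b
    for _ in range(Kmax):
        E_i = (N * t_p) / max(1, min(S, N))   # Eq. 2, with W_i = N_i * t_p
        E_t += E_i + t_np                     # Eqs. 4-5 with E_m = 0
        N = max(0, int(N - p * b / Kmax))     # Eq. 1: next kernel's parallelism
    return E_t

def knee(p, tol=0.05, max_sms=80):
    """Smallest SM count within `tol` of the fully provisioned latency."""
    floor_latency = dnn_exec_time(max_sms, p)
    for S in range(1, max_sms + 1):
        if dnn_exec_time(S, p) <= floor_latency * (1 + tol):
            return S

for p in (20, 40, 60):        # the three N_1 settings of Fig. 4a
    print(p, knee(p))         # the knee grows with the available parallelism
```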
### Analyzing Execution of Typical DNNs
We profiled and analyzed Mobilenet, ResNet and GNMT DNNs using the NVPROF profiler [4] to capture the GPU resource usage and the execution time of the DNN kernels.
#### 4.4.1. CNN model: Mobilenet
We profiled the inference of Mobilenet using 100% of a V100 GPU. For each kernel, we show the GPU thread count on the y-axis (in log scale) and the corresponding runtime as the area of the bubble in Fig. 5. The approximate GPU% required for all the threads to run concurrently is on the Y2-axis (log scale, on the right). We approximate this GPU% by considering that only 2048 threads can run in an SM concurrently, due to limits on the number of concurrent blocks and warps [1]. The kernel's design and thread distribution across different threadblocks can lead to a higher SM demand than absolutely required.
We plot 11 distinct kernels of a Mobilenet model (each identified by a different color in Fig. 5). These kernels are executed a total of 156 times per inference. We observe that a few of the kernels (kernels 3, 4 and 6, in particular) require more than 100% of the GPU to run: these kernels demand more threads than the GPU can run concurrently. However, these kernels run for a very short time and do not contribute significantly to the total inference latency. The kernels that contribute more to the total latency, such as kernels 10 and 7, utilize less than 10% of the GPU. This is because the DNN's inference feature matrix gets smaller, limiting the inherent parallelism. Thus, these kernels use fewer parallel GPU threads and run for a long time with low GPU% demand. They contribute to lowering the Knee GPU% of the entire DNN model. From this understanding, when the amount of parallelism of a kernel is low, increasing the number of GPU SMs will not reduce the execution time of the kernel, since the additional SMs will not be utilized.
We also analyzed the inference time of Mobilenet with different batch sizes (Fig. 4c). In all cases, for a given batch size, the latency reduces with an increase in GPU%. But, across all evaluated GPU percentages, the latency _increases_ with increasing batch size. Fig. 4d shows the first derivative of the inverse of Mobilenet's latency obtained using Eq. 6. The maximum of the derivative, _i.e._, the most efficient point for DNN operation, for batch sizes of 1, 2, 4 and 8 occurs at GPU% of \(\sim\) 10, 20, 40, and 50, respectively. This shows that with increasing batch size, _i.e._, increased parallelism, the GPU% at which the maximum utilization point occurs, based on Eq. 6, also increases. Fig. 6a shows the different maximum utilization points for the different models. Lightweight models such as Inception and ResNet-18 have a maximum at a lower GPU%, while the compute-heavy VGG-19 does not see an inflection point up to 100% GPU. These characteristics of the individual DNNs' execution strongly correlate with the theoretical DNN model we presented.
#### 4.4.2. Transformer Model BERT
We also present the evaluation of the inference latency of the transformer-based natural language processing DNN, BERT, as well as its first-order derivative, per GPU% in Fig. 6b. We evaluated sentences with 10 and 20 words. We observe that longer sentences result in higher inference latency. But again, we see that the inference latency does not improve after a point. The first-order derivative of the latency for 10- and 20-word sentences shows a peak at around 30% and 40% GPU, respectively.
Figure 4. (a), (b) Inference characteristics of analytical DNN models with varying amounts of parallelism and hardware resources. (c), (d) Demonstration of the analytical model's predictions on the real DNN Mobilenet
Figure 5. Thread count & runtime (shown as area of circle) of the 156 kernels of Mobilenet.
Thus, both our model's prediction and our evaluation of representative compute-heavy CNN and memory-bound Transformer models show that there is indeed a limit to the parallelism utilized by DNNs. This motivates our approach to further examine improving GPU utilization with spatio-temporal scheduling.
## 5. Optimal Batching for DNNs
Batching is a trade-off: it improves throughput at the cost of higher latency. Inferring a batch of requests requires more computation, thus increasing inference time. Preparing a bigger batch, _i.e.,_ receiving and transferring data from the network to the GPU, also contributes additional latency. Providing a higher GPU% for a bigger batch can mitigate the increase in inference latency. However, giving more than a certain GPU% may be wasteful. We use the metric of _Efficacy_ (\(\eta\)) of using GPU resources, defined in Eq. 7, as the basis for finding a good operating point with respect to batch size and GPU%.
\[\textit{Efficacy}\left(\eta\right)=\frac{\textit{Throughput}}{\textit{ Latency}\times\textit{GPU}\%} \tag{7}\]
Efficacy, \(\eta\), tells us how much throughput the GPU produces per unit time, per unit of GPU resource (GPU%), for a DNN at a certain batch size and GPU%.
### Optimum Batch Size for Inference
We profiled the ResNet-50 model for inference at different batch size and GPU% configurations. Fig. 7 shows that both very high and very low batch sizes lead to low Efficacy, due to high latency and reduced throughput, respectively; thus, an optimal batch size is desired. We now develop an optimization formulation that can provide us with the right batch size and GPU% for a model, given a deadline. First, we present the key notation used for the optimization in Table 5.
The batch size is a product of the average incoming request rate and the request assembly time; thus, \(b_{i}=\texttt{Request-Rate}\times C_{i}\). Throughput \(T_{i}\) is the number of images inferred per unit time (Eq. 8). Knowing the throughput (Eq. 8), we can write \(\eta\) (Eq. 7) as Eq. 9. Eq. 9 is of the same form as the first derivative of the inverse of latency, Eq. 6, §4.2.
\[T_{i}=\frac{b_{i}}{f_{L}(p_{i},b_{i})} \tag{8}\]

\[\eta=\frac{b_{i}}{\left(f_{L}(p_{i},b_{i})\right)^{2}\times\textit{GPU}\%} \tag{9}\]
We seek to maximize Efficacy (\(\eta\)) to get the best balance of parameters, subject to the constraints 10, 11, and 12. The constraints express the following requirements. **Eq. 10**: The batch size must be less than or equal to the maximum batch size a model can accept.
\[1\leq b_{i}\leq\textit{MaxBatchSize} \tag{10}\]
\[f_{L}(p_{i},b_{i})+C_{i}\leq\textit{SLO}_{i} \tag{11}\]
\[f_{L}(p_{i},b_{i})\leq\frac{\textit{SLO}_{i}}{2} \tag{12}\]
**Eq. 11**: The sum of the time taken to aggregate the batch over the network (\(C_{i}\)) and its inference execution time has to satisfy the SLO. **Eq. 12**: When working with a high request rate, we can regularly gather large batches for inference. However, a request that cannot be accommodated in the current batch due to constraint Eq. 11 has to be inferred in the next batch, and the deadline for the next batch is the deadline of the oldest pending request. Therefore, we make sure that the SLO is twice the time required to run a batch.
We computed the latency function \(f_{L}(p_{i},b_{i})\) by fitting the latency observed while inferring DNN models with batch sizes of 1, 2, 4, 8, 10, 12, 16 and GPU% from 10-100 at 10% intervals on our testbed. The optimization is solved using the non-linear programming solver 'fmincon' in MATLAB. Requests (images of resolution \(224\times 224\)) arrive over a 10 Gbps link; one image is assembled every \(\sim 481\mu\)s. We use an SLO of 50 ms, allowing for an interactive system that can be used in safety-critical environments such as autonomous driving (Rajaj et al., 2019). We present the feasibility region (where the SLO constraints are fulfilled) and the optimal point provided by the optimization formulation in Fig. 8. The infeasible area is in a lighter shade. It is particularly revealing that Mobilenet has an optimal point close to 30%.
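The same formulation can be sketched with scipy in place of fmincon. The latency surrogate `f_L` below is an illustrative placeholder; in practice it would be fitted to the profiled (GPU%, batch) grid described above:

```python
from scipy.optimize import minimize

ASSEMBLY_S = 481e-6   # seconds to assemble one image over the 10 Gbps link
SLO = 0.050           # 50 ms
MAX_BATCH = 16

def f_L(gpu_pct, batch):
    # Placeholder surrogate for the fitted latency function f_L(p_i, b_i).
    return (0.002 * batch + 0.01) * (100.0 / gpu_pct) ** 0.5

def neg_efficacy(x):
    gpu_pct, batch = x
    return -(batch / (f_L(gpu_pct, batch) ** 2 * gpu_pct))  # Eq. 9, negated

cons = [
    # Eq. 11: batch assembly time C_i = b_i * per-image time, plus inference
    {"type": "ineq", "fun": lambda x: SLO - (f_L(*x) + x[1] * ASSEMBLY_S)},
    # Eq. 12: inference itself must fit in half the SLO
    {"type": "ineq", "fun": lambda x: SLO / 2 - f_L(*x)},
]
res = minimize(neg_efficacy, x0=[50.0, 8.0],
               bounds=[(10, 100), (1, MAX_BATCH)],   # Eq. 10 and GPU% range
               constraints=cons)
gpu_opt, batch_opt = res.x[0], round(res.x[1])
```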
**Estimation of the Knee for Real Systems:** We view these optimal values in relative terms, representative of the limit to parallelism that the model exhibits, because the optimization does not necessarily factor in all the aspects that influence the execution of the model in a real system.
| Notation | Description |
| --- | --- |
| \(p_{i}\) | GPU% for Session \(i\) |
| \(b_{i}\) | Batch size for Session \(i\) |
| \(f_{L}(p_{i},b_{i})\) | Inference latency of batch \(b_{i}\) for model \(M_{i}\) at GPU% \(p_{i}\) |
| \(C_{i}\) | Request assembly time for Session \(i\) |

Table 5. Notation for Optimization Formulation
Figure 6. DNN Latency, first derivative as in Eq. 6
Figure 7. _Efficacy_ of ResNet-50
We, however, pick batch size and GPU% values from the high-efficacy region in the optimization output in Fig. 8 and over-provision the GPU% by 5-10% while deploying the model in a real system.
## 6. GPU Scheduling of DNN models
We now discuss spatio-temporal scheduling with D-STACK. We run the DNN models concurrently and meet their SLOs while keeping the GPU from over-subscription. Over-subscription occurs when the aggregate GPU% of the concurrent models exceeds 100%.
### Scheduling with varying SLO
We schedule multiple models with different SLOs (deadlines), optimal batch sizes, and GPU% with D-STACK. Our scheduler considers two primary constraints. First, each DNN model must be scheduled at least once within an interval equal to its SLO, using an optimal batch size as predicted by the model in §5. Second, the aggregate GPU demand at any point in the schedule should not exceed 100%. We choose a time period defined by the largest SLO to be a _Session_. Models with an SLO smaller than a session will run multiple times in a session; _e.g.,_ for a 100 ms session, a model with a 25 ms SLO will run at least 4 times. Our spatio-temporal scheduling also accommodates dynamic arrivals of requests by utilizing a fair, opportunistic, and dynamic scheduling module that dynamically recomputes the schedule, thus increasing the effective utilization of the GPU.
We use 8 different DNN models and present their optimal batch size, GPU%, and the latency of inference at that batch size/GPU% in Table 6. We obtain the knee GPU% and batch size from the model in §5. We chose our SLOs based on safety-critical work such as autonomous driving (Kraus et al., 2019), where it is determined that less than 130 ms of processing is required to safely stop a car running at 80 miles/hr (\(\sim\)130 kmph). We choose a much more conservative 100 ms (effectively about 50 ms, as the rest is spent preparing the batch) for the higher-accuracy models (VGG-19 and ResNeXt-50) and smaller SLOs (50 ms and 25 ms) for latency-optimized models (ResNet-50, Inception, Mobilenet, Alexnet and ResNet-18) aimed at applications such as a 30 fps video stream. Unlike (Wang et al., 2019), we realistically consider that a model's execution cannot be preempted from the GPU.
We first examine a temporal schedule with Alexnet, ResNet-50, and VGG-19. We provide time slices proportional to the models' SLOs. We utilize the adaptive batching algorithm of Clipper (Kraus et al., 2019) and Nexus (Navarro et al., 2019) to obtain the batch size for each model's time slice. Fig. 9a is a visualization of such a schedule. The SLOs are visualized as vertical dotted lines. We compute GPU utilization using the Knee% of each model as shown in Table 6. With temporal sharing, we achieve a mean GPU utilization of 44%.
#### 6.1.1. D-Stack: Spatio-Temporal Scheduling
Our D-STACK scheduler aims to fit as many models as possible (potentially all different from each other) and run them concurrently on the GPU, while meeting each model's (potentially different) SLO. We employ a simple version of the Earliest Deadline First (EDF) scheduling algorithm to schedule all the models. EDF schedules the model with the tightest deadline to run first. However, we should note that, as a model's inference is not preempted, this simple schedule cannot guarantee that the GPU will not be over-subscribed at any moment in the schedule. To aid in fitting in as many models as possible, we schedule consecutive executions of any model with the shortest SLOs to be as far apart as possible. This allows us to fit longer-running models in the GPU in the interim without oversubscribing it. We demonstrate a schedule generated by the spatio-temporal-only algorithm in Fig. 9b. We observe that the model with the smallest SLO, Alexnet (bottom), is scheduled to meet its SLO, but the time between the execution of the first instance and the second can be large because its execution time is short. This allows us to run ResNet-50 (second from the bottom) and VGG-19 (third) in between consecutive executions of Alexnet. Note that D-STACK's scheduler can also schedule a model with a GPU% lower than its Knee when necessary, albeit with higher inference latency. D-STACK also factors the additional latency of launching a new DNN model at a lower GPU% into the schedule. This latency-GPU% trade-off has to be considered carefully before starting inference. Once a DNN process starts with its allocated GPU%, it cannot be changed for that instance's execution lifetime.
| Model | Knee% | SLO (ms) | Batch (\(B_{i}\)) / Sentence len. | Runtime (\(L_{i}\)) (ms) |
| --- | --- | --- | --- | --- |
| Mobilenet | 20 | 25 | 16 | 10 |
| Alexnet | 30 | 25 | 16 | 8 |
| BERT | 30 | 25 | 16 (10-words) | 9 |
| ResNet-50 | 40 | 50 | 16 | 28 |
| VGG-19 | 50 | 100 | 16 | 55 |
| ResNet-18 | 30 | 25 | 16 | 12 |
| Inception | 40 | 50 | 16 | 25 |
| ResNeXt-50 | 50 | 100 | 16 | 40 |

Table 6. Characteristics of different DNN models
Figure 8. Mobilenet feasibility region (darker shade)
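The core EDF placement step of this scheduler can be sketched as follows, using the Knee%, SLO, and runtime values of Table 6. The sketch places one instance per model within its first SLO window; the full scheduler additionally repeats short-SLO models across the session and spreads their consecutive runs apart:

```python
def build_schedule(models, session_ms):
    """Greedy EDF placement: models sorted by SLO (earliest deadline first);
    an instance is placed at the earliest start time where the aggregate
    GPU% stays <= 100 for its whole, non-preemptible runtime."""
    usage = [0] * session_ms                       # GPU% in use per 1-ms slot
    schedule = []
    for m in sorted(models, key=lambda m: m["slo_ms"]):
        for t in range(m["slo_ms"] - m["runtime_ms"] + 1):
            window = range(t, t + m["runtime_ms"])
            if all(usage[s] + m["knee"] <= 100 for s in window):
                for s in window:
                    usage[s] += m["knee"]
                schedule.append((m["name"], t))    # start time in ms
                break
    return schedule

models = [  # Knee%, SLO and runtime from Table 6
    {"name": "Alexnet",   "knee": 30, "slo_ms": 25,  "runtime_ms": 8},
    {"name": "ResNet-50", "knee": 40, "slo_ms": 50,  "runtime_ms": 28},
    {"name": "VGG-19",    "knee": 50, "slo_ms": 100, "runtime_ms": 55},
]
print(build_schedule(models, session_ms=100))
```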
#### 6.1.2. Fair, Opportunistic, Dynamic Scheduling
To efficiently utilize the GPU resources while ensuring that the system meets its SLO guarantees, we further propose an opportunistic, dynamic scheduling enhancement. The dynamic scheduling is triggered when a new request arrives for a model and when a model ends inference. The dynamic scheduler picks a model that is not active. This opportunistic addition is allowed as long as the GPU is not oversubscribed (so as not to interfere with the already scheduled models). To ensure fairness among the available models, we use a scoreboard that tracks how many times each model has run in the last few (e.g., ten) sessions and prioritizes the models that have run the fewest times. The algorithm then finds a time slice in which the model can finish inferring and also determines a batch size that can complete within the time slice. If the highest-priority model cannot be run, the algorithm picks the model with the next-highest priority. We show the output of the D-STACK scheduling in Fig. 9c. With this dynamic scheduling opportunistically packing in more models, the average GPU utilization increases from 60% in the plain spatio-temporal schedule (Fig. 9b) to 74% with the D-STACK schedule (Fig. 9c).
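A minimal sketch of the fairness scoreboard, with the ten-session window from the text:

```python
from collections import deque

class FairnessScoreboard:
    """Tracks how many times each model ran over the last `window` sessions
    and prioritizes the models that have run the fewest times."""
    def __init__(self, model_names, window=10):
        self.history = {name: deque(maxlen=window) for name in model_names}

    def record_session(self, name, runs):
        self.history[name].append(runs)

    def priority_order(self):
        # Fewest recent runs first; the dynamic scheduler tries these
        # models first when spare GPU% opens up.
        return sorted(self.history, key=lambda n: sum(self.history[n]))
```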
### An Ideal Spatio-Temporal Schedule vs D-STACK
We compare D-STACK against an ideal scheduler, which is a theoretical spatial and temporal schedule at the granularity of individual DNN kernels. For the ideal case, we assume GPU kernel preemption is allowed, a DNN's instantaneous GPU demand is known, and the GPU's allocated resources are adjusted instantaneously. Any realistic system that does not preempt a currently running DNN model until its inference is completed, together with the scheduling overheads of switching from one model to another, inevitably under-utilizes the GPU. Thus, the ideal scheduler provides a theoretical 'optimal' performance achievable by D-STACK or other schedulers.
We consider a time-slotted system (_e.g._, 100\(\mu\)s slots for experiments with a small-scale DNN), where \(S_{i}\) represents the \(i^{th}\) time slot in the schedule. We schedule the kernels \(k_{m}\) from each DNN model \(m\). We include as many models' kernels as will fit in the GPU at their Knee%, ordered by their earliest deadline. We compute the aggregate GPU% as \(G_{ui}=\sum_{k\in S_{i}}GPU\%_{k}\) for each time slot \(S_{i}\). We use an exhaustive search-based schedule to maximize the GPU utilization for every time slot (Eq. 13). The overall GPU utilization \(G_{u}\) is maximized as:
\[\max G_{u},\quad\text{where}\;\;G_{u}=\sum_{i}G_{ui}=\sum_{i}\sum_{k\in S_{i}}GPU\%_{k} \tag{13}\]
\[\text{such that}\;\;G_{ui}\leq 100\%\text{ and }\;k_{i}\in E\Longrightarrow k_{i-1}\in E \tag{14}\]
The first constraint for scheduling kernels of different models (Eq. 14) is that the sum of the GPU% of all concurrent kernels in a time slot should not exceed 100%. Second, only eligible kernels (the set \(E\)) can run concurrently in the time slot \(S_{i}\) being scheduled: the kernels of a DNN are executed sequentially.
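A greedy, slot-by-slot approximation of this search can be sketched as follows (EDF ordering stands in for the exhaustive per-slot search, and deadlines are used only for ordering in this simplified version):

```python
def ideal_utilization(models, n_slots):
    """Kernel-granular, preemptive slot packing (idealized scheduler).

    Each model: {"name": str, "deadline": int,
                 "kernels": [(knee_gpu_pct, slots_needed), ...]}.
    Kernels of a model run strictly in order (Eq. 14); per slot, the
    aggregate GPU% of the packed kernels stays <= 100 (Eq. 13).
    Returns the mean GPU utilization over the schedule."""
    progress = {m["name"]: [0, 0] for m in models}  # [kernel idx, slots done]
    used_total = 0
    for _ in range(n_slots):
        used = 0
        for m in sorted(models, key=lambda x: x["deadline"]):  # EDF order
            k_idx, done = progress[m["name"]]
            if k_idx == len(m["kernels"]):
                continue                       # this model has finished
            gpu, need = m["kernels"][k_idx]
            if used + gpu <= 100:              # Eq. 13 capacity check
                used += gpu
                done += 1
                if done == need:               # kernel complete -> next one
                    k_idx, done = k_idx + 1, 0
                progress[m["name"]] = [k_idx, done]
        used_total += used
    return used_total / n_slots
```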
We experimented by scheduling 3 convolutional neural networks (ConvNets) based on LeNet (Wang et al., 2017). Each ConvNet has 3 convolution, 2 average-pool and 2 linear kernels. The dimensions of the filters of the convolution layers are varied, varying the compute requirement of each ConvNet model. The inference image has a resolution of 224\(\times\)224. The knee-runtime combinations for ConvNet-1, ConvNet-2 and ConvNet-3 are 30%-10.3 ms, 40%-14.6 ms, and 60%-15.4 ms, respectively. We computed the knee of each kernel of each model, for use by the ideal scheduling during inference. We present the GPU utilization and throughput in Fig. 9d. Temporal scheduling has a much lower GPU utilization, as it runs a single kernel on the GPU at a time. GSLICE improves the GPU utilization, but its static schedule leads to lower utilization when not enough models are running on the GPU. Ideal scheduling attains almost 95% GPU utilization, because it schedules kernels leveraging preemption. D-STACK schedules without preemption of a kernel: it runs a DNN kernel to completion even if a kernel that could utilize the GPU better is waiting. Nonetheless, D-STACK still achieves \(\sim\)86% GPU utilization. The throughput attained by the three CNN models follows the same trend. D-STACK's overall throughput is slightly higher than 90% of the throughput of ideal scheduling - a measure of how close it comes to the ideal scheduler.
### Evaluation of D-STACK Scheduler
We evaluate D-STACK using four popular DNN models (Alexnet, Mobilenet, ResNet-50, and VGG-19) run with the fixed SLOs, GPU%, and runtimes presented in Table 6. We ran the models concurrently for 10 seconds. We took the workload mix from Imagenet (Mobilenet, 2017) (vision DNNs) and the IMDB dataset (Zheng et al., 2018) (sentence classification with BERT). We introduce a random, uniformly distributed inter-arrival delay between requests destined for the same DNN model.
We compare the throughput and GPU runtime of D-STACK with the baseline temporal sharing and with a schedule that maximizes the sum of the throughput across all the models (_max-throughput_). We also evaluate the fairness of the schedulers, measured by the GPU runtime each model gets. For this, we compare D-STACK against a Max-Min fair scheduler (Becker et al., 2017), which maximizes the placement of the minimum (smallest) demand (GPU%). The throughput results are shown in Fig. 10a, and the GPU runtime each model gets is in Fig. 10b.
D-STACK gets 2\(\times\) the throughput of temporal sharing for the two compute-heavy models, ResNet-50 and VGG-19 (Fig. 10a). At the same time, the lighter-weight Alexnet and Mobilenet get 4\(\times\) higher throughput. In temporal scheduling, running compute-heavy DNNs with longer runtimes results in fewer opportunities for the other models, as there is no spatial sharing. Temporal scheduling runs the models for only 1.6 sec. out of the 10 sec. duration, negatively impacting their throughput. Fig. 10b shows that D-STACK runs all the models
longer than temporal sharing. This is because D-STACK can run multiple DNNs concurrently, providing higher throughput compared to temporal sharing (Fig. 10a). We compare D-STACK's throughput with the 'max-throughput' schedule. D-STACK gets more than 80% of the throughput of max-throughput for the model with the lowest runtime (Alexnet) while providing better fairness, as we see next.
The Max-Min fair schedule provides a higher runtime for Mobilenet (Fig. 10b) than D-STACK, since Mobilenet has the minimum demand (its Knee%). However, D-STACK achieves higher throughput than Max-Min for the medium-runtime ResNet-50 (Fig. 10a). D-STACK's fairness measure picks the model that has run for the least time on the GPU over the past sessions. Thus, D-STACK seeks to act like a proportional fair scheduler, as with the Completely Fair Scheduler (CFS) in Linux (Shi et al., 2018). The fairness of D-STACK is shown in Fig. 10b. Max-Min gives more time to a low-demand model like Mobilenet. With D-STACK, all the models get similar GPU time, thus boosting the total throughput of higher-demand models like ResNet-50. Overall, D-STACK's scheduling beats temporal sharing's throughput (by up to 4\(\times\)), gets more than 80% of the max-throughput scheduler's throughput, and fairly shares GPU execution time while meeting SLOs.
## 7. Validating Our Overall Approach
We compare D-STACK with other multiplexing methods.
**Multiplexing DNN models on the GPU**: We evaluate four different cases of multiplexing, running 2, 3, 4 and 7 DNNs, respectively. By multiplexing 7 different DNNs, we demonstrate how D-STACK is still successful in scheduling a number of models with tight latency constraints, even if the sum total of their demand (_i.e.,_ knee capacity) is substantially higher than 100% of the GPU. We show that D-STACK can improve throughput and utilize the GPU better while reducing SLO violations compared to the other approaches, with all approaches, including D-STACK, having to compromise by missing the deadline on some inference requests. We compare our approach, D-STACK, with four other methods of GPU multiplexing, namely, fixed batching with default CUDA MPS (FB), temporal sharing (T), the Triton Inference Server (Tri), and GSLICE (G). In fixed batching with CUDA MPS (FB), the largest batch size of 16 is picked for inference every time and the multiplexed models share the GPU with MPS without an explicit GPU%. In temporal sharing (T), time slices are set in proportion to the models' SLO lengths. With the Triton server (Tri), we request inference with multiple clients concurrently, allowing the Triton server to dynamically batch and infer our requests. With GSLICE (G), we use all of GSLICE's features, including adaptive batching and spatial sharing of the GPU at each DNN's knee. Finally, in D-STACK, we use the batch size and GPU% from our optimization formulation and utilize D-STACK scheduling to schedule the models.
We evaluate the throughput and the SLO violations per second for each model in Fig. 11a. We measure SLO violations per second as the sum of all the inference requests that violate the SLO and all the unserved requests. Inference requests are generated at a rate of \(\sim\)1920 images/sec (the maximum request rate is limited by the 10 Gbps link in the testbed). Requests are divided among the multiplexed models in proportion to their SLOs. Thus, for the experiments C-2, C-3 and C-4, Alexnet and Mobilenet get 700 inference requests/sec, ResNet-50 gets 320 requests/sec and VGG-19 gets 160 requests/sec. For the experiment with 7 DNN models running concurrently (_i.e.,_ C-7), Alexnet, Mobilenet and ResNet-18 receive 440 inference requests/sec, ResNet-50 and Inception receive 220 requests/sec, while ResNeXt-50 and VGG-19 get 80 requests/sec. We observe from Fig. 11a that our framework provides more than a 3\(\times\) increase in aggregate throughput when multiplexing 7 different models. D-STACK achieves the highest throughput even when fewer models are running concurrently. For MPS, the lack of batching causes it to miss most of the SLOs for requests. Fixed batch, temporal sharing, GSLICE and the Triton server provide good throughput while running just 2 models.
Figure 10. (a) Throughput of models running with different scheduling algorithms and (b) Total runtime (s) per model
Figure 9. (a, b, c) Scheduling Algorithms; (A-N=Alexnet, R-50=ResNet-50, V-19=VGG-19) (d) Comparison with ideal scheduler
However, as the number of multiplexed models increases, each newly added model contends for GPU resources in Fixed Batch, decreasing the throughput. Meanwhile, in temporal sharing, each model gets less and less GPU time, impacting throughput.
Models hosted in the Triton server also have to multiplex the GPU temporally and thus get lower throughput as more models are added. With GSLICE, multiplexing more models means some models get resources below their knee GPU%, exponentially increasing the inference latency. D-STACK provides both the right amount of GPU resources and the appropriate batch size. Furthermore, there are no SLO violations in D-STACK when multiplexing 2-4 models. However, when overloading the GPU by multiplexing 7 DNNs, we see a few SLO violations for the models with longer runtimes (Inception, ResNet-50, ResNeXt-50 and VGG-19). D-STACK misses SLOs for 10% of all requests, compared to more than 68% for the alternatives. SLO misses for D-STACK come from the smaller fraction of requests sent to compute-heavy models such as ResNet-50, ResNeXt-50 and VGG-19. Even with some of the medium-to-large models with longer runtimes, such as ResNet-50 and Inception, only 13% of requests see an SLO violation. This is because running 7 models concurrently exceeds the capacity of the GPU even with D-STACK. With D-STACK, the average GPU utilization is 92% while multiplexing 7 models. With all the models having a knee greater than 10%, this is close to fully utilizing the GPU.
**Benefit of D-STACK Scheduler**: Wherever possible, D-STACK tries to opportunistically schedule additional model instances during the session, possibly with a smaller batch size, to utilize the available GPU. To show the effectiveness of D-STACK, we present a scenario where the request rates of the multiplexed DNN models vary dynamically. To start with, in session \(T_{0}\), we have 4 models, Alexnet, Mobilenet, ResNet-50 and VGG-19, same as in 'C-4' in Fig. 11a, running with request rates high enough to support the optimal batch size, as determined in Table 6. The GPU utilization we achieve is \(\sim 85\%\). We then change the request rate of one model (Alexnet, in session \(T_{1}\)) by a random amount. We still allow the optimal batch to form for each model. The throughputs of the models dynamically adjust, with the throughput of the other models increasing due to use of the un-utilized resources left by Alexnet (see \(T_{1}\)). Since these three models have a high GPU% requirement, there is not enough GPU to accommodate an instance of another model. Thus, the GPU utilization drops very slightly. At \(T_{2}\), Alexnet's request rate goes back up, while Mobilenet's request rate lowers, once again by a random amount. Alexnet opportunistically uses the GPU to achieve a throughput higher than what it achieved in the baseline session \(T_{0}\). Similarly, when ResNet-50's and VGG-19's arrival rates drop at \(T_{3}\) and \(T_{4}\), respectively, the other models increase their throughput. We also see that across these sessions the GPU utilization remains nearly unchanged and high, indicating that D-STACK effectively uses the GPU.
### D-STACK in Multi-GPU Clusters
We evaluated D-STACK in a multi-GPU cluster of 4 NVIDIA T4 GPUs, each having 40 SMs (fewer than a V100) and 16 GB of memory. We utilized 4 different vision models: Mobilenet, Alexnet, ResNet-50 and VGG-19 (the knee GPU% differs between the T4 and the V100). We compare the throughput of 3 different multiplexing and scheduling scenarios. First, we provide one T4 GPU to each DNN model exclusively. In the second scenario, we place all 4 models on each GPU, temporally sharing it. Finally, we evaluate D-STACK with the 4 DNN models.
Fig. 12 shows that temporal scheduling has almost the same throughput as giving each model an exclusive GPU. This is because of the under-utilization of the GPU by the DNN models. D-STACK has much higher throughput for every model, with 160% higher overall throughput than temporal sharing.
Figure 11. (a) C-2 = ResNet-50 + VGG-19, C-3 = C-2 + BERT, C-4 = C-3 + Mobilenet, C-7 = C-4 + ResNet-18 + Inception + ResNeXt-50. (b) Throughput adjustment in D-STACK with varying request rate
The overall inference throughput increases substantially as the multi-GPU cluster is better utilized by D-STACK.
## 8. Conclusions
DNNs critically depend on GPUs and other accelerators, but often under-utilize the parallel computing capability of current high-performance accelerators. Due to the uneven workloads of different DNN kernels, a DNN as a whole is unable to fully utilize all the parallelism of the GPU (i.e., all SMs). Furthermore, there are non-parallelizable tasks while executing a DNN on a GPU-based system, limiting the effective use of a GPU's parallelism. We validated these conclusions from our model of a DNN through measurements of different types of DNNs (CNNs and Transformers) on a V100 GPU. Since batching DNN requests improves inference throughput and GPU utilization, we develop an optimization framework to establish an optimal operating point (GPU%, batch size) for a DNN, utilizing the GPU at the highest efficacy. We bring the optimal batch size and GPU% together in D-STACK to develop a spatio-temporal, fair, opportunistic, and dynamic scheduler, creating an inference framework that effectively virtualizes the GPU. D-STACK accounts for a DNN model's SLO, GPU resource allocation, and batch size to provide a schedule that maximizes meeting SLOs across multiple DNN models while seeking to utilize the GPU fully. D-STACK benefits both single GPUs and multi-GPU clusters. Our enhancements in D-STACK do not require modifications to the GPU architecture, the runtime, or the DNN models themselves. D-STACK's features can easily help improve existing DNN inference platforms (e.g., the Triton server) as well. We show that D-STACK can attain more than 90% of the throughput of an ideal scheduler, which can switch tasks instantaneously at a very fine time granularity, ignoring practical limitations. Our controlled testbed experiments with a cluster of 4 T4 GPUs show a throughput improvement of 160%-180% with D-STACK compared to providing an entire GPU to each individual DNN model. With an NVIDIA V100 GPU, D-STACK shows benefits in the range of \(\sim\)1.6\(\times\) improvement in GPU utilization and 3\(\times\) to 4\(\times\) increase in throughput with no impact on latency compared to the baseline temporal sharing.
|
2309.05452 | Evaluating the Deductive Competence of Large Language Models | The development of highly fluent large language models (LLMs) has prompted
increased interest in assessing their reasoning and problem-solving
capabilities. We investigate whether several LLMs can solve a classic type of
deductive reasoning problem from the cognitive science literature. The tested
LLMs have limited abilities to solve these problems in their conventional form.
We performed follow up experiments to investigate if changes to the
presentation format and content improve model performance. We do find
performance differences between conditions; however, they do not improve
overall performance. Moreover, we find that performance interacts with
presentation format and content in unexpected ways that differ from human
performance. Overall, our results suggest that LLMs have unique reasoning
biases that are only partially predicted from human reasoning performance and
the human-generated language corpora that informs them. | Spencer M. Seals, Valerie L. Shalin | 2023-09-11T13:47:07Z | http://arxiv.org/abs/2309.05452v2 | # Evaluating the Deductive Competence of Large Language Models
###### Abstract
The development of highly fluent large language models (LLMs) has prompted increased interest in assessing their reasoning and problem-solving capabilities. We investigate whether several LLMs can solve a classic type of deductive reasoning problem from the cognitive science literature. The tested LLMs have limited abilities to solve these problems in their conventional form. We performed follow up experiments to investigate if changes to the presentation format and content improve model performance. We do find performance differences between conditions; however, they do not improve overall performance. Moreover, we find that performance interacts with presentation format and content in unexpected ways that differ from human performance. Overall, our results suggest that LLMs have unique reasoning biases that are only partially predicted from human reasoning performance.
## Introduction and Related Work
Footnote 1: The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government. Approved for public release, case number: AFRL-2023-4238
The development and availability of highly fluent large language models (LLMs) (i.e., [1, 19, 20]) has increased interest in assessing their reasoning and problem-solving abilities [1, 18, 2, 23, 24]. Despite considerable performance improvements on benchmark tasks, LLMs exhibit mixed results on reasoning tasks. Some research has suggested that LLMs may have emergent reasoning abilities that enable better performance levels than those of human subjects [16]. Other research has suggested that LLM reasoning performance is more mixed and task dependent. Such research has suggested that some tasks, such as four-term analogy problems [12] and different natural language inference tasks [1, 19], are more straightforward to solve. Other types of reasoning tasks, such as analogy generation [10] and deductive competence [10], are more challenging.
[10] has investigated deductive competence in LLMs with characteristically mixed results. They demonstrated that one LLM, [12], showed content effects on reasoning similar to human behavior documented in the cognitive science literature. For zero-shot performance, they found 50% accuracy for what they call realistic problems but chance accuracy for unrealistic problems. A 5-shot presentation resulted in some performance improvement for realistic problems, but performance on unrealistic problems remained low.
In this paper, we extend this previous research in several ways. First, we investigate the extent to which limited performance may be due to how the task was formatted. Prior research has demonstrated that overall performance can vary according to how a particular task is formatted [1, 10, 11, 12]. Research on analogy generation [10] demonstrates that performance depends on the specific prompt given to the models. Thus the inability of a model to solve one particular format of a task only provides a lower limit for assessing whether a model can successfully solve that task [11].
Second, limited performance on this task may be due to how the researchers employed content familiarity, which bears directly on the distribution of content in the training corpus. For familiar content, they cover several different types of problems including social rules (e.g., _If people are driving, they must have a license_) and other relationships (e.g., _If the plants are flowers, they must be fertilized_).
However, content familiarity does not fully capture the content benefit seen in human subjects [11, 12]. Instead, previous research indicates that people perform substantially better on problems that involve social rules than those that do not [10], even when the problems contain other types of familiar content [11].
We expand the research base on the reasoning capabilities of LLMs by: 1) examining the specific role of social rules in reasoning about realistic content, 2) investigating the role of alternative presentation formats in deductive reasoning performance, and 3) expanding the set of candidate
LLMs to evaluate potential algorithmic effects. Our results show that social content does benefit LLM performance, but not to the extent that might be expected based on training corpus content. While presentation formats do influence performance, they interact with content in a surprising (non-human) fashion. These findings are independent of the LLM examined.
## Evaluating Deductive Competence
The Wason selection task is a reasoning task from the cognitive science literature that evaluates deductive competence [22]. Participants are presented with a rule of the form _If \(p\), then \(q\)_ and four cards, each with \(p\) status on one side and \(q\) status on the other, whose visible faces correspond to the options \(P,\neg P,Q,\) and \(\neg Q\). Participants are asked to determine which card or cards must be flipped over to validate whether the rule holds for this set of cards.
In the traditional _abstract_ version of the task, participants are given rules about letters and numbers (e.g., _if there is an odd number on one side of the card, there is a vowel on the other side of the card_). The correct response requires the identification of two cards. Typical human accuracy for these problems is 10-20%, with common errors consistent with confirmation bias. In contrast, problems that deal with a social rule (e.g., _if a person is drinking beer, they must be at least 21 years old_) are easy to solve: most participants (70%+) select the correct cards [14].
There are four potential conditional inferences in the task: _modus ponens_, denial of the antecedent, affirmation of the consequent, and _modus tollens_ [15]. Of these four inferences, only _modus ponens_ and _modus tollens_ are logically valid. These inferences and their logical forms are illustrated in Table 1.
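To make the normative answer concrete, the following minimal Python sketch (our illustration, not from the paper) encodes the card-selection logic: a card must be flipped iff some hidden value would make the visible/hidden pair falsify _if \(p\) then \(q\)_, which singles out exactly the _modus ponens_ and _modus tollens_ cards.

```python
# A Wason problem: rule "if P then Q" plus four cards whose visible
# faces correspond to the options P, not-P, Q, and not-Q.
CARDS = ["P", "not-P", "Q", "not-Q"]

def must_flip(card: str) -> bool:
    """A card must be flipped iff a hidden side could make the pair
    (p=True, q=False), i.e., a counterexample to 'if P then Q'."""
    for hidden in (True, False):
        if card == "P":        # p is visible and True, q is hidden
            p, q = True, hidden
        elif card == "not-P":  # p is visible and False
            p, q = False, hidden
        elif card == "Q":      # q is visible and True, p is hidden
            p, q = hidden, True
        else:                  # "not-Q": q is visible and False
            p, q = hidden, False
        if p and not q:        # a counterexample is possible
            return True
    return False

print([c for c in CARDS if must_flip(c)])  # ['P', 'not-Q']
```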
The Wason task makes a good candidate task for evaluating the reasoning performance of LLMs for several reasons.
First, the task is relatively close to certain language modeling objectives. While the task involves a reasoning component, it can be formatted as a completion task, where the objective is to predict the answer given the problem text. This suggests that prior training, particularly for LLMs with high numbers of parameters that have been trained on large text corpora, should provide sufficient information for performing the Wason task. Moreover, the construction of the task minimizes the potential for confounds that may artificially inflate performance [16, 17, 18]. Previous work has demonstrated that high performance on some natural language inference tasks [1, 19] can be due to exploitable properties of the training data [10]. The standardized format of the Wason task allows for the creation of a large number of carefully constructed examples without the risk that some answers may be easily determined from the original prompt alone.
Second, the problem examines both straightforward and challenging aspects of deductive competence. As noted above, a correct answer to a Wason problem involves two logical processes: _modus ponens_ and _modus tollens_. Results from the cognitive science literature indicate that these rules are not equally difficult: applying _modus ponens_ is considerably easier than applying _modus tollens_. The vast majority of people correctly apply _modus ponens_, even for difficult problems (e.g., [22, 14]). In contrast, people fail to apply _modus tollens_ unless the problem has a particular form of semantic content associated with it. Similarly, we might expect presentation format to assist LLMs.
Third, the format of the task allows for careful examination of how problem content influences reasoning performance. Because LLMs are built on co-occurring content, they should benefit from problems that contain familiar relationships. This should be especially true for problems where the relationship between the antecedent and the consequent is highly familiar. For example, in the rule _If a person is driving, they must have a driver's license_, the antecedent _driving a car_ and the consequent _driver's license_ have a familiar (and commonly occurring) relationship. In comparison, the antecedent and the consequent in a rule such as _If a person is driving, they must have a book bag_ do not have the same familiar relationship. While it is likely that some problems may be more difficult than others (i.e., because some completions are more probable), we control for this experimentally. We create sets of problems where both arguments contain realistic content, but the relationship between them is unfamiliar (see Table 2).
Lastly, there is a large body of previous research in cognitive science on this task. This literature provides a comparison point for evaluating the performance of LLMs.
## Experiments
In this section, we discuss the task formatting, models, implementation details, and evaluation metrics associated with our experiments.
### Task Conditions
We evaluate a total of 350 problems, 325 of which we created for this project. The remaining 25 were drawn from recent work [13] and sorted into our problem categories. To facilitate comparison between content conditions, our problems are created as _matched sets_. For each condition (except the arbitrary condition), we create problems that are nearly identical in content except for the feature at issue and minor grammatical corrections. We evaluate three different types of problem content: **realistic, shuffled**, and **arbitrary**. A diagram of the different problems we evaluate is in Figure 1. Example problems for each condition are in Table 2.
For the **realistic** condition, we evaluate a total 140 problems. Of these 140, 70 take the form of **social rules** and 70
\begin{table}
\begin{tabular}{|l|l|} \hline Inference & Definition \\ \hline _Modus ponens_ & _If \(p\) then \(q\); \(p\); therefore \(q\)_ \\ Deny the antecedent & _If \(p\) then \(q\); \(\neg p\); therefore \(\neg q\)_ \\ Affirm the consequent & _If \(p\) then \(q\); \(q\); therefore \(p\)_ \\ _Modus tollens_ & _If \(p\) then \(q\); \(\neg q\); therefore \(\neg p\)_ \\ \hline \end{tabular}
\end{table}
Table 1: Four conditional inferences in the Wason Task
take the form of **non-social rules**. Of the social rule problems, 35 problems take the form of **familiar social rules**. These problems take the form of social rules governing human behavior and are designed to be _familiar_ such that they reflect social rules consistent with the real world. The other 35 social rule problems take the form of **unfamiliar social rules**. These problems have the form of social rules but do not contain familiar relationships.
Of the 70 non-social rule problems, 35 are designed to be **familiar non-social rules** and 35 are designed to be **unfamiliar non-social rules**. For the **familiar non-social rule** condition, we evaluate 35 problems that are not social rules and are familiar such that the antecedent and the consequent have a relationship that is consistent with real-world expectations. For the **unfamiliar non-social rule** condition, we evaluate 35 problems that do not take the form of social rules and are designed such that the antecedent and the consequent do not have a familiar real-world relationship.
The **realistic** grouping is intended to capture the same types of realistic problems that have been used in previous work [10]. For some of our analyses, we compare these problems as a group.
For the **shuffled** condition, we evaluate problems where the antecedent and the consequent are switched. We create shuffled versions of each of the different categories of realistic problems. The purpose of the shuffled condition is twofold. First, the creation of shuffled non-social rules allows us to stress the semantics of plausibility beyond mere co-occurrence. Second, the creation of shuffled social rules allows evaluating sensitivity to the cost-benefit structure of social rules. Standard social rules typically have an if-then format such that if a person receives a benefit, then they must pay the cost for that benefit, per [12]. In comparison, switched social rules occur in past tense: if a person has paid the cost, then they may receive the benefit. We create shuffled prompts for both the familiar non-social rule and the familiar social rule conditions. We make syntactic corrections to make problems grammatically correct.
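As a toy illustration of this matched-set construction, the helper below (hypothetical; the actual stimuli are hand-edited) derives a shuffled counterpart of a realistic rule by swapping the antecedent and consequent clauses; the authors additionally adjust tense and grammar by hand, which a mechanical swap cannot reproduce.

```python
def shuffled_variant(antecedent: str, consequent: str) -> str:
    """Build the shuffled counterpart of 'If A, then B' by swapping
    the two clauses (grammar is then corrected manually)."""
    return f"If {consequent}, then {antecedent}."

# Hypothetical matched pair for a familiar social rule.
antecedent = "a person is drinking beer"
consequent = "they must be at least 21 years old"
realistic = f"If {antecedent}, then {consequent}."
shuffled = shuffled_variant(antecedent, consequent)
print(realistic)
print(shuffled)  # the raw swap reads awkwardly, hence the manual fixes
```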
For the **arbitrary** condition, we evaluate 70 problems where there is no particular relationship between the antecedent and the consequent. For example, in the problem _The rule is that if the cards have a type of food then they must have an outdoor activity_, there is no particular pre-supposed relationship between types of food and outdoor activities.
### Task Format
A complete prompt for each problem consists of the instruction sentence, a context sentence, the rule, and a list of cards formatted as a multiple choice question. The instruction sentence was the same for all questions. The context sentences were consistent with the content type. For the arbitrary problems, a neutral context sentence was used to prevent a potential length confound. The instruction sentence and a sample context sentence are in Table 2.
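The sketch below assembles a complete prompt from the four components just described. All strings are illustrative placeholders, not the paper's actual wording; the six answer options are likewise hypothetical (the paper reports chance performance of 1/6, implying six choices, but their exact phrasing is not given here).

```python
LABELS = "ABCDEF"

def build_prompt(instruction: str, context: str, rule: str,
                 options: list[str]) -> str:
    """Instruction sentence + context sentence + rule + card options,
    formatted as a multiple-choice question."""
    choices = "\n".join(f"{LABELS[i]}. {opt}" for i, opt in enumerate(options))
    return f"{instruction} {context} {rule}\n{choices}\nAnswer:"

prompt = build_prompt(
    "Decide which card or cards must be turned over to test the rule.",
    "Four cards lie on a table; each has information on both sides.",
    "The rule is: if a person is drinking beer, they must be at least "
    "21 years old.",
    ["drinking beer", "drinking soda", "25 years old", "16 years old",
     "drinking beer and 16 years old", "none of the cards"],
)
print(prompt)
```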
### Models
We test four recently released large language models with approximately 7 billion parameters: Guanaco, MPT, BLOOM, and Falcon. Guanaco is a family of LLMs fine-tuned with QLoRA, a fine-tuning approach designed to reduce memory demands while preserving model performance [13]. We use the 16-bit version. MPT is an open-source family of LLMs released by Mosaic that are designed to support fast inference [14, 15]. BLOOM was trained on a large multilingual corpus [11] and has a decoder-only transformer architecture [15]. Falcon is an open-source LLM designed to support fast inference [12, 13, 14].
### Implementation Details
We run our experiments on an A100 GPU with 12 vCPUs and 85GB of RAM, running Debian 10. Experiments were conducted in Python 3.10. A complete list of libraries is available in the supplementary materials. Total run time for all experiments was approximately 3 hours.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Realistic (n = 140)} \\ \hline \multicolumn{2}{|c|}{Non-Social Rule (n = 70)} & \multicolumn{2}{c|}{Social Rule (n = 70)} \\ \hline Unfamiliar (n = 35) & Familiar (n = 35) & Unfamiliar (n = 35) & Familiar (n = 35) \\ \hline \hline \multicolumn{4}{|c|}{Shuffled (n = 140)} \\ \hline \multicolumn{2}{|c|}{Non-Social Rule (n = 70)} & \multicolumn{2}{c|}{Social Rule (n = 70)} \\ \hline Unfamiliar (n = 35) & Familiar (n = 35) & Unfamiliar (n = 35) & Familiar (n = 35) \\ \hline \hline \multicolumn{4}{|c|}{Arbitrary (n = 70)} \\ \hline \end{tabular}
\end{table}
Table 2: Example Problems
Figure 1: Breakdown of the different types of problems we examine.
### Evaluation Metrics
Previous work has proposed various methods to correct for interactions between the specific form of a prompt and the answer generated by a language model Brown et al. (2020); Holtzman et al. (2021); Zhou et al. (2019). We use one of these metrics, domain conditional PMI, as our scoring metric Holtzman et al. (2021). Domain conditional PMI measures how much information a particular instruction domain provides about a particular answer. Formally, a correct answer is equivalent to
\[\operatorname*{arg\,max}_{y_{i}}\;\frac{P(y_{i}\mid x)}{P(y_{i}\mid\text{baseline})},\]
where \(y_{i}\) is the \(i\)th answer choice, \(x\) is the input prompt, and \(baseline\) is the probability associated with a task-specific premise. Candidate answers are evaluated independently. Chance performance is \(1/6\).
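A minimal sketch of domain conditional PMI scoring with the Hugging Face transformers library is shown below; the model id and prompts are placeholders, and we assume the prefix tokenization is a prefix of the full tokenization (true for typical whitespace-separated continuations). Some models may additionally require `trust_remote_code=True`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "tiiuae/falcon-7b"  # placeholder; any causal LM id works
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto").eval()

@torch.no_grad()
def answer_logprob(prefix: str, answer: str) -> float:
    """Sum of token log-probabilities of `answer` conditioned on `prefix`."""
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + answer, return_tensors="pt").input_ids.to(lm.device)
    logprobs = lm(full_ids).logits.float()[0, :-1].log_softmax(-1)
    targets = full_ids[0, 1:]
    span = slice(prefix_len - 1, full_ids.shape[1] - 1)  # answer tokens only
    return logprobs[span].gather(1, targets[span, None]).sum().item()

def dc_pmi_answer(prompt: str, domain_premise: str, options: list[str]) -> int:
    """argmax_i of log P(y_i | x) - log P(y_i | baseline), as in the
    equation above; candidates are scored independently."""
    scores = [answer_logprob(prompt, y) - answer_logprob(domain_premise, y)
              for y in options]
    return max(range(len(options)), key=scores.__getitem__)
```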
To facilitate comparison between content conditions, our problems are created as _matched sets_. We model this shared variance statistically via random effects terms for sets of stimuli. We use a mixed-effects approach, which allows for modeling the hierarchical structure of the data Gelman (2006). Mixed-effects models are commonly used to analyze linguistic data (e.g., Matuschek et al. (2017); Baayen et al. (2008)) and make it possible to generalize performance beyond a specific set of problems Clark (1973).
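The paper's analyses fit mixed-effects logistic models with random item effects; as a simplified fixed-effects stand-in, the sketch below shows how odds ratios and Wald \(z\)-values of the kind reported in the Results section fall out of a logistic regression in statsmodels. The data frame and its columns are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({                      # hypothetical item-level records
    "correct": rng.integers(0, 2, n),    # 1 = model picked the right cards
    "social": rng.integers(0, 2, n),     # social rule vs non-social rule
    "shuffled": rng.integers(0, 2, n),   # shuffled vs realistic content
})

# Fixed-effects analogue of the reported model: rule status x content type.
fit = smf.logit("correct ~ social * shuffled", data=df).fit(disp=False)
print(pd.DataFrame({
    "OR": np.exp(fit.params),  # odds ratio = exp(coefficient)
    "z": fit.tvalues,          # Wald z-statistics for a logit fit
    "p": fit.pvalues,
}))
```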
## Results
### Analysis 1 Results
For our first analysis, we evaluate two different crossed factors: social rule status and content type, using domain conditional PMI as our scoring metric. For content type, we evaluate **arbitrary**, **shuffled**, and **realistic** rules. The realistic group contains social rules and non-social rules. We find a significant beneficial main effect for realistic rules compared to shuffled rules (odds ratio (OR) = 1.30, 95% CI=[1.05 - 1.62], \(z=2.29\), \(p<0.05\)). We find support for an interaction between social rule status and content type (OR = 1.59, 95% CI=[1.26 - 2.02], \(z=3.97\), \(p<0.001\)).
Factors for LLM and familiarity do not improve model fit, suggesting that the overall pattern of results does not significantly differ between LLMs or between familiar and unfamiliar problems. Overall performance is illustrated in Figure 2; see Figure 3 for the interaction. Performance remains rather low overall.
Follow-up tests for the interaction find significant differences between realistic non-social rules and realistic social rules (OR = 0.33, 95% CI=[0.22 - 0.44], \(z=-3.34\), \(p<0.001\)), between shuffled social rules and shuffled non-social rules (OR = 2.25, 95% CI=[1.47 - 3.03], \(z=2.35\), \(p<0.05\)), and between realistic social rules and shuffled social rules (OR = 0.22, 95% CI=[0.15 - 0.29], \(z=-4.382\), \(p<0.001\)). The interaction is plotted in Figure 3.
### Analysis 1 Discussion
Initial results from analysis 1 do seem to replicate the general _pattern_ of results demonstrated for human subjects: performance is better for social rule problems than non-social rule problems. We also find an effect for switched social rules such that switched social rules have considerably lower performance than standard social rule problems. This effect is similar to one reported for human subjects Cosmides (1989).
However, we do not replicate the _magnitude_ of the content effect. While humans find abstract problems quite difficult, they successfully solve social rule problems approximately 70% of the time Griggs and Cox (1982). In comparison, the LLMs solve social rule problems approximately 30% of the time when using domain conditional PMI scoring. Comparison with traditional highest probability scoring indicates that this pattern of results is not due to domain conditional scoring; highest probability scoring produces worse overall performance and the overall pattern of results is similar.
Additionally, we find an interaction between social rule status and content type. As observed here, the pattern of human subject responses predicts that performance should be low for shuffled social rules compared to standard social rules Cosmides (1989). However, human subject responses do not predict the observed differences between shuffled social rules and shuffled non-social rules. Performance for LLMs is influenced by problem content in a manner that is not parallel to human subject results.
### Analysis 2 Results
A follow-up experiment examines whether alternative presentation formats may improve performance. In addition to the standard presentation format (**classic**), we investigate three additional formats. In the **front** condition, the problems include descriptions of the front of each card. In the **back** condition, the problems include descriptions of the hypothetical category of the item on the back of the card. In the **both** condition, the problems include descriptions of both the front and the hypothetical category on the back of the card. We created alternative formats for all the content types: **realistic social rules**, **realistic non-social rules**, **shuffled
Figure 2: Model performance by content type for **arbitrary** (AR), **shuffled** (SH), and **realistic** (RE) rules. RE contains both social and non-social rules. We do not find effects for LLM or familiarity, thus performance is collapsed. Relative to arbitrary content, most models result in a benefit for realistic rules, with mixed influences of shuffling.
social rules**, and **shuffled non-social rules**. We use domain conditional PMI as our scoring metric.
The best fitting statistical model includes an interaction between presentation format, social rule status, and content type plus a random effect for item instances. Adding factors for LLM and problem familiarity did not improve overall model fit, suggesting that performance does not vary substantially by model or by familiarity. We find significant main effects for classic versus front presentation format (OR = 1.19, 95% CI: [1.06 - 1.34], \(z=2.79\), \(p<0.01\)), front versus back presentation format (OR = 1.28, 95% CI: [1.09 - 1.50], \(z=3.17\), \(p<0.01\)) and back versus both presentation format (OR = 1.15, 95% CI: [1.00 - 1.31], \(z=1.98\), \(p<0.05\)).
Focusing further on presentation formats, we also find significant interactions between the front versus back presentation format and social rule status (OR = 1.22, 95% CI: [1.09 - 1.40], \(z=2.60\), \(p<0.01\)) and back versus both presentation format and social rule status (OR=1.25, 95% CI: [1.09 - 1.44], \(z=3.30\), \(p<0.001\)). For shuffled versus realistic rules, we find significant interactions between classic and front presentation formats (OR=1.22, 95% CI: [1.08 - 1.37], \(z=2.96\), \(p<0.01\)) and front versus back presentation formats (OR=1.17, 95% CI: [1.02 - 1.34], \(z=2.04\), \(p<0.05\)). We find a significant interaction between social rule status and shuffled versus realistic rules (OR = 1.32, 95% CI: [1.17 - 1.48], \(z=4.66\), \(p<0.001\)). No three-way interactions were significant (\(z\)-values \(=(1.92,1.88,1.50)\), all \(p>0.05\)). See Figures 5 and 6 for plots of the interactions.
For the interaction between front versus back presentation format and social rule status, none of the follow up tests were significant (ORs=[1.35, 1.01, 1.41, 1.06], \(z\)-values = [1.94, 0.06, 1.90, 0.33], all \(p>0.05\)). For the interaction between back versus both and social rule status, none of the follow up tests were significant (ORs=[1.33, 1.28,1.21, 1.26], \(z\)-values = [1.70, 1.25, 1.04, 1.43], all \(p>0.05\)). For the classic versus front presentation format and shuffled versus realistic interaction, there was a significant difference between shuffled and realistic rules for the classic presentation format (OR = 1.56, 95% CI=[1.29-1.83], \(z=2.5\), \(p<0.05\)). Other tests were non-significant (ORs=[1.08, 0.97, 1.39], \(z\)-values = [0.46, -0.16, 1.89], all \(p>0.05\)). For the front versus back presentation format and shuffled versus realistic interaction, all follow up tests were non-significant (ORs=[1.03, 1.08, 1.33, 1.19], \(z\)-values = [0.13,0.41,1.53,0.95], all \(p>0.05\)).
For the interaction between social rule status and shuffled versus realistic rules, the follow up test between shuffled and realistic non-social rules was significant (OR = 1.68, 95% CI=[1.39 - 1.96], \(z=3.049\), \(p<0.001\)). The follow up test between realistic social rules and shuffled social rules was significant (OR = 1.84, 95% CI=[1.53-2.15], \(z=3.551\), \(p<0.001\)). The other two tests were not significant.
We conduct a follow-up set of comparisons to examine differences within content types (shuffled or realistic). For shuffled problems, only the effect for social rules with the both presentation format was significant (OR = 0.60,
Figure 4: Performance across all models for **arbitrary** (AR), **shuffled** (SH), and **realistic** (RE) rules. The realistic category contains both social rules and non-social rules. We collapse across LLM and familiarity.
Figure 5: Interaction between presentation format (**classic**, **front**, **back**, or **both**), content type (**shuffled** (SH) or **realistic** (RE)), and social rule status (**social rule** or **non-social rule**) broken out by presentation format.
Figure 3: Interaction between content type and social rule status for Analysis 1. Content type: **arbitrary** (AR), **shuffled** (SH), or **realistic** (RE) rules. The realistic category contains social rules and non-social rules. Social rule status: **social rule** or **non-social rule** problems. We do not find effects for LLM or familiarity, thus performance is collapsed.
\(z=-2.78\), \(p<0.05\)). For realistic problems, effects for front non-social rules (OR = 0.65, \(z=-2.40\), \(p<0.05\)), back non-social rules (OR = 0.61, \(z=-2.73\), \(p<0.05\)), classic social rules (OR = 2.35, \(z=5.95\), \(p<0.001\)), and front social rules were significant (OR = 1.61, \(z=3.20\), \(p<0.01\)).
### Analysis 2 Discussion
In general, we do not find performance improvements for different presentation formats. Results suggest that there are some interactions between presentation format and problem content; however, most of our follow up tests were not significant. A follow up analysis of treatment effects broken out by content type suggests that presentation formats have more of an effect on realistic rules than shuffled rules. Such interactions are not anticipated in human performance data.
### Analysis 3 Results
For analysis 3, we examine incorrect answers, scored with domain conditional PMI, across all conditions. Specifically, we evaluate whether the tendency of the selected answer to contain any antecedent card varies according to content type, presentation format, or social rule status.
The best-fitting statistical model contains a main effect for social rule status and an interaction between presentation format and content type, plus a random effect for overall item. A term for LLM did not improve model fit. For content type, we find significant differences between arbitrary versus non-arbitrary problems (OR = 1.53, 95% CI=[1.23 - 1.90], \(z=3.74\), \(p<0.001\)) and between shuffled and realistic problems (OR = 1.30, 95% CI=[1.14 - 1.50], \(z=3.59\), \(p<0.001\)). For presentation formats, we find significant main effects for classic versus front presentation formats (OR = 1.34, 95% CI=[1.15 - 1.57], \(z=3.76\), \(p<0.001\)), front versus back presentation formats (OR = 1.29, 95% CI=[1.10 - 1.51], \(z=3.10\), \(p<0.01\)), and back versus both presentation formats (OR = 1.23, 95% CI=[1.07 - 1.41], \(z=3.06\), \(p<0.01\)). The main effect for social rule status was not significant.
We find significant interactions between arbitrary and non-arbitrary problems and classic versus front presentation formats (OR=1.28, 95% CI=[1.03 - 1.59], \(z=2.17\), \(p<0.05\)), shuffled versus realistic problems and classic versus front presentation formats (OR = 1.29, 95% CI=[1.08 - 1.54], \(z=2.74\), \(p<0.01\)), shuffled versus realistic problems and front versus back formats (OR=1.28, 95% CI=[1.05 - 1.56], \(z=2.54\), \(p<0.05\)), shuffled versus realistic problems and back versus both formats (OR=1.33, 95% CI=[1.14-1.56], \(z=3.61\), \(p<0.001\)).
For the interaction between arbitrary and non-arbitrary problems and classic versus front presentation formats, follow up tests between arbitrary classic versus non-arbitrary classic and non-arbitrary classic versus non-arbitrary front were significant (ORs=[2.73, 0.57], 95% CIs: [[1.49-5.19], [0.39-0.83]], \(z\)-values: [4.09, -3.68], all \(p<0.001\)). For the interaction between shuffled and realistic problems and classic versus front presentation formats, the tests between shuffled classic and realistic classic, shuffled front and realistic front, and realistic front and realistic classic were significant (ORs=[0.33, 0.59, 0.25], 95% CIs=[[0.17-0.65], [0.34-1.0], [0.13-0.49]], \(z\)-values \(=[-4.091,-2.35,-5.23]\), \(p\)-values \(<[0.001,0.05,0.001]\)). For the interaction between shuffled and realistic problems and front versus back presentation formats, all four follow up tests were significant (shuffled front versus realistic front: OR = 0.59, 95% CI=[0.34-1.0], \(z=-2.35\), \(p<0.05\); shuffled back versus realistic back: OR = 0.53, 95% CI=[0.30-0.93], \(z=-2.77\), \(p<0.05\); shuffled back versus realistic front: OR = 0.62, 95% CI=[0.36-1.0], \(z=-2.09\), \(p<0.05\); shuffled front versus realistic back: OR = 0.50, 95% CI=[0.29-0.88], \(z=-3.03\), \(p<0.01\)). For the shuffled versus realistic and back versus both interaction, follow up tests between shuffled back versus realistic back and shuffled both versus realistic back were significant (ORs=[0.53, 0.58], 95% CIs: [[0.30-0.93], [0.33-1.0]], \(z\)-values: [-2.77, -2.41], all \(p<0.05\)). Interaction plots are in Figure 7.
Figure 6: Interaction between presentation format (**classic**, **front**, **back**, or **both**), content type (**shuffled** (SH) or **realistic** (RE)), and social rule status (**social rule** or **non social rule**) broken out by content type.
Figure 7: Evaluation of whether the LLMs select an antecedent card. Content type: **arbitrary** (AR), **shuffled** (SH), and **familiar** (FM). Presentation formats: **classic**, **front**, **back**, and **both**. Social rule status: **non-social rule**, **social rule**. Collapsed over LLM.
### Analysis 3 Discussion
Results from analysis 3 are particularly interesting because there is limited variance in human performance according to this metric. Regardless of content type, human subjects select at least one antecedent card [12, 13]. Even for conditions that influence antecedent card selection, the overwhelming tendency is for participants to select the alternative antecedent card [10]. In contrast, our results suggest that whether LLMs select antecedent cards varies significantly according to content type and presentation format.
Despite differences in training datasets, tasks, and model architectures, we do not find any effects for LLM. Additionally, the interaction between content type and presentation format suggests that different presentation formats have differential effects on antecedent selection for different content types. LLMs may benefit from different types of formatting depending on the content of the reasoning task.
## Discussion
We set out to expand the research base on evaluating the reasoning capabilities of LLMs with a classic experiment contrasting a priori conditions. We do replicate some effects found in the literature for human subjects. Performance is higher for familiar problems than arbitrary ones, and social rules yield higher performance than non-social rules. However, we do not replicate the magnitude of the effects; social content does not benefit LLM performance as might be expected based on training corpus content.
We do find effects for different presentation formats; however, they do not improve performance. For shuffled problems, the specific presentation format does not make a difference. For realistic problems, we find presentation type does make a difference: the classic presentation type has the highest performance.
However, our systematic, content- and format-controlled experimentation and performance measurement has also revealed a number of inexplicable interactions that appear to be consistent across different LLMs. Given the literature on human performance, an interaction between social rule status and content type is expected. However, many of the other interactions are not expected and are inconsistent with human performance.
We find some evidence that LLMs benefit from different types of presentation formats (depending on the specific content of the problem), as might be expected from popular compensatory prompt engineering efforts. However, it is not immediately clear what types of information facilitate overall reasoning performance. This limits the ability to make general predictions about the conditions under which LLM reasoning is accurate.
In addition to content and presentation interactions, we find that LLMs do not pick antecedent cards at the same rate that human subjects do. Moreover, this behavior is influenced by the task condition: LLMs are less likely to select antecedent cards for arbitrary and realistic problems than for social rules.
Overall low performance is particularly surprising for realistic social and non-social rules as the relationships for solving these problems are plausibly available in the training data of LLMs. Yet we find that all LLMs diverge from documented human performance.
Overall performance is also remarkably consistent across models despite different training data, objectives, and model architectures. Moreover, we find that the interaction results are also independent of the LLM examined. Some consistency between LLMs is to be expected, given that LLMs are trained on human-generated text corpora. However, the commonalities are not consistent with human reasoning. This suggests a common emergent reasoning bias.
Fine-tuning the models for this task would likely improve task performance. We did not fine-tune the models for several reasons. First, the Wason task is intended to be a general task for evaluating reasoning performance. It taps into general knowledge and a set of reasoning skills that transfer to new tasks. Thus, the position that networks specifically trained on large reasoning-task corpora are the best way to evaluate the reasoning performance of models is questionable. This is particularly true given that models often have high performance on the specific training task and limited performance on related reasoning tasks [11].
Second, several researchers have proposed that the Wason task can be solved via linguistic and real world knowledge [13, 14]. Human participants achieve high performance on problems that deal with familiar social rules without prior experience with the task, relying on prior knowledge. However, the knowledge that this task requires is plausibly available in the training data for LLMs. The words used in this task are all common English words. Moreover, many of the relationships between items are plausibly available in training text, particularly for problems that deal with familiar social rules. Yet, performance remained quite low.
Previous work has proposed that ideal tasks for evaluating the reasoning of computational algorithms are those that do not require task-specific training [1, 11]. We concur with this position and suggest that the Wason task is an ideal task in this regard.
## Conclusion
Despite substantial performance improvements on standard benchmark datasets, existing LLMs have considerable room for improvement with regard to many aspects of human intelligence [10, 11]. In these experiments, we specifically investigate two of these aspects: generalized performance on related tasks and generation of answers at the limits of available knowledge.
Overall, our results replicate some of the same patterns found in the cognitive science literature. However, we find inexplicable interactions between problem content and overall performance. Additionally, we find that LLMs are sensitive to different sets of task criteria than human subjects. These criteria are not predictable across conditions and suggest areas where the reasoning of LLMs is not consistent with human capability. |
2309.15899 | Nonstationary Laguerre-Gaussian states vs Landau ones: choose your
fighter | Although the widely used stationary Landau states describe electrons with a
definite orbital angular momentum (OAM) in a magnetic field, it is the lesser
known nonstationary Laguerre-Gaussian (NSLG) states that appropriately
characterize vortex electrons after their transfer from free space to the
field. The reason is that boundary conditions lead to oscillations of the r.m.s.
radius (the transverse coherence length) of the electron packet that has
entered a solenoid. We comprehensively investigate properties of the NSLG
states and establish their connections with the Landau states. For instance, we
show that the transverse coherence length of an electron in the field usually
oscillates around a value greatly exceeding the Landau state coherence length.
We also discuss sensitivity of the NSLG states to a small misalignment between
the propagation axis of a free electron and the field direction, which is
inevitable in a real experiment. It is shown that for any state-of-the-art
parameters, the corrections to the observables are negligible, and the electron
OAM stays robust to a small tilt of the propagation axis. Finally, we draw
analogies between a quantum wave packet and a classical beam of many particles
in phase space, calculating the mean emittance of the NSLG states, which acts
as a measure of their quantum nature. | G. K. Sizykh, A. D. Chaikovskaia, D. V. Grosman, I. I. Pavlov, D. V. Karlovets | 2023-09-27T18:00:00Z | http://arxiv.org/abs/2309.15899v1 | # Nonstationary Laguerre-Gaussian states vs Landau ones: choose your fighter
###### Abstract
Although the widely used stationary Landau states describe electrons with a definite orbital angular momentum (OAM) in a magnetic field, it is the lesser-known nonstationary Laguerre-Gaussian (NSLG) states that appropriately characterize vortex electrons after their transfer from free space to the field. The reason is that boundary conditions lead to oscillations of the r.m.s. radius (the transverse coherence length) of the electron packet that has entered a solenoid. We comprehensively investigate properties of the NSLG states and establish their connections with the Landau states. For instance, we show that the transverse coherence length of an electron in the field usually oscillates around a value greatly exceeding the Landau state coherence length. We also discuss the sensitivity of the NSLG states to a small misalignment between the propagation axis of a free electron and the field direction, which is inevitable in a real experiment. It is shown that for any state-of-the-art parameters, the corrections to the observables are negligible, and the electron OAM stays robust to a small tilt of the propagation axis. Finally, we draw analogies between a quantum wave packet and a classical beam of many particles in phase space, calculating the mean emittance of the NSLG states, which acts as a measure of their quantum nature.
## I Introduction
During the last two decades, electrons with orbital angular momentum (OAM), also known as twisted or vortex electrons, have successfully transitioned from a theoretical concept [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12] to experimental realizations [13; 14; 15; 16; 17; 18] and practical implementations [19; 20; 21; 13]. Nevertheless, this is still a relatively new area in quantum microscopy and particle physics [22]. In particular, the generation and lensing of twisted electrons should be thoroughly investigated so they can become a reliable and useful tool in atomic and particle physics, studies of magnetic properties of materials [19; 20], and other associated fields.
There are two common approaches to obtain twisted electrons: using phase plates [23; 24; 25] and computer-generated holograms [13; 26; 27]. In free space such electrons are modelled by either Bessel beams [2; 3; 5; 7; 9] or Laguerre-Gaussian states [6; 10]. Whereas the former possess a definite energy, they cannot appropriately characterize real-life electron states, as Bessel beams are non-normalizable. Laguerre-Gaussian states, on the contrary, are normalizable non-stationary wave packets with an energy spread.
Regardless of the generation method, control over the twisted beams transfer through magnetic lenses is crucial for their further use as a diagnostic tool or in other applications. There have already been attempts to investigate the propagation of electrons carrying OAM in magnetic fields [3; 4; 28; 29; 30; 31]. Nonetheless, for practical applications, the transfer of a vortex electron across a boundary between free space and a solenoid (in a setup similar to that of Fig. 1) should be taken into account. The boundary conditions are defined by the state of the electron entering the magnetic field from free space or generated in the field, for example, with a magnetized cathode [29]. These conditions crucially affect further propagation of the electron inside the magnetic lens.
Commonly, an electron in a magnetic field is presumed to be in a stationary Landau state [29; 32; 33]. However, it seems highly unlikely that an electron evolves to the Landau state right after crossing the boundary in an infinitesimal period of time. Therefore, the common approach with the Landau states employed, e.g., in [29; 33], seems to have limited applicability. Moreover, we can set the problem of an electron in a constant and homogeneous magnetic field using one of the two distinct gauges for the vector potential \(\mathbf{A}\), both leading to the same field \(\mathbf{H}=\{0,0,H\}\)[34], but to different sets of solutions: namely, Hermite-Gaussian and Laguerre-Gaussian beams. Clearly, these are two distinct physical states with different quantum numbers, and it is the boundary (or initial) conditions that determine the choice of the gauge and of the electron quantum state. Here we argue that, generally, it is _nonstationary Laguerre-Gaussian_ (NSLG) states rather than the Landau ones that correctly describe the
Figure 1: Transfer of a free Laguerre-Gaussian electron through a magnetic lens. \(z_{\text{g}}\) and \(z_{0}\) are the positions of the electron source and the boundary, respectively. |
2309.13556 | LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning | Current high-performance semantic segmentation models are purely data-driven
sub-symbolic approaches and blind to the structured nature of the visual world.
This is in stark contrast to human cognition which abstracts visual perceptions
at multiple levels and conducts symbolic reasoning with such structured
abstraction. To fill these fundamental gaps, we devise LOGICSEG, a holistic
visual semantic parser that integrates neural inductive learning and logic
reasoning with both rich data and symbolic knowledge. In particular, the
semantic concepts of interest are structured as a hierarchy, from which a set
of constraints are derived for describing the symbolic relations and formalized
as first-order logic rules. After fuzzy logic-based continuous relaxation,
logical formulae are grounded onto data and neural computational graphs, hence
enabling logic-induced network training. During inference, logical constraints
are packaged into an iterative process and injected into the network in a form
of several matrix multiplications, so as to achieve hierarchy-coherent
prediction with logic reasoning. These designs together make LOGICSEG a general
and compact neural-logic machine that is readily integrated into existing
segmentation models. Extensive experiments over four datasets with various
segmentation models and backbones verify the effectiveness and generality of
LOGICSEG. We believe this study opens a new avenue for visual semantic parsing. | Liulei Li, Wenguan Wang, Yi Yang | 2023-09-24T05:43:19Z | http://arxiv.org/abs/2309.13556v2 | # LogicSeg: Parsing Visual Semantics with Neural Logic Learning and Reasoning
###### Abstract
Current high-performance semantic segmentation models are purely data-driven sub-symbolic approaches and blind to the structured nature of the visual world. This is in stark contrast to human cognition which abstracts visual perceptions at multiple levels and conducts symbolic reasoning with such structured abstraction. To fill these fundamental gaps, we devise LogicSeg, a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge. In particular, the semantic concepts of interest are structured as a hierarchy, from which a set of constraints are derived for describing the symbolic relations and formalized as first-order logic rules. After fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training. During inference, logical constraints are packaged into an iterative process and injected into the network in a form of several matrix multiplications, so as to achieve hierarchy-coherent prediction with logic reasoning. These designs together make LogicSeg a general and compact neural-logic machine that is readily integrated into existing segmentation models. Extensive experiments over four datasets with various segmentation models and backbones verify the effectiveness and generality of LogicSeg. We believe this study opens a new avenue for visual semantic parsing.
## 1 Introduction
Interpreting high-level semantic concepts of visual stimuli is an integral aspect of human perception and cognition, and has been a subject of interest in computer vision for nearly as long as this discipline has existed. As an exemplar task of visual semantic interpretation, _semantic segmentation_ aims to group pixels into different semantic units. Progress in this field has been notable since the seminal work of fully convolutional networks (FCNs) [1] and has been further advanced by the recent launch of fully attentional networks (Transformer) [2].
Despite these technological strides, we still observe that current prevalent segmentation systems lack in-depth reflection on some intrinsic nature of human cognition. **First**, standard segmentation systems simply assume the semantic concepts in the set of interest have no underlying relation and predict all these concepts _exclusively_. By contrast, humans interpret a scene by components. For example in Fig. 1, we can effortlessly recognize many pieces of furniture, such as chairs and tables, and identify various utensils, _e.g._, bottles and plates. Such capacity for structured understanding of visual semantics is an innate aspect of human perception [3], complies with the way we organize knowledge [4, 5], and has a close relation to many metacognitive skills including _compositional generalization_ (_i.e._, making infinite use of finite means) [6], _systematicity_ (_i.e._, cognitive capacity comes in groups of related behaviours) [7], and _interpretability_ (_i.e._, interpreting complex concepts with simpler ones) [8, 9]. Despite its significance and ubiquity, surprisingly little has been done on the computational modeling
Figure 1: (a) We humans abstract our perception in a structured manner, and conduct reasoning through symbol manipulation over such multi-level abstraction. (b) We aim to _holistically_ interpret visual semantics, through the integration of both data-driven sub-symbolic learning and symbolic knowledge-based logic reasoning.
of structured visual perception in the segmentation literature. Though exceptions exist [10, 11, 12, 13, 14], in general they are scattered, lacking systematic study. **Second**, the latest semantic segmentation systems, label structure-aware or not, have developed a pure sub-symbolic learning approach. They enjoy the advantages of robust distributed representation of concept entities, but struggle with explicit reasoning over the relations among entities via discrete symbolic representations [15]. Nevertheless, studies in cognition suggest that our perception works at multiple levels of semantic abstraction [16], intertwined with logical reasoning through manipulation of symbolic knowledge/concepts [17]. For example, after recognizing many utensils from Fig. 1, we know the scene is more likely a kitchen than a bathroom or gym. This judgement comes as a result of reasoning with some abstract knowledge, such as "_utensils typically appear in the kitchen_" and "_utensils are seldom seen in the bathroom_," which are generalized from our daily experience. The judgement of the scene type may become a belief and in turn cause reallocation of our visual attention [18], hence driving us to find out more relevant details, such as small forks.
Filling the gaps identified above calls for a fundamental paradigm shift: **i)** moving away from pixel-wise 'flat' classification towards semantic structure-aware parsing; and **ii)** moving away from the extreme of pure distributed representation learning towards an ambitious hybrid which combines both powerful sub-symbolic learning and principled symbolic reasoning. To embrace this change, we develop LogicSeg, a structured visual parser which exploits neural computing and symbolic logic in a neural-symbolic framework for holistic visual semantic learning and reasoning. In particular, given a set of hierarchically-organized semantic concepts as background knowledge and parsing target, we first use _first-order logic_, a powerful declarative language, to comprehensively specify relations among semantic classes. After _fuzzy logic_ based relaxation, the logical formulae of hierarchy constraints can be grounded on data. During training, each logical constraint is converted into a differentiable loss function for gradient descent optimization. During inference, the logical constraints are embedded into an iterative process and calculated in matrix form. This not only ensures the observance of the compositional semantic structure but also binds logic reasoning into network feed-forward prediction.
By accommodating logic-based symbolic rules into network training and inference, our LogicSeg **i)** blends statistical learning with symbolic reasoning, **ii)** obtains better performance, and **iii)** guarantees that its parsing behavior complies with the logically specified symbolic knowledge. We also remark that our study is relevant to a field of research called _neural-symbolic computing_ (NSC) [19, 20, 21]. With the promise of integrating two critical cognitive abilities [22]: inductive learning (_i.e_., the ability to learn general principles from experience) and deductive reasoning (_i.e_., the ability to draw logical conclusions from what has been learned), NSC has long been a multi-disciplinary research focus and shown superiority in certain application scenarios, such as program generation [23, 24, 25] and question answering [26, 27]. This work unlocks the potential of NSC in visual semantic parsing - a fundamental, challenging, and large-scale vision task.
LogicSeg is a principled framework. It is fully compatible with existing segmentation network architectures, with only minor modification to the classification head and a plug-and-play logic-induced inference module. We perform experiments on four datasets covering wide application scenarios, including automated-driving (MapillaryVistas 2.0 [28], Cityscapes [29]), object-centric (Pascal-Part [30]), and daily (ADE-20K [31]) scenes. Experimental results show that, on the top of various segmentation models (_i.e_., DeepLabV3+ [32], Mask-2Former [33]) and backbones (_i.e_., ResNet-101 [34], Swin-T [35]), LogicSeg yields solid performance gains (**1.12**%-**3.29**% mIoU) and surpasses prior structured alternatives. The strong generalization and promising performance of LogicSeg evidence the great potential of integrating symbolic reasoning and sub-symbolic learning in machine perception.
## 2 Related Work
**Semantic Segmentation.** Since the proposal of fully convolutional networks (FCNs) [1], research studies in pixel-level semantic interpretation have witnessed phenomenal growth. Tremendous progress has been achieved by, for example, polishing context cues [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52], investigating boundary information [53, 54, 55, 56, 57], incorporating neural attention [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70], adopting data structure-aware learning [71, 72, 73, 74, 75], and automating network engineering [76, 77, 78, 79]. More recently, the engagement of advanced Transformer [2] architecture, which specializes in long-range dependency modeling, is widely viewed as a promising route for further development [33, 80, 81, 82, 83, 84, 85].
Though impressive, existing segmentation solutions are mainly aware of straightforward prediction for _flattened_ labels. They are largely blind to the rich structures among semantic concepts and lack an explicit mechanism for symbol manipulation/logical calculus, which is what distinguishes humans from other animals [86, 87, 88]. This work represents a small yet solid step towards addressing these fundamental limitations through an integrated neural-logic machine, and inspects semantic segmentation from a brand-new standpoint.
**Label Structure-aware Semantic Segmentation.** Till now, only a rather small number of deep learning based segmentation models [10, 13, 89, 90, 91] are built with structured label taxonomies. The origin of this line of research can be traced back to the task of _image parsing_ [90, 92, 93, 94, 95, 96] raised in the pre-deep learning era. Basically, image parsing seeks a holistic explanation of visual observation: scenes can be understood as a sum of novel objects, and the objects can be further broken down into fine-grained parts. In the deep learning era, the majority of structured segmentation models
are dedicated to _human parsing_ [89, 90, 97, 98], which is customized to human-part relation understanding. As for the case of general-purpose segmentation, the literature is far scarcer [10, 11, 12, 91, 13], and many of the existing methods incorporate label taxonomies into the network topology, losing generality [10, 11, 12]. As a notable exception, [13] casts the task as _pixel-wise multi-label classification_ and exploits the class hierarchy for training regularization, with only trivial architectural change.
In a nutshell, previous efforts highlight the limits of standard segmentation models for semantic structures. However, they typically **i)** resolve to stand on the side of sub-symbolic learning, **ii)** make use of only fragments of structured relations (for instance, the exclusion relation is neglected by [13]), **iii)** lack structure-aware inference, and/or **iv)** rely on sophisticated and specialized neural structures. By contrast, we formulate the structured task into a neural-symbolic framework. We derive a comprehensive set of symbolic relational knowledge in the form of first-order logic and deeply embed logical constraints into network training and inference. Our algorithm is a general framework that is applicable to existing standard hierarchy-agnostic segmentation architectures.
**Neuro-Symbolic Computing.** There has been a line of research, called neural-symbolic computing (NSC), that pursues the integration of the symbolic and statistical paradigms of cognition [19, 20, 21]. NSC has a long history, dating back to McCulloch and Pitts' 1943 paper [99], even before AI was recognized as a new scientific field. During the 2000s, NSC received systematic study [100, 101, 102, 103]. Early NSC systems were meticulously designed for hard logic reasoning, but they were far less trainable and fell short when solving real-world problems. NSC has recently ushered in its renaissance, since it shows promise of reconciling statistical learning of neural networks and logic reasoning of abstract knowledge - which is viewed as a key enabler of the next generation of AI [104, 105]. Specifically, recent NSC systems [106, 107] show the possibility for modern neural networks to manipulate abstract knowledge with diverse forms of symbolic representation, including knowledge graphs [108, 109, 110], propositional logic [111, 112, 113], and first-order logic [114, 115, 116]. They also demonstrate successful application in several domains and disciplines, _e.g_., scientific discovery [117, 118], program generation [23, 24, 25], (visual) question-answering [26, 27], robot planning [119, 120, 121], and mathematical reasoning [122, 123, 124].
To date, none of the existing NSC systems reports advanced performance in large-scale vision, to our best knowledge. In this work, we take the lead in promoting and implementing the idea of reconciling the methodologies of symbolic and neural paradigms in visual semantic interpretation. Moreover, many previous NSC systems only exploit logical constraints during network training [116, 125, 126, 127, 128], while our solution is more favorable as logic rules are involved throughout network training and inference. As a result, impressive performances across diverse challenging datasets are delivered, and in turn, provide solid empirical evidence for the power of NSC.
## 3 Methodology
**Task Setup and Notations.** In this work we are interested in structured visual parsing [13] - a more challenging yet realistic setting for semantic segmentation - where both semantic concepts and their relations are considered in a form of a tree-shaped class hierarchy \(\mathcal{T}=\langle\mathcal{V},\mathcal{E}\rangle\). The node set \(\mathcal{V}=\cup_{l=1}^{L}\mathcal{V}_{l}\) represents the classes/concepts at \(L\) abstraction levels. For instance in Fig. 2(a), the leaf nodes \(\mathcal{V}_{1}\) are the finest classes (_e.g._, chair, pot), while the internal nodes are higher-level concepts (_e.g._, furniture, utensil), and the roots \(\mathcal{V}_{L}\) are the most abstract ones (_e.g._, object). The edge set \(\mathcal{E}\) encodes relational knowledge among classes. For example, a directed edge \(u\!\rightarrow\!v\!\in\!\mathcal{E}\) denotes a _part-of_ relation between classes \(u,v\!\in\!\mathcal{V}\) in _adjacent_ levels (_e.g._, utensil\(\rightarrow\)pot).
Given \(\mathcal{T}\), the target goal is to assign each pixel a _valid_ root-to-leaf path in \(\mathcal{T}\). For instance, associating a pixel with object\(\rightarrow\)utensil\(\rightarrow\)pot is valid, yet with object\(\rightarrow\) furniture\(\rightarrow\)pot is _invalid_. Thus standard semantic segmentation can be viewed as a specific case of such structured setting -- only assigning pixels with one single class label from the leaf nodes \(\mathcal{V}_{1}\) without considering the hierarchy.
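The sketch below (Python, ours; the three-level hierarchy is an illustrative fragment of \(\mathcal{T}\)) encodes parent links for the class taxonomy and checks whether a root-to-leaf label path is valid in the sense above.

```python
# Parent links encode the edge set E of an illustrative fragment of T.
PARENT = {
    "bed": "furniture", "chair": "furniture", "table": "furniture",
    "pot": "utensil", "plate": "utensil",
    "furniture": "object", "utensil": "object",
}

def is_valid_path(path: list[str]) -> bool:
    """A root-to-leaf path is valid iff the first node is a root, the
    last node is a leaf, and each consecutive pair is an edge in T."""
    root_ok = path[0] not in PARENT              # roots have no parent
    leaf_ok = path[-1] not in PARENT.values()    # leaves have no child
    edges_ok = all(PARENT.get(c) == p for p, c in zip(path, path[1:]))
    return root_ok and leaf_ok and edges_ok

print(is_valid_path(["object", "utensil", "pot"]))    # True
print(is_valid_path(["object", "furniture", "pot"]))  # False
```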
**Algorithmic Overview.** LogicSeg is a unified, neural-logic learning and reasoning model for visual parsing, supported by large-scale data and the structured symbolic knowledge \(\mathcal{T}\).
* From the _neural_ aspect, LogicSeg is _model-agnostic_. After dense feature extraction, its classification head outputs a total of \(|\mathcal{V}|\) _sigmoid_-normalized scores, _i.e._, \(\boldsymbol{s}\!\in\![0,1]^{|\mathcal{V}|}\), over all the classes \(\mathcal{V}\) for each pixel, like [13] (see the sketch after this list). Here \(|\cdot|\) counts the number of elements. A set of logic rules, derived from \(\mathcal{T}\), is injected into network training and inference.
* From the _logic_ aspect, LogicSeg uses _first-order logic_ to express the complex and abstract relational knowledge in \(\mathcal{T}\). The network is learnt as an approximation of logic predicates by following the logical specifications. Once trained, it conducts iterative reasoning on the basis of logic rules.
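A minimal PyTorch sketch of such a classification head (our illustration; layer sizes are placeholders): unlike the usual per-pixel softmax over leaf classes, every node in \(\mathcal{V}\) gets an independent sigmoid score, so a pixel can be simultaneously positive for a leaf and all of its ancestors.

```python
import torch
import torch.nn as nn

NUM_NODES = 10   # |V|: all classes at all levels (placeholder value)
FEAT_DIM = 256   # channel width of the backbone features (placeholder)

# A 1x1 conv maps dense features to one logit per node of the hierarchy.
head = nn.Conv2d(FEAT_DIM, NUM_NODES, kernel_size=1)

feat = torch.randn(2, FEAT_DIM, 64, 64)   # (batch, C, H, W) features
s = torch.sigmoid(head(feat))             # scores in [0, 1]^|V| per pixel
assert s.shape == (2, NUM_NODES, 64, 64)
```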
After introducing our logic-based visual relational knowledge representation (§3.1), we will elaborate on our logic-induced
Figure 2: Illustration of the (a) class hierarchy \(\mathcal{T}\), and (b-d) abstract relational knowledge specified by first-order logic formulae (§3.1).
network training (SS3.2) and inference (SS3.3) strategies.
### Parsing Visual Semantics with Logic Rules
We formalize our target task -- _learning and reasoning visual semantics with logic_ -- as a triple \(\langle\mathcal{T},\mathcal{X},\Pi\rangle\). \(\mathcal{X}\) is a data collection, _i.e._, \(\mathcal{X}=\{(x_{k},\mathbf{y}_{k})\}_{k=1}^{K}\), where \(x_{k}\) is a pixel data point, and \(\mathbf{y}_{k}\!\in\!\{0,1\}^{\mathcal{V}}\) is its groundtruth symbolic description in terms of the semantic hierarchy \(\mathcal{T}\). \(\Pi\) is a set of hierarchy rules declaratively expressed by _first-order logic_, containing **i)**_constants_, _e.g._, pixel samples \(x_{1},x_{2},\cdots;\) **ii)**_variables_ ranging over constants, _e.g._, \(x\); and **iii)**_unary predicates_, one for each class \(v\!\in\!\mathcal{V}\), which denote the semantics of variables and return _true_ or _false_, _e.g._, \(\texttt{bed}(x)\!=\!\textit{true}\) states the fact that pixel \(x\) belongs to a bed. A logic rule/formula is a sequence of finite predicates with _connectives_ (_i.e._, \(\wedge,\vee,\neg,\Rightarrow\)) and _quantifiers_ (_i.e._, \(\forall\), \(\exists\)), organized in _prenex_ form in our case.
Concretely, \(\Pi\) is composed of three types of rules, _i.e._, _composition_, _decomposition_, and _exclusion_, for comprehensively describing the structured symbolic knowledge \(\mathcal{T}\).
\(\bullet\)_Composition Rule_ (\(C\)-rule) expresses our knowledge about the _composition_ relations between semantic concepts, such as "_bed and chair are (subclasses of) furniture_," in a form of:
\[\begin{split}\forall x(\texttt{bed}(x)& \Rightarrow\texttt{furniture}(x)),\\ \forall x(\texttt{chair}(x)&\Rightarrow\texttt{furniture }(x)),\end{split} \tag{1}\]
where bed, chair, furniture are predicates, and '\(\phi\Rightarrow\varphi\)' indicates \(\varphi\) is a logical consequence of antecedence \(\phi\).
**Definition 3.1.1** (\(C\)-rule).: _If one class is labeled true, its superclass should be labeled true_ (Fig. 2(b)):
\[\forall x(v(x)\Rightarrow p_{v}(x)), \tag{2}\]
where \(p_{v}\) is the parent node of \(v\) in \(\mathcal{T}\), _i.e._, \(p_{v}\!\to\!v\!\in\!\mathcal{E}\) (the tree structure of \(\mathcal{T}\) restricts each class to possess only one superclass). \(C\)-rule generalizes the famous _tree-property_[129, 130].
\(\bullet\)_Decomposition Rule_ (\(D\)-rule) states our knowledge about the _decomposition_ relations among semantic concepts, such as "_furniture is the superclass of bed_, _chair_, \(\cdots\), _table_," via:
\[\begin{split}\forall x(\texttt{furniture}(x)\Rightarrow&\texttt{bed}(x)\vee\texttt{chair}(x)\vee\\ &\cdots\vee\texttt{table}(x)).\end{split} \tag{3}\]
**Definition 3.1.2** (\(D\)-rule).: _If one class is labeled true, at least one of its subclasses should be labeled true_ (Fig. 2(c)):
\[\forall x(v(x)\Rightarrow c_{v}^{1}(x)\lor c_{v}^{2}(x)\vee\cdots\lor c_{v}^{N}(x)), \tag{4}\]
where \(c_{v}^{n}\!\in\!\mathcal{C}_{v}\) are all the child nodes of \(v\) in \(\mathcal{T}\), _i.e._, \(v\!\to\!c_{v}^{n}\!\in\!\mathcal{E}\). \(C\)-rule and \(D\)-rule are not equivalent. For instance in Eq. 1, \(\texttt{bed}(x)\) is sufficient but not necessary for \(\texttt{furniture}(x)\): given the fact "\(x\) is furniture", we cannot conclude "\(x\) is bed".
\(\bullet\)_Exclusion Rule_ (\(E\)-rule) specifies our knowledge about _mutual exclusion_ relations between _sibling_ concepts, such as "_a bed cannot be at the same time a chair_," in a form of:
\[\forall x(\texttt{bed}(x)\Rightarrow\neg\texttt{chair}(x)). \tag{5}\]
**Definition 3.1.3** (\(E\)-rule).: _If one class is labeled true, all its sibling classes should be labeled false_ (Fig. 2(d)):
\[\forall x(v(x)\Rightarrow\neg a_{v}^{1}(x)\wedge\neg a_{v}^{2}(x)\wedge\cdots \wedge\neg a_{v}^{M}(x)), \tag{6}\]
where \(a_{v}^{m}\!\in\!\mathcal{A}_{v}\) are all the peer nodes of \(v\) in \(\mathcal{T}\). Note that \(E\)-rule is ignored by many hierarchy-aware algorithms [13, 131, 132].
### Logic-Induced Training
So far, we have shown that the logic rules \(\Pi\) provide LogicSeg with a flexible language for comprehensively expressing the complex _meronymy_ and _exclusion_ relations among symbolic concepts in the hierarchy \(\mathcal{T}\). Unfortunately, these rules are logic formulae operating on variables (which assume boolean values) and non-differentiable logic symbols (_e.g._, \(\forall\), \(\Rightarrow\)). This prevents integration with end-to-end network learning.
Inspired by [128, 133], a _fuzzy logic_ based _grounding_ process is adopted to interpret logic formulae as differentiable fuzzy relations on real numbers for neural computing (Fig. 3).
**Fuzzy relaxation.** Fuzzy logic is a form of soft probabilistic logic. It deals with reasoning that is approximate instead of fixed and exact; variables have a truth degree that ranges in \([0,1]\): zero and one meaning that the variable is _false_ and _true_ with certainty, respectively [134]. Hence we can ground predicates onto segmentation network outputs. For instance, given a pixel sample \(x\), the corresponding network prediction score w.r.t. class _bed_ serves as the grounding of the predicate \(\texttt{bed}(x)\). Logical connectives, _i.e._, \(\wedge,\vee,\neg,\Rightarrow\), are approximated with _fuzzy operators_, _i.e._, _t-norm_, _t-conorm_, _fuzzy negation_, and _fuzzy implication_. As suggested by [133], we adopt the operators in _Goguen fuzzy logic_[135] and _Gödel fuzzy logic_[136]:
\[\begin{split}\phi\wedge\varphi&=\phi\cdot\varphi, \hskip 14.226378pt\phi\vee\varphi=\texttt{max}(\phi,\varphi),\\ \neg\phi&=1-\phi,\hskip 14.226378pt\phi \Rightarrow\varphi=1-\phi+\phi\cdot\varphi.\end{split} \tag{7}\]
Figure 3: Illustration of our logic-induced network training (§3.2). For clarity, the pixel-wise binary cross-entropy loss is omitted.
The existential quantifier \(\exists\) and universal quantifier \(\forall\) are approximated in a form of generalized mean:
\[\begin{array}{rl}\exists x\phi(x)=&(\frac{1}{K}{\sum_{k=1}^{K}}\phi(x_{k})^{q})^{ \frac{1}{q}},\\ \forall x\phi(x)=&1-(\frac{1}{K}{\sum_{k=1}^{K}}(1-\phi(x_{k}))^{q})^{\frac{1}{q }},\end{array} \tag{8}\]
where \(q\!\in\!\mathbb{Z}\). Please refer to [128, 133] for a detailed discussion regarding the rationale behind such approximations of \(\exists\) and \(\forall\).
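To make the grounding tangible, the following NumPy sketch (our own illustration; function names are ours, and a real implementation would operate on dense score tensors) implements the fuzzy connectives of Eq. 7 and the generalized-mean quantifiers of Eq. 8.

```python
import numpy as np

# Fuzzy relaxation of the logical connectives (Eq. 7); truth degrees in [0, 1].
def f_and(a, b):  return a * b               # Goguen t-norm
def f_or(a, b):   return np.maximum(a, b)    # Goedel t-conorm
def f_not(a):     return 1.0 - a             # fuzzy negation
def f_imp(a, b):  return 1.0 - a + a * b     # fuzzy implication

# Generalized-mean approximation of the quantifiers (Eq. 8).
def f_exists(phi, q=5):
    return np.mean(phi ** q) ** (1.0 / q)

def f_forall(phi, q=5):
    return 1.0 - np.mean((1.0 - phi) ** q) ** (1.0 / q)

# Grounded truth degrees of one predicate over a batch of pixel samples.
phi = np.array([0.9, 0.8, 0.95, 0.7])
print(f_forall(phi), f_exists(phi))  # forall <= exists, both near 1 here
```

A higher \(q\) pushes the generalized mean toward the extreme values, which is why \(q\) controls how strongly \(\forall\) focuses on outliers (see the ablation in §4).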
**Logic Loss.** With fuzzy relaxation, we are ready to convert our first-order logic rules \(\Pi\) into loss functions.
\(\bullet\) **_C_-rule Loss.** For a non-root node \(v\in\mathcal{V}/\mathcal{V}_{L}\), its corresponding _C_-rule (_cf._ Eq. 2) is grounded as:
\[\mathcal{G}_{C}(v)\!=\!1-\Big{(}\tfrac{1}{K}{\sum_{k=1}^{K}}(\mathbf{s}_{k}[v]- \mathbf{s}_{k}[v]\cdot\mathbf{s}_{k}[p_{v}])^{q}\Big{)}^{\frac{1}{q}}, \tag{9}\]
where \(\mathbf{s}_{k}[v]\!\in\![0,1]\) refers to the score (confidence) of \(x_{k}\) for class \(v\). Then the _C_-rule based training objective is given as:
\[\mathcal{L}_{C}\!=\!\tfrac{1}{|\mathcal{V}|-|\mathcal{V}_{L}|}{\sum_{v\in \mathcal{V}/\mathcal{V}_{L}}}1-\mathcal{G}_{C}(v). \tag{10}\]
\(\bullet\) **_D_-rule Loss.** For a non-leaf node \(v\in\mathcal{V}/\mathcal{V}_{1}\), its corresponding _D_-rule (_cf._ Eq. 4) is grounded as:
\[\mathcal{G}_{D}(v)\!=\!1-\Big{(}\tfrac{1}{K}{\sum_{k=1}^{K}}(\mathbf{s}_{k}[v]-\bm {s}_{k}[v]\!\cdot\!\max(\{\mathbf{s}_{k}[c_{v}^{n}]\}_{n}))^{q}\Big{)}^{\!\frac{1}{ q}}. \tag{11}\]
Similarly, our _D_-rule loss is given as:

\[\mathcal{L}_{D}\!=\!\tfrac{1}{|\mathcal{V}|-|\mathcal{V}_{1}|}{\sum_{v\in\mathcal{V}/\mathcal{V}_{1}}}1-\mathcal{G}_{D}(v). \tag{12}\]
\(\bullet\) **_E_-rule Loss.** During the grounding of _E_-rule (_cf._ Eq. 6), we first translate such a _one-vs-all_ exclusion statement into a semantically equivalent expression, _i.e._, the aggregation of multiple _one-vs-one_ exclusions (\(\{v(x)\!\Rightarrow\!\neg a_{v}^{1}(x),\;\cdots,\;v(x)\!\Rightarrow\!\neg a_{v}^{M}(x)\}\)). Adopting such a translation avoids the _sorites paradox_, _i.e._, a long chain of only slightly unreliable deductions can be very unreliable [137] (_e.g._, \(0.9^{10}\approx 0.34\)), which would otherwise occur when approximating a long series of \(\wedge\). Then, for each node \(v\!\in\!\mathcal{V}\), its corresponding _E_-rule is grounded as:
\[\mathcal{G}_{E}(v)\!=\!1\!-\tfrac{1}{\!M}{\sum_{m=1}^{M}}\!\Big{(} \tfrac{1}{K}{\sum_{k=1}^{K}}(\mathbf{s}_{k}[v]\!\cdot\!\mathbf{s}_{k}[a_{v}^{m}])^{q} \Big{)}^{\!\frac{1}{q}}. \tag{13}\]
Similarly, our _E_-rule loss is given as:
\[\mathcal{L}_{E}\!=\!\tfrac{1}{|\mathcal{V}|}{\sum_{v\in\mathcal{V}}}\,1- \mathcal{G}_{E}(v). \tag{14}\]
In this way, the gradient from the logic losses can be backpropagated into the network. The network is essentially learned as neural predicates obeying the logical constraints. It is worth mentioning that, due to large-scale training, it is infeasible to compute the full semantics of \(\forall\); batch-training can be viewed as a sampling-based approximation [133].
Our overall training target is organized as:
\[\mathcal{L}\!=\!\alpha(\mathcal{L}_{C}\!+\!\mathcal{L}_{D}\!+\!\mathcal{L}_{E })\!+\!\tfrac{1}{K}{\sum_{k=1}^{K}}\mathcal{L}_{\text{BCE}}(\mathbf{s}_{k},\mathbf{y }_{k}). \tag{15}\]
Here \(\mathbf{y}\!\in\!\{0,1\}^{|\mathcal{V}|}\) is the groundtruth, \(\mathcal{L}_{\text{BCE}}\) is the binary cross-entropy loss, and the coefficient is empirically set as \(\alpha\!=\!0.2\).
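As a concrete illustration of Eqs. 9-15, the following NumPy sketch computes the three rule losses on a toy six-class hierarchy. The index maps (`parent`, `children`, `siblings`) and the random scores are hypothetical stand-ins for the real class tree and network outputs; this is a sketch of the grounded losses, not the paper's released implementation.

```python
import numpy as np

# Toy hierarchy: object -> {furniture -> {bed, chair}, utensil -> {pot}}.
parent   = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}           # child -> parent
children = {0: [1, 2], 1: [3, 4], 2: [5]}           # parent -> children
siblings = {0: [], 1: [2], 2: [1], 3: [4], 4: [3], 5: []}
K, q = 8, 5                                         # pixels, mean exponent
s = np.random.rand(K, 6)                            # sigmoid scores s_k[v]

def G_C(v):  # grounded C-rule (Eq. 9)
    g = s[:, v] - s[:, v] * s[:, parent[v]]
    return 1 - np.mean(g ** q) ** (1 / q)

def G_D(v):  # grounded D-rule (Eq. 11)
    g = s[:, v] - s[:, v] * s[:, children[v]].max(axis=1)
    return 1 - np.mean(g ** q) ** (1 / q)

def G_E(v):  # grounded E-rule (Eq. 13), one-vs-one aggregation
    if not siblings[v]:
        return 1.0
    return 1 - np.mean([np.mean((s[:, v] * s[:, a]) ** q) ** (1 / q)
                        for a in siblings[v]])

L_C = np.mean([1 - G_C(v) for v in parent])         # Eq. 10, non-root nodes
L_D = np.mean([1 - G_D(v) for v in children])       # Eq. 12, non-leaf nodes
L_E = np.mean([1 - G_E(v) for v in siblings])       # Eq. 14, all nodes
loss = 0.2 * (L_C + L_D + L_E)                      # logic part of Eq. 15
print(L_C, L_D, L_E, loss)
```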
### Logic-Induced Inference
We just showed that LogicSeg can approximate the predicates by integrating symbolic logic constraints into large-scale network training. However, during inference, there is no explicit way to ensure alignment between the class hierarchy \(\mathcal{T}\) and the network prediction, nor to conduct sound reasoning with the logic rules \(\Pi\). We thus put forward _logic-induced reasoning_ (Fig. 4), where the logic rules \(\Pi\) are encapsulated into an iterative optimization process. Such a process is non-learnable, based only on matrix operations, and thus can be seamlessly embedded into network feed-forward inference, yielding an elegant yet compact neural-logic visual parser.
Our solution is built upon the classic _message passing_ algorithm which is to estimate the marginal likelihood for a given tree structure by _iteratively_ exchanging messages between nodes. Specifically, at each iteration, for each pixel sample \(x_{k}\), node \(v\!\in\!\mathcal{V}\) sends different types of messages to different neighboring nodes, according to the logic rules \(\Pi\):
\[\begin{split}\text{\it C-message}\!:&\,h_{v,p_{v}}^{C}\!=\!v(x_{k})\Rightarrow p_{v}(x_{k})\!=\!1\!-\!\mathbf{s}_{k}[v]\!+\!\mathbf{s}_{k}[v]\!\cdot\!\mathbf{s}_{k}[p_{v}],\\ \text{\it D-message}\!:&\,h_{v,c_{v}}^{D}\!=\!v(x_{k})\!\Rightarrow\!c_{v}^{1}(x_{k})\!\vee\!\cdots\!\vee\!c_{v}^{N}(x_{k})\!=\!1\!-\!\mathbf{s}_{k}[v]\!+\!\mathbf{s}_{k}[v]\!\cdot\!\max(\{\mathbf{s}_{k}[c_{v}^{n}]\}_{n}),\\ \text{\it E-message}\!:&\,h_{v,a_{v}}^{E}\!=\!-1\!\cdot\!\big{(}v(x_{k})\!\Rightarrow\!\neg a_{v}^{1}(x_{k})\!\wedge\!\cdots\!\wedge\!\neg a_{v}^{M}(x_{k})\big{)}\!=\!-\big{(}1\!-\!\tfrac{1}{M}{\sum_{m=1}^{M}}\mathbf{s}_{k}[v]\!\cdot\!\mathbf{s}_{k}[a_{v}^{m}]\big{)}.\end{split} \tag{16}\]
Figure 4: Illustration of our logic-induced inference (§3.3). (a-b) Iterative reasoning is made by exchanging and absorbing messages between nodes, following the logic rules \(\Pi\). For clarity, we only show the message creation (Eq.16) and aggregation (Eq.17) stages for one single node. (c) Structured parsing (Eq.18) is conducted by selecting the top-scoring path \(\mathcal{P}^{*}\) (highlighted in red) after logic-guided iterative reasoning. (d) With logic-induced inference, LogicSeg is able to generate more accurate and hierarchy-compliant predictions.
Node \(v\) is updated by aggregating the received messages:
\[\begin{split}\mathbf{s}_{k}[v]\!\leftarrow\!\mathbf{s}_{k}[v]\!+\!\frac{1}{N}\!\sum\nolimits_{c^{n}_{v}\in\mathcal{C}_{v}}\mathbf{s}_{k}[c^{n}_{v}]\!\cdot\!h^{C}_{c^{n}_{v},v}\!+\!\mathbf{s}_{k}[p_{v}]\!\cdot\!h^{D}_{p_{v},v}\\ +\frac{1}{M}\!\sum\nolimits_{a^{m}_{v}\in\mathcal{A}_{v}}\!\mathbf{s}_{k}[a^{m}_{v}]\!\cdot\!h^{E}_{a^{m}_{v},v}.\end{split} \tag{17}\]
Each message (_cf._ Eq. 16) accounts for the certainty degree that \(v\) satisfies the corresponding logic rule (_cf._ SS3.1) when being grounded on pixel data point \(x_{k}\), with fuzzy logic based approximation (_cf._ SS3.2). Intuitively, the more certainty a node meets the logic rules, the more message it can propagate to other nodes. Note that, \(v\) creates a _negative_ message \(h^{E}_{v,a^{m}_{v}}\) to "suppress" other peer nodes due to their exclusion relations. In Eq. 17, the received messages are weighted by the confidence of the source nodes themselves - the grounded predicates, _i.e_., \(\mathbf{s}_{k}[c^{n}_{v}]\), \(\mathbf{s}_{k}[p_{v}]\), and \(\mathbf{s}_{k}[a^{m}_{v}]\). After each iteration, the score vector \(\mathbf{s}_{k}\) is _softmax_-normalized per hierarchy level.
Finally, each pixel \(x_{k}\) is associated with the top-scoring _root-to-leaf_ path in the hierarchy \(\mathcal{T}\) (red path in Fig. 4(c)):
\[\mathcal{P}^{*}=\{v^{*}_{1},\cdots,v^{*}_{L}\}=\operatorname*{argmax}_{\mathcal{P}\subset\mathcal{T}}\sum\nolimits_{v^{\mathcal{P}}\in\mathcal{P}}\mathbf{s}_{k}[v^{\mathcal{P}}], \tag{18}\]
where \(\mathcal{P}\!=\!\{v^{\mathcal{P}}_{1},\cdots,v^{\mathcal{P}}_{L}\}\subset\mathcal{T}\) indicates a feasible root-to-leaf path in \(\mathcal{T}\), _i.e_., \(\forall v^{\mathcal{P}}_{l},v^{\mathcal{P}}_{l-1}\!\in\!\mathcal{P}\!\Rightarrow\!v^{\mathcal{P}}_{l}\!\rightarrow\!v^{\mathcal{P}}_{l-1}\!\in\!\mathcal{E}\).
It is easy to see that all the logic-induced inference steps (_cf._ Eqs. 16-18) can be formulated in _matrix_ form with only a couple of matrix multiplications (see the corresponding pseudo-code in the supplementary). Hence it is efficient on GPU and can be straightforwardly injected into the network, making LogicSeg a fully-integrated neural-logic machine. In practice, 2-iteration message passing is enough for robust prediction. Through logic-induced reasoning (_cf._ Eq. 17) and hierarchy-aware parsing (_cf._ Eq. 18), LogicSeg is able to **i)** obtain _improved performance_, and **ii)** guarantee that the parsing results _respect the hierarchy \(\mathcal{T}\)_, with **iii)** only a _negligible speed delay_ (about 3.8%). See SS4.4 for related experiments.
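The following NumPy sketch (our own per-pixel illustration of Eqs. 16-18 on the toy hierarchy used earlier; a real implementation would batch these updates as matrix multiplications on GPU) runs two rounds of message passing and then selects the top-scoring valid path.

```python
import numpy as np

# Toy hierarchy; levels are listed from the root (level L) down to the leaves.
parent   = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5]}
siblings = {1: [2], 2: [1], 3: [4], 4: [3]}
levels   = [[0], [1, 2], [3, 4, 5]]
classes  = ["object", "furniture", "utensil", "bed", "chair", "pot"]
s = np.random.rand(6)                      # one pixel's scores over V

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mp_iteration(s):
    """One round of message creation (Eq. 16) and aggregation (Eq. 17)."""
    h_C = {v: 1 - s[v] + s[v] * s[parent[v]] for v in parent}
    h_D = {v: 1 - s[v] + s[v] * max(s[c] for c in children[v])
           for v in children}
    h_E = {v: -(1 - np.mean([s[v] * s[a] for a in sibs]))
           for v, sibs in siblings.items()}
    out = s.copy()
    for v in range(len(s)):
        if v in children:                  # C-messages from the children
            out[v] += np.mean([s[c] * h_C[c] for c in children[v]])
        if v in parent:                    # D-message from the parent
            out[v] += s[parent[v]] * h_D[parent[v]]
        if v in siblings:                  # negative E-messages from peers
            out[v] += np.mean([s[a] * h_E[a] for a in siblings[v]])
    for lvl in levels:                     # softmax-normalize per level
        out[lvl] = softmax(out[lvl])
    return out

for _ in range(2):                         # 2 iterations suffice in practice
    s = mp_iteration(s)

# Eq. 18: select the top-scoring valid root-to-leaf path.
paths = [(0, 1, 3), (0, 1, 4), (0, 2, 5)]  # enumerated from the toy tree
best = max(paths, key=lambda P: sum(s[v] for v in P))
print(" -> ".join(classes[v] for v in best))
```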
## 4 Experiment
### Experimental Setup
**Datasets.** We conduct extensive experiments on four datasets, _i.e_., Mapillary Vistas 2.0[28], Cityscapes [29], Pascal-Part-108[30], and ADE20K [31]. The four datasets are selected to cover the rich application scenarios of semantic segmentation, including urban street segmentation for automated driving (_i.e_., [28, 29]), object part parsing (_i.e_., [30]), and fine-grained understanding of daily scenes (_i.e_., [31]), so as to comprehensively examine the utility of our algorithm.
* **Mapillary Vistas 2.0** is a large-scale urban scene dataset. It contains \(18,000\!/2,000\!/5,000\) images for train/val/test. A three-level semantic hierarchy, covering \(4/16/124\) concepts, is officially provided for dense annotation.
* **Cityscapes** has \(2,975\!/\!500\!/1,524\) finely annotated, urban street images for train/val/test. The label hierarchy consists of \(19\) fine-grained concepts and \(6\) superclasses.
* **Pascal-Part-108** is the largest object part parsing dataset. It consists of \(4,998\!/5,105\) images for train/test. To establish the class hierarchy, we group \(108\) part-level labels into \(20\) object-level categories, as in [138, 139, 140, 141].
* **ADE20K** is a large-scale generic scene parsing dataset. It is divided into \(20,210\!/2,000\!/3,000\) images for train/val/test. It provides pixel-wise annotations for \(150\) fine-grained semantic classes, from which a three-layer label hierarchy (with \(3/14/150\) concepts) can be derived.
**Evaluation Metric.** We adopt the standard metric, mean intersection-over-union (mIoU), for evaluation. For detailed performance analysis, the score is reported for each hierarchy level \(l\) (denoted as mIoU\({}^{l}\)), as suggested by [13, 89].
**Base Models and Competitors.** To demonstrate our wide benefit, we build our algorithm on two well-known segmentation architectures, _i.e_., DeepLabV3+ [32] and Mask2Former [33], with ResNet-101 [34] and Swin-T [35] backbones. For performance comparison, we involve several hierarchy-aware segmentation models [13, 138, 141], and view Hssn [13] as our major rival, since it is a general framework that reports strong results over several datasets, unlike the others, which are dedicated to specific dataset(s) or task setup(s). For comprehensive evaluation, we also include a group of previous hierarchy-agnostic segmentation algorithms [10, 38, 80, 81, 82], whose segmentation results on coarse-grained semantics are obtained by merging the predictions of the corresponding subclasses.
**Training.** For the sake of fairness, we follow the standard training setup in [142, 143, 83, 44]. In particular, we train for \(240\)K\(/80\)K iterations on Mapillary Vistas 2.0/Cityscapes, with batch size \(8\) and crop size \(512\!\times\!1024\); and for \(60\)K\(/160\)K iterations on Pascal-Part-108/ADE20K, with batch size \(16\) and crop size \(512\!\times\!512\). For data augmentation, the images are horizontally flipped and scaled with a random ratio between 0.5 and 2.0. For network optimization, SGD (with initial learning rate 1e-2, momentum 0.9, and weight decay 1e-4) and Adam (with initial learning rate 6e-5 and weight decay 0.01) are used for the CNN-based and neural attention-based models, respectively, with the learning rate scheduled by the polynomial annealing rule. For network initialization, ImageNet [144] pre-trained weights are loaded.
**Testing.** For Mapillary Vistas 2.0 and Cityscapes, we keep the original image aspect ratio but resize the short edge to 1024. Sliding window inference, with the same window shape as the training crop size, is adopted to save memory. For ADE20K and Pascal-Part-108, the short edge is resized to 512 so as to enable one-shot inference for the whole image. As in [146, 68, 83, 89], the performance of all models is reported at multiple scales (\(\{0.5,0.75,1.0,1.25,1.5,1.75\}\)) with horizontal flipping.
**Hyperparameters.** We set \(\alpha\!=\!0.2\) for the loss coefficient (_cf._ Eq. 15), and \(q\!=\!5\) for logic quantifier approximation (_cf._ Eq.8), as suggested by [128]. For network inference, we find 2 iterations of message passing are enough.
### Quantitative Comparison Result
**Mapillary Vistas 2.0 [28] val.** From Table 1 we can observe that our approach provides notable performance gains over the baselines. For example, our algorithm improves the classic DeepLabV3+ [32] by **3.65%/3.42%/3.29%** over the three semantic levels. On top of MaskFormer [83], our algorithm further lifts the scores by **2.42%/2.73%/2.96%**, surpassing previous hierarchy-agnostic models as well as Hssn [13] - a recently proposed hierarchy-aware segmentation model.
**Cityscapes [29] val.** Table 2 again confirms our compelling performance in challenging urban street scenes and our wide benefit for different segmentation models, _i.e_., **1.21%/1.12%** over DeepLabV3+, and **1.35%/1.28%** over MaskFormer. Though both encode concept structures into segmentation, our algorithm greatly outperforms Hssn, suggesting the superiority of our logic reasoning framework.
**Pascal-Part-108 [30] test.** As illustrated by Table 3, our algorithm yields remarkable performance in explaining the compositionality of object-centric semantic structures. Specifically, our algorithm not only consistently boosts the performance of the base segmentation models [32, 33], but also outperforms two strong hierarchy-agnostic competitors [141, 38] as well as three structured alternatives [13, 138, 141].
**ADE20K [31] val.** Table 4 presents our parsing results in general scenes. With a relatively conservative baseline, _i.e_., DeepLabV3+ [32], our algorithm earns **79.60%**, **59.04%**, and **48.46%** in terms of mIoU\({}^{1}\), mIoU\({}^{2}\), and mIoU\({}^{3}\), respectively, solidly overtaking Mask2Former [33], which is built upon a more advanced architecture. When applied to MaskFormer [83], our algorithm achieves **82.45%/62.44%/52.82%**, pushing forward the state-of-the-art.
Taken together, our extensive benchmarking results provide solid evidence that our algorithm successfully unlocks the power of logic reasoning in large-scale visual parsing, and enjoys broad applicability across various task scenarios, segmentation architectures, and backbone networks.
### Diagnostic Experiment

All ablation studies below are based on DeepLabV3+ [32] with ResNet-101 [34] backbone.
**Logic-Induced Training.** We first study the effectiveness of our logic-induced training strategy (SS3.2) in Table 4(a). The 1\({}^{st}\) row reports the results of our baseline model - DeepLabV3+. The 2\({}^{nd}\), 3\({}^{rd}\), and 4\({}^{th}\) rows respectively list the scores obtained by individually adding our _C_-rule loss \(\mathcal{L}_{C}\) (_cf_. Eq. 10), _D_-rule loss \(\mathcal{L}_{D}\) (_cf_. Eq. 12), and _E_-rule loss \(\mathcal{L}_{E}\) (_cf_. Eq. 14). The last row gives the performance of our full loss \(\mathcal{L}\) (_cf_. Eq. 15). We can find that: **i)** Taking each of our logic losses into consideration provides consistent performance gains. This demonstrates that different logic rules describe different properties of the semantic structure, and verifies that the segmentation model can indeed benefit from our proposed logic losses. **ii)** Combining all three logic losses together results in the best performance. This suggests that our logic rules provide a comprehensive description of the relational knowledge in the semantic hierarchy \(\mathcal{T}\), and supports our core idea that exploiting symbolic knowledge is crucial for visual semantic interpretation and can boost sub-symbolic learning.
**Training Speed.** As shown in the last column of Table 4(a), our logic-induced training regime causes a trivial delay (\(\sim\)5.0%).
**Logic-Induced Inference.** We next investigate the impact of our logic-induced inference strategy (SS3.3) in Table 4(b). The 1\({}^{st}\) row reports the results of the raw network feed-forward output. The remaining rows give the scores obtained with different numbers of message-passing iterations (_cf_. Eq. 17). These results demonstrate the efficacy of our strategy and the necessity of incorporating logic reasoning into network inference. We accordingly set 2 iterations as the default to pursue the best performance.
**Inference Speed.** We also report inference speed (fps) in Table 4(b). As seen, our logic-induced inference strategy only slightly slows inference during model deployment (\(\sim\)3.8%).
**Aggregation Coefficient.** For the approximation of the \(\forall\) quantifier (_cf_. Eq. 8), we adopt the generalized mean for stable training, as suggested by [128]. Basically, a higher coefficient \(q\) makes \(\forall\) focus more strongly on outliers. For completeness, the results with different values of \(q\) are reported in Table 4(c).
## 5 Conclusion and Discussion
The creation of intelligent systems that integrate the fundamental cognitive abilities of reasoning and learning has long been viewed as a core challenge for AI [22]. While the community recently witnessed great advances in high-level perception tasks such as visual semantic interpretation, the top-performing solutions are purely driven by sub-symbolic learning, far from such an effective integration. The present study represents an innovative and solid attempt towards closing this gap. By embedding symbolic logic into both network training and inference, a structured and powerful visual semantic parser is delivered. We hope this work can stimulate our community to rethink the current _de facto_ sub-symbolic paradigm and investigate new methodologies, from the perspective of achieving a better understanding of human and machine intelligence.
**Acknowledgements** This work was supported in part by the Australian Research Council (ARC) under Grant DP200100938.
Figure 5: **Visual results (§4.3) on Mapillary Vistas 2.0[28]. _Left_: DeepLabV3+[32] _vs_ LogicSeg; _Right_: Mask2Former[33] _vs_ LogicSeg. |
2309.03745 | Asymptotic growth patterns for class field towers | Let $p$ be an odd prime number. We study growth patterns associated with
finitely ramified Galois groups considered over the various number fields
varying in a $\mathbb{Z}_p$-tower. These Galois groups can be considered as
non-commutative analogues of ray class groups. For certain
$\mathbb{Z}_p$-extensions in which a given prime above $p$ is completely split,
we prove precise asymptotic lower bounds. Our investigations are motivated by
the classical results of Iwasawa, who showed that there are growth patterns for
$p$-primary class numbers of the number fields in a $\mathbb{Z}_p$-tower. | Arindam Bhattacharyya, Vishnu Kadiri, Anwesh Ray | 2023-09-07T14:38:34Z | http://arxiv.org/abs/2309.03745v2 | # Asymptotic growth patterns for class field towers
###### Abstract.
Let \(p\) be an odd prime number. We study growth patterns associated with finitely ramified Galois groups considered over the various number fields varying in a \(\mathbb{Z}_{p}\)-tower. These Galois groups can be considered as non-commutative analogues of ray class groups. For certain \(\mathbb{Z}_{p}\)-extensions in which a given prime above \(p\) is completely split, we prove precise asymptotic lower bounds. Our investigations are motivated by the classical results of Iwasawa, who showed that there are growth patterns for \(p\)-primary class numbers of the number fields in a \(\mathbb{Z}_{p}\)-tower.
## 1. Introduction
### Motivation from Iwasawa theory
Let \(\mathds{L}\) be a number field and \(p\) be an odd prime number. Let \(H_{p}(\mathds{L})\) be the maximal abelian unramified \(p\)-extension of \(\mathds{L}\), and let \(\mathcal{A}_{p}(\mathds{L})\) denote the Galois group \(\operatorname{Gal}(H_{p}(\mathds{L})/\mathds{L})\). By class field theory, \(\mathcal{A}_{p}(\mathds{L})\) is naturally isomorphic to the \(p\)-primary part of the class group of \(\mathds{L}\). Setting \(h_{p}(\mathds{L}):=\#\mathcal{A}_{p}(\mathds{L})\), we refer to \(h_{p}(\mathds{L})\) as the \(p\)-class number of \(\mathds{L}\). We set \(\mathbb{Z}_{p}\) to denote the ring of \(p\)-adic integers. A \(\mathbb{Z}_{p}\)-extension of a number field \(\mathds{L}\) is an infinite Galois extension \(\mathds{L}_{\infty}/\mathds{L}\), such that \(\operatorname{Gal}(\mathds{L}_{\infty}/\mathds{L})\) is topologically isomorphic to \(\mathbb{Z}_{p}\). Given a \(\mathbb{Z}_{p}\)-extension \(\mathds{L}_{\infty}/\mathds{L}\), and \(n\in\mathbb{Z}_{\geq 0}\), we set \(\mathds{L}_{n}/\mathds{L}\) to be the extension such that \(\mathds{L}_{n}\subseteq\mathds{L}_{\infty}\) and \([\mathds{L}_{n}:\mathds{L}]=p^{n}\). The field \(\mathds{L}_{n}\) is the _\(n\)-th layer_; we thus obtain a tower of number fields
\[\mathds{L}=\mathds{L}_{0}\subseteq\mathds{L}_{1}\subseteq\cdots\subseteq \mathds{L}_{n}\subseteq\mathds{L}_{n+1}\subseteq\dots;\]
let \(e_{n}\in\mathbb{Z}_{\geq 0}\) be such that \(p^{e_{n}}=h_{p}(\mathds{L}_{n})\). In his seminal work, Iwasawa [19] showed that there are constants \(\mu,\lambda\in\mathbb{Z}_{\geq 0}\) and \(\nu\in\mathbb{Z}\), such that for all large enough values of \(n\), \(e_{n}=p^{n}\mu+n\lambda+\nu\).
Thus, Iwasawa's results show that there are interesting growth patterns for \(p\)-class numbers in certain infinite towers of number fields. These results motivate the study of asymptotic growth properties of \(p\) Hilbert class field towers along \(\mathbb{Z}_{p}\)-extensions. Throughout, \(p\) will be a fixed odd prime number, and let \(\mathds{L}_{\infty}/\mathds{L}\) be a \(\mathbb{Z}_{p}\)-extension. Set \(\mathcal{F}(\mathds{L}_{n})\) to denote the maximal unramified pro-\(p\) extension of \(\mathds{L}_{n}\), and set \(G_{n}:=\operatorname{Gal}(\mathcal{F}(\mathds{L}_{n})/\mathds{L}_{n})\). We identify the abelianization of \(G_{n}\) with
\[\mathcal{A}_{p}(\mathds{L}_{n})=\operatorname{Gal}(H_{p}(\mathds{L}_{n})/ \mathds{L}_{n}).\]
The field \(\mathcal{F}(\mathds{L}_{n})\) contains a \(p\) Hilbert class field tower over \(\mathds{L}_{n}\). In greater detail, for \(j\in\mathbb{Z}_{\geq 0}\), define \(H_{p}^{(j)}(\mathds{L}_{n})\) inductively as follows
\[H_{p}^{(0)}(\mathds{L}_{n}):=\mathds{L}_{n},H_{p}^{(1)}(\mathds{L}_{n}):=H_{p }(\mathds{L}_{n})\text{ and }H_{p}^{(j)}(\mathds{L}_{n}):=H_{p}\left(H_{p}^{(j-1)}(\mathds{L}_{n})\right)\]
for \(j\geq 2\). In this way, we obtain a tower of \(p\)-extensions of \(\mathds{L}_{n}\) whose union is equal to \(\mathcal{F}(\mathds{L}_{n})\). The field \(\mathcal{F}(\mathds{L}_{n})\) is infinite if and only if \(H_{p}^{(j)}(\mathds{L}_{n})\subsetneq H_{p}^{(j+1)}(\mathds{L}_{n})\) for all \(j\geq 1\).
These Hilbert class field towers, when viewed along the \(\mathbb{Z}_{p}\)-extension, are represented by the following diagram

\[\begin{array}{ccccccc}\vdots&&\vdots&&\vdots&&\\ \cup&&\cup&&\cup&&\\ \mathds{L}_{1}&\subseteq&H_{p}(\mathds{L}_{1})&\subseteq&H_{p}^{(2)}(\mathds{L}_{1})&\subseteq&\cdots\\ \cup&&\cup&&\cup&&\\ \mathds{L}_{0}&\subseteq&H_{p}(\mathds{L}_{0})&\subseteq&H_{p}^{(2)}(\mathds{L}_{0})&\subseteq&\cdots\end{array}\]
Iwasawa studied asymptotic growth patterns for the degrees of the field extensions \(H_{p}(\mathds{L}_{n})\) considered along the tower that is the second column (from the left margin) in the above diagram. In the spirit of Iwasawa theory, we consider natural growth questions
for the pro-\(p\) groups \(G_{n}\) as \(n\to\infty\). We define the _exponential growth number_\(\rho(G)\) of a pro-\(p\) group \(G\), which is a natural invariant associated with its Hilbert series. In this context, there is a natural analogy with the Hilbert series of an algebraic variety. Let \(\Omega(G)\) denote the mod-\(p\) Iwasawa algebra associated to \(G\), defined as the inverse limit \(\varprojlim_{U}\mathbb{F}_{p}[G/U]\), where \(U\) ranges over all finite index normal subgroups of \(G\). Let \(I_{G}\) be the augmentation ideal of \(\Omega(G)\), and for \(n\geq 0\), set \(c_{n}(G):=\dim_{\mathbb{F}_{p}}\left(I_{G}^{n}/I_{G}^{n+1}\right)\), where it is understood that \(c_{0}(G):=1\). Let
\[H(G;t)=\sum_{n\geq 0}c_{n}(G)t^{n}\]
denote the Hilbert series associated to \(G\), cf. Definition 2.1. If \(G\) is finite, then \(c_{n}(G)=0\) for large enough values of \(n\). When \(G\) is a \(p\)-adic analytic group of dimension \(d>0\), we find that \(c_{n}(G)\sim an^{d}\), for some positive constant \(a\). In practice, the Galois groups that arise from infinite \(p\) Hilbert class field towers are not \(p\)-adic analytic, and there are examples for which the numbers \(c_{n}(G)\) increase at an exponential rate. The radius of convergence of \(H(G;t)\) is given by \(R_{G}=\rho(G)^{-1}\), where \(\rho(G):=\limsup_{n\to\infty}|c_{n}(G)|^{\frac{1}{n}}\). The constant \(\rho(G)\) measures the exponential growth of subgroups in the Zassenhaus filtration of \(G\). In particular, if \(\rho(G)>1\), then \(G\) is not an analytic pro-\(p\) group. It is shown in various contexts that pro-\(p\) Hilbert class field towers are infinite via an application of the Golod-Shafarevich-Vinberg criterion (cf. [10, Theorem 1.2]). This strategy is developed and employed in the following works: [11, 12, 13, 14, 15]. The class of pro-\(p\) groups for which it can be shown that \(\rho(G)>1\) via the Golod-Shafarevich-Vinberg criterion have special properties, and are known as Golod-Shafarevich groups (cf. Definition 2.4). We refer to [16] for an introduction to the theory of Golod-Shafarevich groups.
Given a number field \(L\), its _root discriminant_ is defined to be \(D_{L}^{1/n}\), where \(D_{L}\) is its absolute discriminant, and \(n=[L:\mathbb{Q}]\). It is an old question as to whether there exists an infinite tower of number fields, unramified away from a finite set of primes, for which the root discriminant is bounded. Since the root discriminant is constant in unramified extensions, the above question is related to the constants of Martinet [13] and Odlyzko's bounds [17]. In the number field extensions \(L/\mathds{L}_{n}\) such that \(L\subset\mathcal{F}(\mathds{L}_{n})\), the root discriminant remains bounded. Hajir and Maire [15] construct unramified Golod-Shafarevich Galois groups and are thereby able to improve upon Martinet's constants. These root discriminant bounds are further refined by Hajir, Maire and Ramakrishna in [15], but via tamely ramified towers rather than unramified ones. One is thus interested in constructing infinite unramified Galois pro-\(p\) groups \(G\), such that \(\rho(G)\) is large. In this paper, we study the following question.
**Question 1.1**.: _Let \(\mathds{L}\) be a number field and \(p\) be a prime number. Given a \(\mathbb{Z}_{p}\)-extension \(\mathds{L}_{\infty}/\mathds{L}\), what can one say about the growth of \(\rho(G_{n})\), as \(n\to\infty\)?_
### Main results
We consider a variant of the above question, for certain natural \(\mathbb{Z}_{p}\)-extensions in which one of the primes above \(p\) is completely split. For \(k\geq 0\), let \(\mathcal{F}^{[k]}(\mathds{L}_{n})\) be the maximal pro-\(p\) extension of \(\mathds{L}_{n}\) for which
* all primes \(v\nmid p\) are unramified,
* all decomposition groups at primes \(v|p\) are abelian,
* for all primes \(v\mid p\), every element of the inertia group of \(v\) has order dividing \(p^{k}\).
Note that \(\mathcal{F}(\mathds{L}_{n})=\mathcal{F}^{[0]}(\mathds{L}_{n})\). For \(k\geq 1\), there is finite ramification in the extension \(\mathcal{F}^{[k]}(\mathds{L}_{n})/\mathds{L}_{n}\). As a consequence, there is an intermediate extension \(\mathds{L}_{n}\subseteq\mathds{L}_{n}^{\prime}\subseteq\mathcal{F}^{[k]}( \mathds{L}_{n})\), such that \(\mathds{L}_{n}^{\prime}/\mathds{L}_{n}\) is a finite extension, and \(\mathcal{F}^{[k]}(\mathds{L}_{n})/\mathds{L}_{n}^{\prime}\) is unramified (cf. [10, Proposition 1.5]). In particular, for all number fields contained in \(\mathcal{F}^{[k]}(\mathds{L}_{n})\), the root discriminant is bounded. We shall set \(\mathcal{G}^{[k]}(\mathds{L}_{n}):=\operatorname{Gal}(\mathcal{F}^{[k]}( \mathds{L}_{n})/\mathds{L}_{n})\), and \(\rho^{[k]}(\mathds{L}_{n}):=\rho\left(\mathcal{G}^{[k]}(\mathds{L}_{n})\right)\).
Let \(\mathds{L}\) be a number field extension of \(\mathbb{Q}\) and let \(p\) be an odd prime. Let \(r_{1}\) (resp. \(r_{2}\)) denote the number of real embeddings (resp. complex embeddings) of \(\mathds{L}\). Let \(\mathfrak{p}=\mathfrak{p}_{1},\mathfrak{p}_{2},\ldots,\mathfrak{p}_{g}\) be the primes of \(\mathds{L}\) that lie above \(p\), and assume that \(g\geq 2\). Set \(e_{i}:=e(\mathfrak{p}_{i}/p)\), \(f_{i}:=f(\mathfrak{p}_{i}/p)\), and note that \(d_{i}:=[\mathds{L}_{\mathfrak{p}_{i}}:\mathbb{Q}_{p}]=e_{i}f_{i}\). Let \(U_{i}\) be the group of local units in \(\mathcal{O}_{\mathds{L}_{\mathfrak{p}_{i}}}\) and \(U_{i}^{(1)}\) the group of principal local units in \(U_{i}\). We set \(\mathfrak{U}\) to denote the product \(\prod_{i=2}^{g}U_{i}^{(1)}\), and \(\bar{E}\) to denote the \(p\)-adic closure of the image of the group of global units \(\mathcal{O}_{\mathds{L}}^{\times}\) in \(\mathfrak{U}\). Setting \(\delta:=\operatorname{rank}_{\mathbb{Z}_{p}}(\bar{E})\), we note that \(\delta\leq\operatorname{rank}\mathcal{O}_{\mathds{L}}^{\times}=r_{1}+r_{2}-1\). We set
\[m:=\operatorname{rank}_{\mathbb{Z}_{p}}\left(\mathfrak{U}/\bar{E}\right)-1= [\mathds{L}:\mathbb{Q}]-d_{1}-\delta-1,\]
and assume that \(m\geq 1\). It follows from a standard application of global class field theory that there exists a \(\mathbb{Z}_{p}\)-extension of \(\mathds{L}\) in which \(\mathfrak{p}\) is completely split and all other primes \(\mathfrak{p}_{i}\) for \(i\geq 2\) are totally ramified. Note that when \(\mathds{L}\) is totally imaginary, the condition \(m\geq 1\) is automatically satisfied when \([\mathds{L}:\mathbb{Q}]\geq 2(d_{1}+1)\).
**Theorem 1.2**.: _With respect to notation above, assume that the following conditions are satisfied_
1. \(p\) _is odd and there are_ \(g>1\) _primes of_ \(\mathds{L}\) _that lie above_ \(p\)_,_
2. \(\mathds{L}\) _contains_ \(\mu_{p}\)_, the_ \(p\)_-th roots of unity,_
3. \([\mathds{L}:\mathbb{Q}]\geq 2\left(d_{1}+1\right)\)_,_
4. \(([\mathds{L}:\mathbb{Q}]+2)^{2}>8\left(\sum_{i=2}^{g}d_{i}^{2}\right)\)_._
_There exists a \(\mathbb{Z}_{p}\)-extension \(\mathds{L}_{\infty}/\mathds{L}\) in which \(\mathfrak{p}\) is totally split and all other primes above \(p\) are totally ramified. Furthermore, there exists a constant \(C>0\) (independent of \(n\) and \(k\)) and \(n_{0}\in\mathbb{Z}_{\geq 0}\), such that for all \(n\geq n_{0}\) and \(k\geq 1\), we have that_
\[\rho^{[k]}(\mathds{L}_{n})\geq Cp^{n}.\]
_Moreover, the constant \(C\) can be chosen to be any value such that_
\[0<C<\frac{2\left(\sum_{i=2}^{g}d_{i}^{2}\right)}{\left(2+[\mathds{L}:\mathbb{ Q}]\right)}.\]
Note that condition (2) implies that \(\mathds{L}\) is totally imaginary, and (3) implies that \(m\geq 1\). We prove Theorem 1.2 by adapting the strategy of Hajir, Maire and Ramakrishna [10]. The following result illustrates Theorem 1.2.
**Corollary 1.3**.: _Let \(p\geq 23\) and \(\ell\geq 11\) be distinct primes and set \(\mathds{L}:=\mathbb{Q}(\mu_{p\ell})\). Let \(\mathfrak{p}\) be any prime of \(\mathds{L}\) that lies above \(p\). Assume that \(\ell\) divides \((p-1)\). Then, there exists a \(\mathbb{Z}_{p}\)-extension \(\mathds{L}_{\infty}/\mathds{L}\) in which \(\mathfrak{p}\) is totally split and all other primes above \(p\) are totally
ramified. Furthermore, for any constant_
\[0<C<\frac{2(\ell-2)(p-1)^{2}}{2+(p-1)(\ell-1)},\]
_we have that_
\[\rho^{[k]}(\mathds{L}_{n})\geq Cp^{n}\]
_for all \(k\geq 1\), and all large enough values of \(n\)._
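The bound in Corollary 1.3 is straightforward to evaluate numerically. The following Python sketch (an illustration of ours, not part of the proofs) computes the admissible range of \(C\) from \(p\) and \(\ell\), using \([\mathds{L}:\mathbb{Q}]=\varphi(p\ell)=(p-1)(\ell-1)\) and the fact that, since \(\ell\mid p-1\), each of the \(\ell-1\) primes of \(\mathds{L}\) above \(p\) has local degree \(p-1\).

```python
def C_upper_bound(p, ell):
    """Upper bound on the admissible constant C in Corollary 1.3 for
    L = Q(mu_{p*ell}) with ell | p - 1."""
    assert p >= 23 and ell >= 11 and (p - 1) % ell == 0
    deg = (p - 1) * (ell - 1)             # [L : Q] = phi(p * ell)
    sum_d_sq = (ell - 2) * (p - 1) ** 2   # sum_{i >= 2} d_i^2
    return 2 * sum_d_sq / (2 + deg)

print(C_upper_bound(23, 11))  # smallest admissible pair: ~39.24
```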
We also define another notion, the _size_ \(m(G)\) of a Golod-Shafarevich group \(G\) (cf. Definition 2.6). We prove an analogous result for the growth of the numbers \(m^{[k]}(\mathds{L}_{n}):=m\left(\mathcal{G}^{[k]}(\mathds{L}_{n})\right)\), cf. Theorem 3.4.
### Organization
Including the introduction, the manuscript consists of three sections. In section 2, we introduce preliminary notions. In greater detail, we develop the Golod-Shafarevich theory of pro-\(p\) groups. We prove an explicit criterion (cf. Proposition 2.3) which gives an explicit lower bound for the exponential growth number \(\rho(G)\) for a Golod-Shafarevich group \(G\). In section 3, we apply the results from section 2 to prove Theorem 1.2, Corollary 1.3 and Theorem 3.4.
### Acknowledgment
The third author's research is supported by the CRM-Simons fellowship. The authors are very thankful to Katharina Muller and Ravi Ramakrishna for numerous helpful comments.
## 2. Pro-\(p\) groups and their Hilbert series
We review some preliminaries and set up some basic notation in this section. Throughout, \(p\) shall be an odd prime. Let \(G\) be a finitely generated pro-\(p\) group, set \(\Omega(G)\) to denote the completed group algebra of \(G\) over \(\mathbb{F}_{p}\). More precisely, \(\Omega(G)\) is defined to be the inverse limit
\[\Omega(G):=\varprojlim_{U}\mathbb{F}_{p}[G/U],\]
where \(U\) runs over all finite index normal subgroups of \(G\). We refer to \(\Omega(G)\) as the _mod-\(p\) Iwasawa algebra_ of \(G\). Many properties of the group \(G\) are captured by algebraic properties of \(\Omega(G)\). We consider the _Hilbert series_ associated with \(\Omega(G)\). Given a normal subgroup \(U\) of \(G\), there is a natural map \(\iota_{U}:G\to\mathbb{F}_{p}[G/U]\), which sends \(g\in G\) to the group-like element \(\bar{g}\cdot 1\). Here, \(\bar{g}\) is the congruence class of \(g\) modulo \(U\). The map \(\iota:G\to\Omega(G)\) is the inverse limit \(\iota:=\varprojlim_{U}\iota_{U}\). We shall, by abuse of notation, simply let \(g\in\Omega(G)\) denote the group-like element \(\iota(g)\). The augmentation map \(\alpha:\Omega(G)\to\mathbb{F}_{p}\) maps each group-like element to \(1\). Set \(I_{G}\) to denote the augmentation ideal of \(\Omega(G)\), i.e., the kernel of the augmentation map. For \(n\geq 1\), \(\Omega(G)/I_{G}^{n}\) is a finite dimensional \(\mathbb{F}_{p}\)-vector space.
**Definition 2.1**.: _For \(n\geq 1\), let \(c_{n}(G):=\dim_{\mathbb{F}_{p}}\left(I_{G}^{n}/I_{G}^{n+1}\right)\), and set \(c_{0}(G):=1\). The Hilbert series \(H(G;t)\) is a formal power series defined by \(H(G;t):=\sum_{n\geq 0}c_{n}(G)t^{n}\). Setting \(\rho(G):=\limsup_{n\to\infty}|c_{n}(G)|^{\frac{1}{n}}\), we note that the radius of convergence of \(H(G;t)\) is given by \(R_{G}:=\rho(G)^{-1}\)._
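For example, if \(F\) is a free pro-\(p\) group on \(d\geq 1\) generators, then \(\Omega(F)\) is the algebra of non-commutative formal power series in \(d\) variables (cf. the discussion of Lazard's theorem below), and \(I_{F}^{n}/I_{F}^{n+1}\) is spanned by the \(d^{n}\) monomials of degree \(n\). Hence \(c_{n}(F)=d^{n}\), \(H(F;t)=\sum_{n\geq 0}d^{n}t^{n}=(1-dt)^{-1}\), and \(\rho(F)=d\).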
Given \(x\in G\) such that \(x\neq 1\), the _depth_ of \(x\) is defined as follows
\[\omega_{G}(x):=\max\{n\mid x-1\in I_{G}^{n}\}.\]
We set \(\omega_{G}(1):=\infty\).
**Definition 2.2**.: _The Zassenhaus filtration is defined as follows_
\[G_{n}:=\{g\in G\mid\omega_{G}(g)\geq n\}.\]
The sequence \(G_{n}\) is a sequence of open normal subgroups of \(G\). The quotient \(G_{n}/G_{n+1}\) is a finite dimensional \(\mathbb{F}_{p}\) vector space, set \(a_{n}(G):=\dim_{\mathbb{F}_{p}}\left(G_{n}/G_{n+1}\right)\). The quantity \(\rho(G)\) measures the order of exponential growth of the quotients \(\left(I_{G}^{n}/I_{G}^{n+1}\right)\) as \(n\to\infty\). These numbers are related to the growth of groups in the Zassenhaus filtration. There is an explicit relationship between the numbers \(\{a_{n}(G)\}\) and \(\{c_{n}(G)\}\), established by Minac, Rogelstad and Tan [11].
Assume that \(G\) is a finitely presented pro-\(p\) group, and let
\[d(G) :=\dim_{\mathbb{F}_{p}}H^{1}(G,\mathbb{F}_{p}),\] \[r(G) :=\dim_{\mathbb{F}_{p}}H^{2}(G,\mathbb{F}_{p}).\]
Consider a minimal presentation
\[1\to R\to F\xrightarrow{\varphi}G\to 1 \tag{2.1}\]
of \(G\). Here, \(F=\langle\sigma_{1},\ldots,\sigma_{d}\rangle\) is a free pro-\(p\) group generated by \(d=d(G)\) elements. The subgroup \(R\) of \(F\) is the normal subgroup \(\langle\rho_{1},\ldots,\rho_{r}\rangle^{\mathrm{Norm}}\) generated by \(r=r(G)\) elements. Given a choice of minimal presentation of \(G\), we define the associated Golod-Shafarevich polynomials. Let \(\Omega(F)\) be the mod-\(p\) Iwasawa algebra associated to \(F\), and \(I_{F}\) be the augmentation ideal of \(\Omega(F)\). By a theorem of Lazard [10], the algebra \(\Omega(F)\) is isomorphic to the algebra of power series \(\mathbb{F}_{p}\langle\langle u_{1},\ldots,u_{d}\rangle\rangle\). Here \(u_{i}\) is identified with the element \((\sigma_{i}-1)\). The augmentation ideal \(I_{F}\) is generated by \(u_{1},\ldots,u_{d}\). Let \(\omega_{F}\) be the depth function on \(F\), defined by setting
\[\omega_{F}(x):=\max\{n\mid x-1\in I_{F}^{n}\},\]
for \(x\neq 1\), and \(\omega_{F}(1):=\infty\). The map \(\varphi\) induces a surjection \(\Omega(F)\to\Omega(G)\), whose kernel we denote by \(J\). Identify \(\Omega(G)\) with the quotient \(\Omega(F)/J\) and \(I\) with \(I_{F}/J\). The depth filtration \(\omega_{G}\) is related to \(\omega_{F}\) as follows
\[\omega_{G}(x)=\max\{\omega(y)\mid\varphi(y)=g\},\]
cf. [10, Appendice 3, Theorem 3.5].
**Proposition 2.1**.: _The depth function \(\omega_{F}:F\to\mathbb{Z}_{\geq 1}\cup\{\infty\}\) and the associated Zassenhaus filtration \(\{F_{n}\}_{n}\) satisfy the following properties_

1. _for_ \(x\in F\)_,_ \(\omega_{F}(x^{p})=p\,\omega_{F}(x)\)_. Therefore, if_ \(x\in F_{n}\)_, then_ \(x^{p}\in F_{np}\)_._
2. _For_ \(x\in F_{n}\) _and_ \(y\in F_{m}\)_, one has_ \([x,y]\in F_{n+m}\)_._
3. _For_ \(x,y\in F\)_, we have that_ \(\omega_{F}(xy)\geq\min(\omega_{F}(x),\omega_{F}(y))\)_._
Proof.: The above-mentioned result is [11, Proposition 1.2]; see also [14, Section 7.4].
**Definition 2.3**.: _The Golod-Shafarevich polynomial associated with the minimal presentation (2.1) is defined as follows_
\[P(G;t):=1-dt+\sum_{i=1}^{r}t^{\omega_{F}(\rho_{i})}.\]
Note that since the presentation is minimal, the depth satisfies \(\omega_{F}(\rho_{i})\geq 2\); cf. [10, p.3, l. -9].
**Theorem 2.2** (Vinberg's criterion).: _Let \(G\) be a pro-\(p\) group and let \(H(G;t)\) be the Hilbert series associated to \(G\). Choose a minimal presentation (2.1) of \(G\) and let \(P(G;t)\) be the associated Golod-Shafarevich polynomial. Recall that \(R_{G}\) is the radius of convergence of \(H(G;t)\), and set \(R_{G}^{\prime}:=\min\{1,R_{G}\}\). Then, for any value \(t\in(0,R_{G}^{\prime})\), the following inequality is satisfied_
\[H(G;t)P(G;t)\geq 1.\]
Proof.: This result is well known. We refer to [14, Theorem 2.1] and [15] for a proof of the above statement.
The next result shows that if \(P(G;t)\) takes a negative value at some \(t_{0}\) in the interval \((0,1)\), then \(\rho(G)>1\). Furthermore, it gives us a lower bound for \(\rho(G)\), and can therefore be viewed as a refinement of the Golod-Shafarevich-Vinberg criterion (cf. [10, Theorem 1.2]).
**Proposition 2.3**.: _Suppose that for some \(t_{0}\in(0,1)\), the value \(P(G;t_{0})\) is negative. Then, the order of growth satisfies the lower bound \(\rho(G)\geq\frac{1}{t_{0}}\)._
Proof.: Let \(R_{G}\) be the radius of convergence of the Hilbert series \(H(G;t)\) and set \(R_{G}^{\prime}:=\min\{1,R_{G}\}\). By definition, the coefficients of \(H(G;t)\) are all positive. Then, for \(t\in(0,R_{G}^{\prime})\), the series \(H(G;t)\) converges absolutely to a positive value. By Vinberg's criterion, \(P(G;t)H(G;t)\geq 1\) for all \(t\in(0,R_{G}^{\prime})\). If \(t_{0}\) were contained in \((0,R_{G}^{\prime})\), then \(H(G;t_{0})>0\), and the inequality \(P(G;t_{0})H(G;t_{0})\geq 1\) would force \(P(G;t_{0})>0\), a contradiction. In other words, \(t_{0}\) lies outside the domain of convergence. Therefore, \(t_{0}\geq R_{G}^{\prime}\). Since \(t_{0}<1\), it follows that \(R_{G}=R_{G}^{\prime}<1\), and therefore \(t_{0}\geq R_{G}=\rho(G)^{-1}\). We conclude that \(\rho(G)\geq\frac{1}{t_{0}}\).
**Definition 2.4**.: _A pro-\(p\) group \(G\) is said to be a Golod-Shafarevich group if for some minimal presentation_
\[1\to R\to F\xrightarrow{\varphi}G\to 1,\]
_there is a value \(t_{0}\in(0,1)\) so that \(P(G;t_{0})<0\). Proposition 2.3 shows that if \(G\) is a Golod-Shafarevich group with \(P(G;t_{0})<0\), then the order of exponential growth satisfies \(\rho(G)\geq t_{0}^{-1}>1\). In particular \(G\) is infinite for which the numbers \(c_{n}(G)\) grow at an exponential rate as \(n\to\infty\)._
At this point, we introduce a new definition which measures the _size_ of a Golod-Shafarevich group. Let \(G\) be a Golod-Shafarevich group and
\[1\to R\to F\xrightarrow{\varphi}G\to 1\]
be a minimal presentation of \(G\). We consider quotients \(G^{\prime}\) of \(G\) such that \(d(G^{\prime})=d(G)\). Suppose that \(G^{\prime}=G/\langle x_{1},\ldots,x_{m}\rangle^{\mathrm{Norm}}\). Then, following [10, p.4, ll. 10-17], we get a new minimal presentation as follows. Lift each \(x_{i}\) to \(y_{i}\in F\), and let
\(R^{\prime}=\langle\rho_{1},\ldots,\rho_{r},y_{1},\ldots,y_{m}\rangle^{\mathrm{Norm}}\) be the normal subgroup of \(F\) generated by \(R\) and the elements \(y_{1},\ldots,y_{m}\). Since it is assumed that \(d(G^{\prime})=d(G)\), this gives a minimal presentation of \(G^{\prime}\). In particular, note that \(\omega_{F}(y_{i})\geq 2\) for all \(i=1,\ldots,m\). Following _loc. cit._, we say that we have _cut_ the group \(G\) by the elements \(y_{1},\ldots,y_{m}\). Let \(\varphi^{\prime}:F\to G^{\prime}\) be the composite of \(\varphi\) with the quotient map \(G\to G^{\prime}\). With respect to the new presentation
\[1\to R^{\prime}\to F\xrightarrow{\varphi^{\prime}}G^{\prime}\to 1,\]
we find that
\[P(G^{\prime};t)\leq 1-d(G)t+r(G)t^{2}+\sum_{i=1}^{m}t^{\omega_{F}(y_{i})}.\]
**Definition 2.5**.: _Let \(G\) be a Golod-Shafarevich group, and choose a minimal presentation for \(G\) such that \(P(G;t)\) attains a negative value on \((0,1)\). We say that a set \(\{y_{1},\ldots,y_{m}\}\subset F\) is an admissible cutting datum if \(\omega_{F}(y_{i})\geq 2\) for all \(i\)._
Given an admissible cutting datum \(\{y_{1},\ldots,y_{m}\}\), set \(x_{i}:=\varphi(y_{i})\) and
\[G^{\prime}:=G/\langle x_{1},\ldots,x_{m}\rangle^{\mathrm{Norm}}.\]
**Definition 2.6**.: _Let \(G\) be a Golod-Shafarevich group, and choose a minimal presentation_
\[\mathfrak{M}:1\to R\to F\to G\to 1\]
_for \(G\) such that \(P(G;t)\) attains a negative value on \((0,1)\). We define \(m_{\mathfrak{M}}(G)\) to be the smallest value \(m\in\mathbb{Z}_{\geq 0}\) such that there exists an admissible cutting datum \(\{y_{1},\ldots,y_{m+1}\}\), such that \(P(G^{\prime};t)\geq 0\) for all \(t\in(0,1)\). If no such \(m\) exists, we set \(m_{\mathfrak{M}}(G):=\infty\). We set_
\[m(G):=\min\{m_{\mathfrak{M}}(G)\mid\mathfrak{M}\},\]
_where the minimum is taken over all minimal presentations \(\mathfrak{M}\) of \(G\)._
Thus, for any admissible cutting datum \(\{y_{1},\ldots,y_{k}\}\) with \(k\leq m(G)\), the associated quotient \(G^{\prime}\) is a Golod-Shafarevich group, and thus in particular is infinite. Like \(\rho(G)\), the number \(m(G)\) gives a measure of the size of a Golod-Shafarevich group \(G\).
## 3. Growth asymptotics for split prime \(\mathbb{Z}_{p}\)-extensions
Let \(\mathds{L}\) be a number field which satisfies the conditions of Theorem 1.2. In this section, \(S_{p}\) (resp. \(S_{\infty}\)) denotes the set of primes of \(\mathds{L}\) that lie above \(p\) (resp. \(\infty\)). Set \(S:=S_{p}\cup S_{\infty}\). Let \(L/\mathds{L}\) be a Galois number field extension and let \(S(L)\) (resp. \(S_{p}(L)\)) be the set of places of \(L\) which lie above \(S\) (resp. \(S_{p}\)). We shall denote by \(L_{S}\) the maximal pro-\(p\) algebraic extension of \(L\) in which the primes \(w\notin S(L)\) are unramified. We denote by \(\mathrm{G}_{S}(L)\) the Galois group \(\mathrm{Gal}(L_{S}/L)\). Set \(H_{S}(L)\) to be the maximal abelian unramified extension of \(L\) in which the places \(v\in S(L)\) are split. The \(S\)-class group of \(L\) is the Galois group \(\mathrm{Cl}_{S}(L):=\mathrm{Gal}(H_{S}(L)/L)\), and is identified with a quotient of the class group \(\mathrm{Cl}(L)\). For \(v\in S_{p}\), let \((e_{v}(L/\mathds{L}),f_{v}(L/\mathds{L}),g_{v}(L/\mathds{L}))\) denote the ramification invariants of \(v\) in \(L\). In other words, \(v\) splits into \(g_{v}(L/\mathds{L})\) primes in \(\mathcal{O}_{L}\), each with ramification index \(e_{v}(L/\mathds{L})\). The quantity \(f_{v}(L/\mathds{L}):=[\mathcal{O}_{L}/w:\mathcal{O}_{\mathds{L}}/v]\) is independent of the choice of prime \(w|v\) of \(L\). We note that \(e_{v}(L/\mathds{L})f_{v}(L/\mathds{L})g_{v}(L/\mathds{L})=[L:\mathds{L}]\).
**Theorem 3.1**.: _Let \(L/\mathds{L}\) be a finite Galois extension. With respect to the above notation, the following relations hold_
\[\dim_{\mathbb{F}_{p}}H^{1}(\mathrm{G}_{S}(L),\mathbb{F}_{p}) =\sum_{i=1}^{g}g_{\mathfrak{p}_{i}}(L/\mathds{L})+\frac{[L:\mathbb{ Q}]}{2}+\dim\left(\mathrm{Cl}_{S}(L)\otimes\mathbb{F}_{p}\right),\] \[\dim_{\mathbb{F}_{p}}H^{2}(\mathrm{G}_{S}(L),\mathbb{F}_{p}) =\sum_{i=1}^{g}g_{\mathfrak{p}_{i}}(L/\mathds{L})-1+\dim\left( \mathrm{Cl}_{S}(L)\otimes\mathbb{F}_{p}\right).\]
Proof.: The above is a special case of [13, Theorem 10.7.3].
We shall set \(g_{i}(n)\), \(e_{i}(n)\) and \(f_{i}(n)\) denote \(g_{i}(\mathds{L}_{n}/\mathds{L})\), \(e_{i}(\mathds{L}_{n}/\mathds{L})\) and \(f_{i}(\mathds{L}_{n}/\mathds{L})\) respectively. Recall that \(\mathfrak{p}_{1}\) is totally split in \(\mathds{L}_{\infty}\) and that \(\mathfrak{p}_{i}\) is totally ramified in \(\mathds{L}_{\infty}\) for \(i\geq 2\). We find that
\[g_{1}(n) =p^{n},e_{1}(n)=f_{1}(n)=1,\] \[g_{i}(n) =1,e_{i}(n)=p^{n},f_{i}(n)=1\text{ for }i\geq 2. \tag{3.1}\]
We set
\[d(n):=\dim_{\mathbb{F}_{p}}H^{1}(\mathrm{G}_{S}(\mathds{L}_{n}),\mathbb{F}_{p })\text{ and }r(n):=\dim_{\mathbb{F}_{p}}H^{2}(\mathrm{G}_{S}(\mathds{L}_{n}), \mathbb{F}_{p}).\]
Before stating the next result, we clarify some standard conventions with regard to our notation. Let \(f,g,h:\mathbb{Z}_{\geq 0}\to\mathbb{R}_{\geq 0}\) be non-negative functions. We write \(f(n)=g(n)+O(h(n))\) to mean that there is a positive constant \(C>0\), independent of \(n\), such that for all \(n\), \(|f(n)-g(n)|\leq Ch(n)\). We write \(f(n)\leq g(n)+O(h(n))\) (resp. \(f(n)\geq g(n)+O(h(n))\)) to mean that \(f(n)\leq g(n)+Ch(n)\) (resp. \(f(n)\geq g(n)-Ch(n)\)) for some absolute constant \(C>0\).
**Corollary 3.2**.: _The following relations hold_
\[d(n) \geq p^{n}\left(1+\frac{[\mathds{L}:\mathbb{Q}]}{2}\right)\] \[d(n) =O(p^{n}),r(n)=O(p^{n}).\]
Proof.: It follows from Iwasawa's theorem that \(\dim_{\mathbb{F}_{p}}\left(\mathrm{Cl}(\mathds{L}_{n})\otimes\mathbb{F}_{p}\right)=O(p^{n})\). The result follows therefore as an immediate consequence of Theorem 3.1 and (3.1).
For \(n\geq 0\) and \(v\in S_{p}(\mathds{L}_{n})\), set \(n_{v}:=\dim_{\mathbb{F}_{p}}\left(\mathds{L}_{n,v}^{\times}\otimes\mathbb{F}_ {p}\right)\). Note that the pro-\(p\) completion of the decomposition group at \(v\) is generated by \(n_{v}\) elements. For ease of notation, we set \(d_{i}:=[\mathds{L}_{\mathfrak{p}_{i}}:\mathbb{Q}_{p}]\). We note that since \(\mu_{p}\) is contained in \(\mathds{L}\),
\[n_{v}=\dim_{\mathbb{F}_{p}}\left(\mathds{L}_{n,v}^{\times}\otimes \mathbb{F}_{p}\right)= [\mathds{L}_{n,v}:\mathbb{Q}_{p}]+2,\] \[= \left(e_{i}(n)f_{i}(n)[\mathds{L}_{\mathfrak{p}_{i}}:\mathbb{Q}_ {p}]+2\right)\text{ if }v|\mathfrak{p}_{i},\] \[= \begin{cases}(d_{1}+2)\text{ if }v|\mathfrak{p}_{1},\\ (p^{n}d_{i}+2)\text{ if }v|\mathfrak{p}_{i}\text{ for }i\geq 2.\end{cases} \tag{3.2}\]
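To illustrate the bookkeeping in (3.1)-(3.2) and Corollary 3.2, the following Python sketch (an illustration of ours; the sample parameters mirror the setting of Corollary 1.3) tabulates the local generator counts \(n_{v}\) and the lower bound on \(d(n)\) at a given level \(n\).

```python
def tower_data(n, p, degs):
    """Splitting data (3.1), generator counts n_v (3.2), and the lower
    bound on d(n) from Corollary 3.2.  `degs` = [d_1, ..., d_g] lists the
    local degrees [L_{p_i} : Q_p]; p_1 is totally split in L_infty, and
    the remaining p_i are totally ramified."""
    deg_L = sum(degs)                   # [L : Q]
    q = p ** n                          # [L_n : L]
    # q primes of L_n above p_1, each with n_v = d_1 + 2:
    n_v = [degs[0] + 2] * q
    # one prime above each p_i (i >= 2), with n_v = q * d_i + 2:
    n_v += [q * d + 2 for d in degs[1:]]
    d_n_lower = q * (1 + deg_L / 2)     # Corollary 3.2
    return n_v, d_n_lower

# Sample parameters mirroring Corollary 1.3 with p = 23, ell = 11:
# ell - 1 = 10 primes above 23, each of local degree p - 1 = 22.
n_v, d_lower = tower_data(n=1, p=23, degs=[22] * 10)
print(len(n_v), sum(n_v), d_lower)
```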
Let \(\widetilde{\mathcal{L}}_{n}\) be the maximal pro-\(p\) extension of \(\mathds{L}_{n}\) unramified at all primes \(v\notin S(\mathds{L}_{n})\). Denote by \(\mathcal{L}_{n}\) the maximal pro-\(p\) extension of \(\mathds{L}_{n}\) unramified at all primes \(v\notin S(\mathds{L}_{n})\) and such that for all primes \(v\in S_{p}(\mathds{L}_{n})\), the decomposition group is abelian. For \(k\geq 0\), let
\(\mathcal{F}^{[k]}(\mathds{L}_{n})\) be the maximal subfield of \(\mathcal{L}_{n}\) in which all the elements in the inertia groups at primes \(v\in S_{p}(\mathds{L}_{n})\) have order dividing \(p^{k}\). It is thus understood that when \(k=0\), \(\mathcal{F}(\mathds{L}_{n}):=\mathcal{F}^{[0]}(\mathds{L}_{n})\) is the maximal unramified pro-\(p\) extension of \(\mathds{L}_{n}\). We have the following containments
\[\mathds{L}_{n}\subseteq\mathcal{F}(\mathds{L}_{n})\subseteq\mathcal{F}^{[1] }(\mathds{L}_{n})\subseteq\cdots\subseteq\mathcal{F}^{[k]}(\mathds{L}_{n}) \subseteq\mathcal{F}^{[k+1]}(\mathds{L}_{n})\subseteq\cdots\subseteq\mathcal{ L}_{n}\subseteq\widetilde{\mathcal{L}}_{n}.\]
Set \(\widetilde{\mathcal{G}}_{n}\), \(\mathcal{G}_{n}\) and \(\mathcal{G}_{n}^{[k]}=\mathcal{G}^{[k]}(\mathds{L}_{n})\) to denote the Galois groups \(\mathrm{Gal}(\widetilde{\mathcal{L}}_{n}/\mathds{L}_{n})\), \(\mathrm{Gal}(\mathcal{L}_{n}/\mathds{L}_{n})\) and \(\mathrm{Gal}(\mathcal{F}^{[k]}(\mathds{L}_{n})/\mathds{L}_{n})\) respectively. We begin with a minimal presentation
\[1\to R\to F\xrightarrow{\varphi}\widetilde{\mathcal{G}}_{n}\to 1 \tag{3.3}\]
of \(\widetilde{\mathcal{G}}_{n}\). The group \(\mathcal{G}_{n}\) is obtained from \(\widetilde{\mathcal{G}}_{n}\) upon cutting by the commutators of the generators of all decomposition groups for primes \(v\in S_{p}(\mathds{L}_{n})\). In greater detail, for each prime \(v\in S_{p}(\mathds{L}_{n})\), let \(z_{1}^{(v)},\ldots,z_{n_{v}}^{(v)}\) be a set of elements in \(F\) mapping to a set of generators of the decomposition group at \(v\). For \(1\leq i<j\leq n_{v}\), we let \(z_{i,j}^{(v)}\in F\) denote the commutator \([z_{i}^{(v)},z_{j}^{(v)}]\). It follows from Proposition 2.1 that
\[\omega(z_{i,j}^{(v)})\geq\omega(z_{i}^{(v)})+\omega(z_{j}^{(v)})\geq 2. \tag{3.4}\]
We represent \(\mathcal{G}_{n}\) as the quotient
\[\mathcal{G}_{n}=\frac{\widetilde{\mathcal{G}}_{n}}{\langle\varphi\left(z_{i, j}^{(v)}\right)\mid 1\leq i<j\leq n_{v},v\in S_{p}(\mathds{L}_{n}) \rangle^{\mathrm{Norm}}},\]
and we obtain a new presentation
\[1\to R^{\prime}\to F\to\mathcal{G}_{n}\to 1, \tag{3.5}\]
where,
\[R^{\prime}=R\langle z_{i,j}^{(v)}\mid 1\leq i<j\leq n_{v},v\in S_{p}(\mathds{L} _{n})\rangle^{\mathrm{Norm}}.\]
Going modulo the \(p^{k}\)-th powers of all generators of decomposition groups, we obtain a presentation
\[1\to R^{\prime\prime}\to F\to\mathcal{G}_{n}^{[k]}\to 1, \tag{3.6}\]
where,
\[R^{\prime\prime}=R^{\prime}\langle\left(z_{i}^{(v)}\right)^{p^{k}}\mid 1\leq i \leq n_{v},v\in S_{p}(\mathds{L}_{n})\rangle^{\mathrm{Norm}}.\]
Set \(\widetilde{\mathcal{P}}_{n}(t):=\mathcal{P}(\widetilde{\mathcal{G}}_{n};t)\), \(\mathcal{P}_{n}(t):=\mathcal{P}(\mathcal{G}_{n};t)\), and \(\mathcal{P}_{n}^{[k]}(t):=\mathcal{P}(\mathcal{G}_{n}^{[k]};t)\) to be the Golod-Shafarevich polynomials associated with the minimal presentations (3.3), (3.5) and (3.6) respectively.
**Proposition 3.3**.: _For \(t\in(0,1)\), \(\mathcal{P}_{n}(t)\leq 1-D_{n}t+R_{n}t^{2}\), where \(D_{n},R_{n}\) are constants satisfying_
\[D_{n}\geq p^{n}\left(1+\frac{[\mathds{L}:\mathbb{Q}]}{2}\right),\] \[D_{n}= O(p^{n}),\] \[R_{n}= p^{2n}\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right)+O(p^{n}).\]
Proof.: The group \(\mathcal{G}_{n}\) is obtained from \(\widetilde{\mathcal{G}}_{n}\) by quotienting by the commutators of each of the decomposition groups at primes \(v\in S_{p}(\mathds{L}_{n})\). At each prime \(v\), we cut by \(\binom{n_{v}}{2}\) elements \(\{z_{i,j}^{(v)}\ |\ 1\leq i<j\leq n_{v},v\in S_{p}(\mathds{L}_{n})\}\), where we recall that \(n_{v}\) is given by (3.2). Furthermore, it follows from (3.4) that \(\omega(z_{i,j}^{(v)})\geq 2\). Therefore, we find that for \(t\in(0,1)\),
\[\mathcal{P}_{n}(t) \leq\widetilde{\mathcal{P}}_{n}(t)+\left(\sum_{v\in S_{p}( \mathds{L}_{n})}\binom{n_{v}}{2}\right)t^{2}\] \[=\widetilde{\mathcal{P}}_{n}(t)+\left(p^{n}\binom{d_{1}+2}{2}+ \sum_{i=2}^{g}\binom{p^{n}d_{i}+2}{2}\right)t^{2}\] \[\leq 1-d(n)t+\left(r(n)+p^{n}\binom{d_{1}+2}{2}+\sum_{i=2}^{g} \binom{p^{n}d_{i}+2}{2}\right)t^{2}\] \[\leq 1-D_{n}t+R_{n}t^{2},\]
where
\[D_{n} :=d(n)\geq p^{n}\left(1+\frac{[\mathds{L}:\mathbb{Q}]}{2}\right),\] \[R_{n} :=r(n)+p^{n}\binom{d_{1}+2}{2}+\sum_{i=2}^{g}\binom{p^{n}d_{i}+2}{ 2},\] \[=p^{2n}\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right)+O(p^{n}).\]
Here, we have invoked the bounds for \(d(n)\) and \(r(n)\) from Corollary 3.2.
Proof of Theorem 1.2.: It follows from Proposition 3.3 that
\[\mathcal{P}_{n}(t)\leq 1-D_{n}t+R_{n}t^{2},\]
where
\[D_{n}\geq p^{n}\left(1+\frac{[\mathds{L}:\mathbb{Q}]}{2}\right)\] \[D_{n}= O(p^{n}),\] \[R_{n}= p^{2n}\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right)+O(p^{n}). \tag{3.7}\]
For \(k\geq 1\), the group \(\mathcal{G}_{n}^{[k]}\) is obtained from \(\mathcal{G}_{n}\) by quotienting by the \(p^{k}\)-th powers of the generators of the decomposition groups at the primes \(v\in S_{p}(\mathds{L}_{n})\). The total number of
elements we quotient by is
\[\begin{split} R_{n}^{\prime}&:=\sum_{v\in S_{p}(\mathbb{ L}_{n})}n_{v},\\ &=p^{n}(d_{1}+2)+\sum_{i=2}^{g}(p^{n}d_{i}+2),\\ &=p^{n}(\sum_{i=1}^{g}d_{i}+2)+O(1).\end{split} \tag{3.8}\]
Let \(y_{1},\ldots,y_{R_{n}^{\prime}}\) be the elements that generate the decomposition groups at the primes \(v\in S_{p}(\mathbb{L}_{n})\). By Proposition 2.1, we have that
\[\omega_{F}(y_{i}^{p^{k}})=p^{k}\omega_{F}(y_{i})\geq p^{k}.\]
Therefore, for \(t\in(0,1)\), we find that
\[\mathcal{P}_{n}^{[k]}(t)\leq\mathcal{P}_{n}(t)+R_{n}^{\prime}t^{p^{k}}\leq \mathcal{P}_{n}(t)+R_{n}^{\prime}t^{p}.\]
Consider the polynomial \(Q(t):=1-D_{n}t+R_{n}t^{2}+R_{n}^{\prime}t^{p}\). Setting \(t_{n}:=\frac{D_{n}}{2R_{n}}\), note that the quadratic part \(1-D_{n}t+R_{n}t^{2}\) of \(Q(t)\) attains its minimum value at \(t_{n}\). We find that
\[Q(t_{n})=1-\frac{D_{n}^{2}}{4R_{n}}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n}^ {p}}.\]
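Indeed, substituting \(t_{n}=\frac{D_{n}}{2R_{n}}\) term by term (an intermediate computation we spell out for readability):

\[Q(t_{n})=1-\frac{D_{n}^{2}}{2R_{n}}+\frac{D_{n}^{2}}{4R_{n}}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n}^{p}}=1-\frac{D_{n}^{2}}{4R_{n}}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n}^{p}}.\]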
We require that \(Q(t_{n})<0\), i.e.,
\[D_{n}^{2}>4R_{n}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p-2}R_{n}^{p-1}},\]
i.e.,
\[2^{p-2}R_{n}^{p-1}D_{n}^{2}>2^{p}R_{n}^{p}+R_{n}^{\prime}D_{n}^{p}. \tag{3.9}\]
It follows from the bounds (3.7) that as \(n\to\infty\),
\[\begin{split} D_{n}^{2}&\geq Ap^{2n};\\ R_{n}&=Bp^{2n}+O(p^{n});\end{split} \tag{3.10}\]
where
\[\begin{split} A&=\left(1+\frac{[\mathbb{L}:\mathbb{ Q}]}{2}\right)^{2}\\ B&=\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right). \end{split}\]
Note that by (3.8), \(R_{n}^{\prime}=O(p^{n})\). We now estimate both sides of the inequality (3.9), obtaining the following asymptotic estimates
\[\begin{split}& 2^{p-2}R_{n}^{p-1}D_{n}^{2}\geq 2^{p-2}B^{p-1}Ap^{ 2pn}+O(p^{(2p-1)n});\\ & R_{n}^{\prime}D_{n}^{p}=O(p^{(p+1)n});\\ & 2^{p}R_{n}^{p}=2^{p}B^{p}p^{2np}+O(p^{(2p-1)n}).\end{split} \tag{3.11}\]
Since \(2p-1\geq p+1\), it follows that \(R_{n}^{\prime}D_{n}^{p}=O(p^{(2p-1)n})\). By condition (4) of Theorem 1.2,
\[\frac{2^{p-2}B^{p-1}A}{2^{p}B^{p}}=\frac{A}{4B}=\frac{\left(2+[\mathds{L}: \mathbb{Q}]\right)^{2}}{8\left(\sum_{i=2}^{g}d_{i}^{2}\right)}>1.\]
Therefore, for large enough values of \(n\), we have that \(Q(t_{n})<0\). Since \(\mathcal{P}_{n}^{[k]}(t)\leq Q(t)\), it follows that \(\mathcal{P}_{n}^{[k]}(t_{n})<0\) for all large enough values of \(n\), and all values of \(k\geq 1\).
With respect to notation above,
\[t_{n}^{-1}=\frac{2R_{n}}{D_{n}}.\]
By the estimates in the statement of Proposition 3.3,
\[D_{n}= p^{n}\left(1+\frac{[\mathds{L}:\mathbb{Q}]}{2}\right)+O(1)\] \[R_{n}= p^{2n}\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right)+O(p^{n}).\]
Let \(C>0\) be a constant for which
\[C<\frac{2\left(\sum_{i=2}^{g}d_{i}^{2}\right)}{\left(2+[\mathds{L}:\mathbb{Q}] \right)}.\]
Then, for all large enough values of \(n\), we find that \(t_{n}^{-1}>Cp^{n}\). Since for large enough values of \(n\), \(\mathcal{P}_{n}^{[k]}(t_{n})<0\) and \(t_{n}\in(0,1)\), it follows from Proposition 2.3 that
\[\rho^{[k]}(\mathds{L}_{n})\geq t_{n}^{-1},\]
and this proves the result.
Proof of Corollary 1.3.: We show that the conditions of Theorem 1.2 are satisfied.
1. Since it is assumed that \(\ell\) divides \((p-1)\), it follows that \(p\) splits completely in \(\mathbb{Q}(\mu_{\ell})\), hence, \(g=(\ell-1)>1\).
2. Clearly, \(\mathds{L}:=\mathbb{Q}(\mu_{p\ell})\) contains \(\mu_{p}\).
3. Note that \(d_{i}=(p-1)\) for all \(i\). Since \(\ell\geq 5\), we find that \[[\mathds{L}:\mathbb{Q}]=(\ell-1)(p-1)\geq 2(d_{1}+1)=2p.\]
4. We find that \(([\mathds{L}:\mathbb{Q}]+2)^{2}=((\ell-1)(p-1)+2)^{2}>(\ell-1)^{2}(p-1)^{2}\) and that \(8(\sum_{i=2}^{g}d_{i}^{2})=8(\ell-2)(p-1)^{2}\). Clearly, \(\ell\geq 11\) implies that \((\ell-1)^{2}>8(\ell-2)\), and therefore, \[([\mathds{L}:\mathbb{Q}]+2)^{2}>8(\sum_{i=2}^{g}d_{i}^{2}).\]
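As a concrete sanity check (our own illustrative example, not part of the original statement), take \(\ell=11\) and \(p=23\), so that \(\ell\mid(p-1)\) and \(g=\ell-1=10\). Then
\[[\mathds{L}:\mathbb{Q}]=(\ell-1)(p-1)=220\geq 2p=46,\qquad\left([\mathds{L}:\mathbb{Q}]+2\right)^{2}=222^{2}=49284,\]
\[8\sum_{i=2}^{g}d_{i}^{2}=8(\ell-2)(p-1)^{2}=8\cdot 9\cdot 484=34848<49284,\]
so all four conditions of Theorem 1.2 are indeed satisfied for \(\mathds{L}=\mathbb{Q}(\mu_{p\ell})\).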
Next, we set \(m^{[k]}(\mathds{L}_{n}):=m\left(\mathcal{G}_{n}^{[k]}(\mathds{L}_{n})\right)\). The following result gives an asymptotic lower bound for the numbers \(m^{[k]}(\mathds{L}_{n})\), as \(n\to\infty\).
**Theorem 3.4**.: _Let \(p\) be a prime number and \(\mathbb{L}\) be a number field for which the conditions of Theorem 1.2 are satisfied. Then, there exists a constant \(c>0\) (independent of \(n\) and \(k\)) and \(n_{0}\in\mathbb{Z}_{\geq 0}\), such that for all \(n\geq n_{0}\) and \(k\geq 1\), we have that_
\[m^{[k]}(\mathbb{L}_{n})\geq cp^{n}.\]
Proof.: It suffices to prove the result for \(k=1\). We set \(m_{n}:=m\left(\mathcal{G}_{n}^{[1]}(\mathbb{L}_{n})\right)\), and assume without loss of generality that \(m_{n}<\infty\). We obtain a lower bound on \(m_{n}\). Choose a minimal presentation
\[1\to R\to F\xrightarrow{\varphi}\mathcal{G}_{n}^{[1]}\to 1\]
such that the associated Golod-Shafarevich polynomial \(\mathcal{P}_{n}^{[1]}(t):=P(\mathcal{G}_{n}^{[1]},t)\) satisfies \(\mathcal{P}_{n}^{[1]}(t_{0})<0\) for some \(t_{0}\in(0,1)\). Let \(\{y_{1},\ldots,y_{m_{n}+1}\}\subset F\) be an admissible cutting datum. Note that by definition, \(\omega_{F}(y_{i})\geq 2\) for all \(i\). Setting \(x_{i}:=\varphi(y_{i})\), denote by \(G^{\prime}\) the quotient \(\mathcal{G}_{n}^{[1]}/\langle x_{1},\ldots,x_{m_{n}+1}\rangle^{\mathrm{Norm}}\). By definition, the admissible cutting datum \(\{y_{1},\ldots,y_{m_{n}+1}\}\) can be chosen so that \(P(G^{\prime},t)\geq 0\) for all \(t\in(0,1)\). From the proof of Theorem 1.2, we have that \(\mathcal{P}_{n}^{[1]}(t)\leq Q(t)\), where \(Q(t):=1-D_{n}t+R_{n}t^{2}+R_{n}^{\prime}t^{p}\). We find that
\[P(G^{\prime},t) \leq\mathcal{P}_{n}^{[1]}(t)+(m_{n}+1)t^{2}\] \[\leq Q(t)+(m_{n}+1)t^{2}\] \[=1-D_{n}t+(R_{n}+m_{n}+1)t^{2}+R_{n}^{\prime}t^{p}.\]
In what follows, we set \(a_{n}:=\frac{D_{n}}{2(R_{n}+m_{n}+1)}\). By the argument from the proof of Theorem 1.2, it follows that \(a_{n}\in(0,1)\) for large enough values of \(n\). Therefore, assume that \(n\) is large enough so that \(a_{n}\in(0,1)\). Then, \(P(G^{\prime},a_{n})\geq 0\), i.e.,
\[1-D_{n}a_{n}+(R_{n}+m_{n}+1)a_{n}^{2}+R_{n}^{\prime}a_{n}^{p}\geq 0.\]
We find that
\[1-D_{n}a_{n}+(R_{n}+m_{n}+1)a_{n}^{2}+R_{n}^{\prime}a_{n}^{p}\] \[= \frac{2^{p}(R_{n}+m_{n}+1)^{p}-2^{p-2}(R_{n}+m_{n}+1)^{p-1}D_{n}^ {2}+R_{n}^{\prime}D_{n}^{p}}{2^{p}(R_{n}+m_{n}+1)^{p}}\]
Since \(P(G^{\prime},a_{n})\geq 0\) and the denominator above is positive, the numerator is non-negative. This implies that
\[(R_{n}+m_{n}+1) \geq\frac{D_{n}^{2}}{4}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}(R_{n }+m_{n}+1)^{p}};\] \[\geq\frac{D_{n}^{2}}{4}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n }^{p}},\]
and therefore, we get the inequality
\[m_{n}\geq\frac{D_{n}^{2}}{4}-R_{n}+\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n }^{p}}. \tag{3.12}\]
By (3.10),
\[D_{n}^{2} \geq Ap^{2n};\] \[R_{n} =Bp^{2n}+O(p^{n}); \tag{3.13}\]
where
\[A =\left(1+\frac{[\mathbb{L}:\mathbb{Q}]}{2}\right)^{2}\] \[B =\left(\frac{\sum_{i=2}^{g}d_{i}^{2}}{2}\right).\]
Furthermore, by (3.11),
\[\frac{R_{n}^{\prime}D_{n}^{p}}{2^{p}R_{n}^{p}}=O(p^{2-p}). \tag{3.14}\]
Therefore, combining (3.12), (3.13) and (3.14), we have that
\[m_{n}\geq(A/4-B)p^{2n}-Cp^{n},\]
for some large enough constant \(C>0\). It follows from the condition (4) of Theorem 1.2 that \(A>4B\). Setting \(c:=A/4-B\), the result follows.
|
2309.05334 | MultIOD: Rehearsal-free Multihead Incremental Object Detector | Class-Incremental learning (CIL) refers to the ability of artificial agents
to integrate new classes as they appear in a stream. It is particularly
interesting in evolving environments where agents have limited access to memory
and computational resources. The main challenge of incremental learning is
catastrophic forgetting, the inability of neural networks to retain past
knowledge when learning a new one. Unfortunately, most existing
class-incremental methods for object detection are applied to two-stage
algorithms such as Faster-RCNN, and rely on rehearsal memory to retain past
knowledge. We argue that those are not suitable in resource-limited
environments, and more effort should be dedicated to anchor-free and
rehearsal-free object detection. In this paper, we propose MultIOD, a
class-incremental object detector based on CenterNet. Our contributions are:
(1) we propose a multihead feature pyramid and multihead detection architecture
to efficiently separate class representations, (2) we employ transfer learning
between classes learned initially and those learned incrementally to tackle
catastrophic forgetting, and (3) we use a class-wise non-max-suppression as a
post-processing technique to remove redundant boxes. Results show that our
method outperforms state-of-the-art methods on two Pascal VOC datasets, while
only saving the model in its current state, contrary to other
distillation-based counterparts. | Eden Belouadah, Arnaud Dapogny, Kevin Bailly | 2023-09-11T09:32:45Z | http://arxiv.org/abs/2309.05334v3 | # MultiOD: Rehearsal-free Multihead Incremental Object Detector
###### Abstract
Class-Incremental learning (CIL) is the ability of artificial agents to accommodate new classes as they appear in a stream. It is particularly interesting in evolving environments where agents have limited access to memory and computational resources. The main challenge of class-incremental learning is catastrophic forgetting, the inability of neural networks to retain past knowledge when learning a new one. Unfortunately, most existing class-incremental object detectors are applied to two-stage algorithms such as Faster-RCNN and rely on rehearsal memory to retain past knowledge. We believe that the current benchmarks are not realistic, and more effort should be dedicated to anchor-free and rehearsal-free object detection. In this context, we propose MultiOD, a class-incremental object detector based on CenterNet. Our main contributions are: (1) we propose a multihead feature pyramid and multihead detection architecture to efficiently separate class representations, (2) we employ transfer learning between classes learned initially and those learned incrementally to tackle catastrophic forgetting, and (3) we use a class-wise non-max-suppression as a post-processing technique to remove redundant boxes. Without bells and whistles, our method outperforms a range of state-of-the-art methods on two Pascal VOC datasets.
## 1 Introduction
Catastrophic forgetting, or catastrophic interference [28], is a significant challenge when artificial agents update their knowledge with new data. It involves losing past knowledge while rapidly transforming the model representation to fit the new data distribution. In scenarios where data arrives in streams, requiring ongoing model adaptation, the concept of Continual Learning (CL) gains prominence.
Class-incremental learning is a subdomain of Continual Learning, where new classes are added at each system update (called a state). It has gained increasing interest in the last few years due to the emergence of different deep learning algorithms [13, 17, 24, 34, 38, 42]. Class-Incremental Object Detection (CIOD) is interesting in practice. It can be deployed in autonomous cars that continuously explore objects on the road [37], in security cameras to easily detect infractions, or even at large events to continuously capture attendee density for statistical purposes [5]. In computer vision, class-incremental learning is often applied to classification, object detection, and segmentation. Rehearsal-free continual object detection poses an additional challenge compared to classification: the absence of annotations for objects belonging to earlier classes leads to their classification as background [29]. This phenomenon is called background interference (or shift), and it aggravates the effect of catastrophic forgetting.
The CIOD community has proposed several benchmarks to tackle the problem [29]. However, these benchmarks are not realistic and are barely applicable in real-life scenarios. On the one hand, most existing CIOD models [7, 24, 31, 33, 38] are based on Faster-RCNN [36], a two-stage detection algorithm. This approach is not useful in practice, where data appears at a fast pace and real-time object detectors are needed to quickly accommodate new knowledge. On the other hand, most attention has been devoted to rehearsal-based methods, where past data is replayed to refresh the model representation and tackle catastrophic forgetting. However, this scenario is often not realistic, because of the impossibility of accessing past data due to privacy issues or hardware limitations.
Figure 1: Mean Average Precision (IoU=0.5) on VOC0712 using different numbers of base classes (\(B\)) and incremental classes (\(I\))

In this paper, we push the effort towards developing continual object detectors that are anchor-free and rehearsal-free. This is indeed the most challenging scenario, but the most useful in practice. Therefore, we propose _Multihead Incremental Object Detector_ (\(MultIOD\)), a new CIOD model based on the CenterNet algorithm [47]. These are our main contributions:
1. _Architecture:_ we propose a multihead feature pyramid [22] to share upsampling layers across a group of classes that appear in the same state, while a multihead detector is used to efficiently separate class representations (Subsec 3.3).
2. _Training:_ we apply transfer learning between classes learned in the initial state and classes learned incrementally to tackle catastrophic forgetting (Subsec 3.4).
3. _Inference:_ we use a class-wise non-max-suppression as a post-processing technique to remove redundant boxes that might appear when two or more detection heads are activated at the same time during inference (Subsec 3.5).
Figure 1 shows that our method outperforms SID [32] and GT’ [10], two distillation-based methods built on top of CenterNet. Note that we reduce the model memory footprint by half because, unlike the other state-of-the-art distillation-based counterparts, our method does not require saving the previous model. In addition, our method trains faster, since the number of trainable parameters at each state is reduced thanks to the transfer learning scheme.
## 2 Related Work
Continual Object Detection is a hot topic of research. We categorize existing methods into three groups as in [17].
### Fine-Tuning-based approaches
Here, model parameters are continuously updated at each incremental state. ILOD [38] is one of the first works that tackled the CIOD problem. The authors propose a loss that balances the interplay between predictions of new classes, while they use a distillation loss to minimize the discrepancy between responses for past classes from the previous and the current models. [12] and [6] both modify the Region Proposal Network (RPN) of a Faster-RCNN [36] to accommodate new classes. The former uses a fully connected layer along with distillation to classify proposals. The latter adds a knowledge distillation loss from a teacher model using a domain-specific dataset. Similarly to [6], the authors of [46] use distillation with a sampling strategy that helps better select proposals of foreground classes. [33] distills knowledge for proposals that have a strong relation with ground-truth boxes. [41] also uses distillation of output logits, but with a YoloV3 backbone. Other works distill intermediate features instead of logits. [7] proposes a Hint Loss to tackle catastrophic forgetting, and uses an adaptive distillation approach that exploits both RPN outputs and features.
Rehearsal of past class exemplars is widely used to tackle catastrophic forgetting. It was initially proposed in [34] to classify images incrementally. Its effectiveness was proven in classification on multiple datasets [17, 3]. In object detection, it was used by [37] and [11], who combined it with replay and distillation, respectively. Meta-ILOD [16] uses both distillation and replay to tackle forgetting. In addition, it uses a gradient-conditioning loss for the region-of-interest component as a regularization strategy. In [26], the authors propose an attention mechanism for feature distillation and an adaptive exemplar selection strategy for rehearsal. [44] uses an external dataset and distills knowledge from two networks that learn past and new classes, respectively, to a new separate network. Alternatively, RILOD [19] helps in collecting and annotating data from search engines, and using it in incremental detection. Finally, IncDet [24] builds on Elastic Weight Consolidation (EWC) and uses a pseudo-labeling technique in addition to a Huber regularization loss. Pseudo-labeling is widely used by the community to tackle background interference; we provide in the appendix a figure that explains this phenomenon.
There are few CenterNet-based CIOD models [10, 32]. GT’ [10] combines the ground-truth bounding boxes with predictions of the model in the previous state. The latter are first converted to bounding boxes. Then, redundant boxes are cleaned before being distilled to the current model. Selective and Inter-related Distillation (SID) [32] is another continual detector where the authors propose a new distillation method. It consists in using Euclidean distance to measure the inter-relations between instances. This helps in considering the relationship among different instances, and thus improves the transfer. We compare \(MultIOD\) to both methods. Note that the latter requires saving two models in order to perform distillation, while ours needs the current model only. This means that we halve the memory footprint.
### Parameter-Isolation-based approaches
This type of approach uses a subset of the model parameters to learn a different set of classes each time.
MMN [20] freezes the most important weights in the neural network and uses the least useful ones to learn new classes. Weight importance is computed from the weight magnitudes, which are evaluated after each incremental state. Alternatively, [45] uses pruning techniques [27] to remove useless channels and residual blocks, while using a YoloV3-based ensemble network to perform detection.
\(MultIOD\) incorporates features of a parameter-isolation technique. We assign a dedicated detection head to each class while sharing a common feature pyramid among classes within the same state. The backbone remains the only component shared across the detector.
### Fixed-Representation-based approaches
These approaches are based on a frozen feature extractor to transfer knowledge between the initial and incremental classes. In RODEO [13], the authors freeze the neural network
after learning the first batch of classes. Later, they quantize the extracted features in order to have compact image representations. The latter are replayed in incremental states.
Transfer learning was proven to cope well with class-incremental learning when no memory of the past is allowed. [2] trains a deep feature extractor on the initial classes, and a set of Support Vector Machines (SVMs) [4] is used to learn classes incrementally. \(MultIOD\) shares the same spirit as [2]: the CenterNet backbone is shared between all classes, and a different detection head is specialized to learn each class. \(MultIOD\) can thus also be considered a fixed-representation-based algorithm.
Most of the works from the literature are built on top of a Faster-RCNN object detector. However, this detector cannot run in real time. Real-life applications require a faster model to learn continuously while acquiring data in streams. According to a recent survey [1], keypoint-based detectors are the fastest, while being efficient. That is why we choose CenterNet [47] as the base algorithm for our class-incremental detector \(MultIOD\). While multihead CenterNet has been investigated previously in [15], its primary focus was on accommodating multi-task objectives such as object detection, pose estimation, and semantic segmentation, rather than continual learning.
## 3 Proposed Method
### Problem Definition
Given a set of states \(\mathcal{S}=\{S_{0},S_{1},...,S_{n-1}\}\), where \(n\) is the number of states, the initial state \(S_{0}\) contains \(B\) classes, and incremental states \(S_{i>0}\) contain \(I\) classes each. In general form, we denote by \(|\mathcal{C}_{i}|\) the total number of classes seen so far in a state \(S_{i}\). \(\mathcal{D}=\{D_{0},D_{1},...,D_{n-1}\}\) are the sets of images of each state. An initial model \(M_{0}\) is trained from scratch on \(D_{0}\), which contains the first \(B\) classes. Incremental models \(M_{1}\), \(M_{2}\),..., \(M_{n-1}\) are trained on \(D_{1}\), \(D_{2}\),..., \(D_{n-1}\) in states \(S_{1}\), \(S_{2}\),..., \(S_{n-1}\), respectively. Note that in each set \(D_{i}\), only objects from classes of the corresponding state \(S_{i}\) are annotated. At each incremental state \(S_{i>0}\), the model \(M_{i}\) is initialized with the weights of model \(M_{i-1}\) (except for the new feature pyramid and detection heads, which are randomly initialized). \(M_{i}\) has access to all training data from new classes, but no data from past classes is available. At test time, annotations for all \(|\mathcal{C}_{i}|\) classes are available to evaluate the performance of the model on all data seen so far. Having no access to past data during training is the most challenging scenario in class-incremental learning, yet the most interesting one in practice.
An object detection model usually comprises:
\(\bullet\)**A Backbone:** a classification network without the final dense layer. Example: ResNet [14], EfficientNet [39], etc.
\(\bullet\)**An Upsampling Network:** receives the output from the backbone and increases its dimensions to generate final prediction maps with enhanced resolution. Example: Deformable Convolutions [8], Feature Pyramids [22], etc.
\(\bullet\)**A Detection Network:** takes the output of the upsampling network and makes the final prediction. Example: CenterNet [47], Yolo [35], SSD [25], etc.
Since \(MultIOD\) is based on CenterNet, we briefly recall its main components in the next subsection:
### CenterNet Object Detector
CenterNet is an anchor-free object detector. It considers objects as points, and it generates five types of output maps:
- Center map: encodes the center of the objects using a Gaussian distribution, where the mean corresponds to the object center location, the peak value is set to 1, and the standard deviation varies based on the size of the object. The focal loss is used as a training objective (Equation 1):
\[\mathcal{L}_{focal}=\frac{-1}{N}\sum_{xyc}\left\{\begin{array}{ll}(1-\hat{Y}_{xyc})^{\alpha}\,\log(\hat{Y}_{xyc})&\text{if }Y_{xyc}=1\\ (1-Y_{xyc})^{\beta}\,(\hat{Y}_{xyc})^{\alpha}\,\log(1-\hat{Y}_{xyc})&\text{otherwise}\end{array}\right. \tag{1}\]

where \(x\) and \(y\) are center map coordinates, \(c\) is the class index, and \(\alpha\), \(\beta\) are parameters of the focal loss. \(N\) is the number of objects in the image, \(\hat{Y}\) is the predicted center map, and \(Y\) is the ground-truth center map.

Figure 2: Illustration of \(MultIOD\), depicting two states, the initial state (on the left), and one incremental state (on the right). The model is trained from scratch in the initial state using data from classes C1 and C2, while in the incremental state, it is updated using data from classes C3 and C4 only. The backbone, the feature pyramid of classes C1 and C2, as well as their detection heads are frozen once these classes are learned. Only the feature pyramid of classes C3 and C4, and their detection heads are trained in the incremental state.
- Size map: consists of two maps (for width and height). At the location corresponding to the peak of each Gaussian (center of the object), the width of this object is inserted in the width map, and its height in the height map. The size loss is an L1 loss (Equation 2):
\[\mathcal{L}_{size}=\frac{1}{N}\sum_{i=0}^{N-1}|\hat{S_{i}}-S_{i}| \tag{2}\]
where \(\hat{S}\) is the predicted size map, and \(S\) is the ground-truth size map. Note that the loss is only computed on map coordinates corresponding to the center of the objects.
- Offset map: also consists of two maps (for the x and y axes). It is used to recover the discretization error caused by the output stride when encoding object centers. Its loss is also an L1 loss, computed on centers only (Equation 3):
\[\mathcal{L}_{offset}=\frac{1}{N}\sum_{i=0}^{N-1}|\hat{O_{i}}-O_{i}| \tag{3}\]
where \(\hat{O}\) is the predicted offset map, and \(O\) is the ground-truth offset map. The overall loss is the combination of the three losses (Equation 4):
\[\mathcal{L}=\lambda_{focal}\times\mathcal{L}_{focal}+\lambda_{size}\times \mathcal{L}_{size}+\lambda_{offset}\times\mathcal{L}_{offset} \tag{4}\]
where \(\lambda_{focal}=1.0\), \(\lambda_{size}=0.1\), \(\lambda_{offset}=1.0\)[47].
Note that CenterNet uses one center map per class, and shared size and offset maps between all classes.
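To make the training objective concrete, below is a minimal PyTorch sketch of the penalty-reduced focal loss of Equation 1 (our own illustrative implementation; the tensor layout and names such as `pred`/`target` are assumptions, not taken from the authors' code):

```
import torch

def center_focal_loss(pred, target, alpha=2.0, beta=4.0, eps=1e-7):
    """Penalty-reduced pixel-wise focal loss (Eq. 1), a sketch.

    pred, target: (B, C, H, W) center heatmaps; `target` holds one
    Gaussian per object, with peak value 1 at the object center.
    """
    pred = pred.clamp(eps, 1.0 - eps)        # numerical stability
    pos_mask = target.eq(1.0).float()        # Y_xyc == 1 (object centers)
    neg_mask = 1.0 - pos_mask                # every other location

    pos_loss = ((1 - pred) ** alpha) * torch.log(pred) * pos_mask
    neg_loss = ((1 - target) ** beta) * (pred ** alpha) \
               * torch.log(1 - pred) * neg_mask

    num_pos = pos_mask.sum().clamp(min=1.0)  # N: number of objects
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```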
### Multihead CenterNet for Class Separability
We build on CenterNet to develop our class-incremental detector. The central concept driving \(MultIOD\) is to differentiate the parameters responsible for learning distinct classes. This differentiation is not about complete separation, but rather about determining which parameters should be shared and which should be selectively distinguished. Therefore, we propose a multihead architecture, which we elaborate on hereafter:
- Multihead Feature Pyramid: Feature Pyramid Network (FPN) [22] helps to build high-level semantic feature maps at all scales. This network is efficient for detecting both large and small objects. In an incremental scenario, we use one feature pyramid for each group of classes that occur at the same time in the data stream. This helps in sharing a subset of parameters between these classes and reinforces their learning. The detailed architecture of our FPN as well as all implementation details are in the appendix.
- Multihead Detector: We adapt the original CenterNet [47] architecture to a class-incremental scenario as follows: in contrast to [47], which shares size and offset maps between classes, we use one size map and one offset map for each class. The motivation behind this choice is to enable the separation of class representations in the architecture. Furthermore, the original CenterNet, by definition [47], lacks the capability to address scenarios in which two objects of distinct classes happen to share the exact same center position. This limitation results in the necessity to encode either one object or the other in the size map, but not both simultaneously. Preliminary experiments confirmed an increase in performance when using the multihead CenterNet.
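A sketch of how such per-class detection heads could look is given below (the small conv-head design, layer sizes, and names such as `PerClassHeads` are our assumptions; the exact head architecture is deferred to the appendix):

```
import torch
import torch.nn as nn

def _head(in_ch, out_ch, mid=64):
    # a small convolutional head; sizes are illustrative placeholders
    return nn.Sequential(
        nn.Conv2d(in_ch, mid, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(mid, out_ch, 1))

class PerClassHeads(nn.Module):
    """One (center, size, offset) head triplet per class."""
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.center = nn.ModuleList([_head(in_ch, 1) for _ in range(num_classes)])
        self.size = nn.ModuleList([_head(in_ch, 2) for _ in range(num_classes)])
        self.offset = nn.ModuleList([_head(in_ch, 2) for _ in range(num_classes)])

    def forward(self, feats):
        # feats: output of the feature pyramid shared by this class group
        center = torch.cat([h(feats).sigmoid() for h in self.center], dim=1)
        size = torch.stack([h(feats) for h in self.size], dim=1)      # (B, C, 2, H, W)
        offset = torch.stack([h(feats) for h in self.offset], dim=1)  # (B, C, 2, H, W)
        return center, size, offset
```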
### Transfer Learning to Tackle Forgetting
In a rehearsal-free protocol, updating our model using only new data will result in a significant bias towards these specific classes in the weights of the backbone, feature pyramids, and detection heads. This bias leads to catastrophic forgetting, even when a multihead architecture is employed. To mitigate this, we propose a strategy that consists in freezing the backbone, feature pyramids of previously learned classes, and their corresponding detection heads once they are learned. This strategy efficiently minimizes the distribution shift of model parameters, facilitating effective transfer between the classes learned during the initial state and those learned incrementally. Transfer learning was proven to be effective on rehearsal-free continual learning for both classification [2] and detection [13]. It is noteworthy to mention that freezing a large part of the neural network leads to faster training since we have fewer parameters to optimize.
Figure 2 provides an overview of our method. It depicts two states: one initial, and one incremental. In the initial state, since we have only one group of classes, we use one feature pyramid shared between these classes, and one detection head per class. We perform a training from scratch with all data from these classes. In the incremental state, we freeze the backbone, the feature pyramid of past classes and their detection heads. We add a randomly initialized feature pyramid and detection heads to learn the new classes.
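In code, this freezing step reduces to disabling gradients. A minimal sketch follows, assuming (our naming, not the authors') a model that exposes `backbone`, a `ModuleList` of per-state feature `pyramids`, and a `ModuleList` of per-state `heads`, with the newly added modules stored last:

```
import torch

def freeze_past_states(model, lr=1e-4):
    """Freeze everything learned in previous states; only the newly
    added feature pyramid and detection heads remain trainable."""
    for p in model.backbone.parameters():
        p.requires_grad_(False)
    for module in list(model.pyramids[:-1]) + list(model.heads[:-1]):
        for p in module.parameters():
            p.requires_grad_(False)
    # optimize only what is left trainable (optimizer and lr are placeholders)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr)
```

Besides mitigating forgetting, this also shrinks the set of parameters the optimizer has to update, which is where the faster incremental training comes from.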
**Training Objective.** Since we use one separate size map and offset map for each class, we modify their respective losses, as in Equations 5 and 6.
\[\mathcal{L}^{\prime}_{size}=\frac{1}{N}\sum_{c=0}^{|\mathcal{C}|-1}\sum_{i=0}^{N_{ c}-1}|\hat{S}_{ic}-S_{ic}| \tag{5}\]
where \(|\mathcal{C}|\) is the number of classes seen so far, and \(N=N_{0}+N_{1}+...+N_{|\mathcal{C}|-1}\) is the total number of objects in the image. Similarly for the offset loss:
\[\mathcal{L}^{\prime}_{offset}=\frac{1}{N}\sum_{c=0}^{|\mathcal{C}|-1}\sum_{i=0 }^{N_{c}-1}|\hat{O}_{ic}-O_{ic}| \tag{6}\]
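Both modified losses are plain masked L1 terms; a compact sketch follows (the tensor shapes and the `mask` convention of one indicator per object center are our assumptions):

```
import torch

def per_class_l1(pred, target, mask):
    """Masked L1 loss of Eqs. (5)-(6), evaluated only at object centers.

    pred, target: (B, C, 2, H, W) per-class size or offset maps;
    mask: (B, C, 1, H, W) with 1 at each object's center, 0 elsewhere.
    """
    n = mask.sum().clamp(min=1.0)  # N: total number of objects
    return (torch.abs(pred - target) * mask).sum() / n
```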
### Class-wise Non-Max-Suppression
As mentioned in Section 3.1, the model is evaluated on all classes seen so far. In object detection, bounding box generation is usually followed by a Non-Max-Suppression (NMS) operation, in which redundant boxes are eliminated and only the most pertinent ones are kept. However, the authors of CenterNet [47] demonstrate that NMS is not useful in their algorithm: it directly predicts object centers, resulting in a situation where each object corresponds to just one positive anchor in the ground truth.
In \(MultIOD\), since we have one detection head per class, two or more heads can be activated at the same time during inference. Thus, it is crucial to use NMS to mitigate the problem of redundant boxes. We choose class-wise NMS for our model (Algorithm 1) for two reasons. First, since the backbone is frozen after learning the first set of classes, the model tends to favor predictions of past classes, whose confidence scores are higher than those of new classes. Second, it is important not to remove boxes that belong to different classes but share the same location. Experimental evidence of this finding is in Subsection 5.3.3.
```
1: function ClassWiseNMS(detections, threshold)
2:   Sort detections by decreasing confidence score within each class
3:   Initialize an empty list selected-dets
4:   for class in classes do
5:     class-dets \(\leftarrow\) detections of class
6:     while class-dets is not empty do
7:       max-det \(\leftarrow\) detection with highest confidence in class-dets
8:       Add max-det to selected-dets
9:       Remove max-det from class-dets
10:      for det in class-dets do
11:        if IoU(det, max-det) \(\geq\) threshold then
12:          Remove det from class-dets
13:  return selected-dets
```
**Algorithm 1** Class-wise Non-Maximum Suppression
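For reference, a minimal runnable Python version of Algorithm 1 is sketched below (the \((x_{1},y_{1},x_{2},y_{2})\) box format and all function names are our own choices, not taken from the released implementation):

```
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / max(area_a + area_b - inter, 1e-9)

def class_wise_nms(detections, threshold=0.5):
    """detections: list of (box, score, class_id) tuples; boxes are
    suppressed only against higher-scoring boxes of the same class."""
    selected = []
    for c in {d[2] for d in detections}:
        dets = sorted((d for d in detections if d[2] == c),
                      key=lambda d: d[1], reverse=True)
        while dets:
            best = dets.pop(0)
            selected.append(best)
            dets = [d for d in dets if iou(d[0], best[0]) < threshold]
    return selected
```

Note that boxes of different classes never suppress each other, which is exactly the behavior motivated above.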
## 4 Experiments
### Compared Methods
We compare \(MultIOD\) with CenterNet-based methods:
\(\bullet\)**Vanilla Fine-tuning (FT)** - the lower-bound method in which we directly fine-tune CenterNet without any specific strategy to tackle catastrophic forgetting.
\(\bullet\)**Learning without Forgetting (LwF)**[21] - the previous model is distilled to guide the training of the current model. Note that this method was initially proposed for classification.
\(\bullet\)**SID**[32] - this is a distillation-based method, where the authors distill intermediate features between the past and current models, and also distances between samples. The method was applied to both CenterNet and FCOS [40] detectors, but only the former is used here for comparability.
\(\bullet\)**SDR**[30] - this method was initially proposed for semantic segmentation. It is based on three main components: prototype matching, feature sparsification, and contrastive learning. We try two versions of this method: SDR1 distills only the center map, while SDR2 distills all maps.
\(\bullet\)**GT’**[10] - instead of directly distilling CenterNet maps from the previous to the current model, the past model is used to annotate data from new classes. After extraction, the bounding boxes are encoded along with the ground truth before training the model on both the ground-truth and the pseudo-labeled data. Please note that the experimental protocol of this method is not realistic, because the authors remove images where past-class objects are present.
\(\bullet\)**Full** - the upper-bound method. A training from scratch is performed on all classes with all their available data.
Note that we take results of \(LwF\) and \(SDR\) from [32] and [30], respectively.
Since we do not use exemplar memory, for a fair comparison we only compare our method with algorithms that do not perform any replay. While all the aforementioned methods build upon CenterNet with a ResNet50 backbone, we opt for the EfficientNet family of models as backbone, because they are more suitable for efficient training. Unless specified otherwise, our choice of backbone is EfficientNet-B3, as it provides the best trade-off between performance and number of parameters. CenterNet with ResNet50 contains about 32M parameters, while CenterNet with EfficientNet-B3 contains only 17.7M; that is, we reduce the number of parameters by around 45%. We will show in the results section that even though our model achieves lower results in classical training (with all data available at once), it still outperforms the other methods by a good margin in almost all configurations. Our model is not only more robust against catastrophic forgetting, but also parameter-efficient.
### Datasets
\(\bullet\)**Pascal VOC2007**[9] - this dataset contains 20 object classes with 5,011 and 4,952 training and validation examples, respectively. Note that we use both the training and validation sets for training, as in [10, 32, 47].
\(\bullet\)**Pascal VOC0712**[9] - it is a version of VOC dataset, where we use both the training and validation sets of the versions 2007 and 2012. We use the same test set of version 2007 for validation. In total, this dataset contains 16,550 and 4,952 training and validation images, respectively.
\(\bullet\)**MNIST-Detection** (MNIST below) - this dataset was initially designed for digit classification [18]. We use an object-detection version of it to perform our ablation studies, as it runs faster than the Pascal VOC dataset. We used this GitHub repository1 to create training and validation sets of 5000 and 1000 images of size 512\(\times\)512, respectively. We made sure to create a challenging dataset. The dataset creation procedure is detailed in the appendix.
Footnote 1: [https://github.com/lukkelas/MNIST-ObjectDetection](https://github.com/lukkelas/MNIST-ObjectDetection)
### Methodology
Following [16, 29, 32], we evaluate our method using three incremental learning scenarios: we order classes alphabetically, then divide them into two states. We use the following numbers of classes per state for the VOC dataset: \(B=19,I=1\); \(B=15,I=5\) and \(B=10,I=10\). For the MNIST dataset, we use \(B=9,I=1\); \(B=7,I=3\) and \(B=5,I=5\). Varying the number of classes between states is important to assess the robustness of CL methods.
### Evaluation Metrics
\(\bullet\)**[email protected]** - the mean average precision computed at an IoU (Intersection-over-Union) threshold of 0.5.
\(\bullet\)**mAP@[0.5, 0.95]** - the mean average precision averaged on IoU threshold that varies between 0.5 and 0.95 with a step of 0.05. Note that results with this metric are presented in the appendix.
\(\bullet\)**F\({}_{\mathbf{mAP}}\)** - the harmonic mean between the mean average precision of past and new classes.
\[F_{mAP}=2\times\frac{mAP_{past}\times mAP_{new}}{mAP_{past}+mAP_{new}} \tag{7}\]
When \(F_{mAP}=0.0\), this means that the model completely failed to detect either past classes or new classes.
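In code, Equation 7 is the following small helper (our own trivial transcription):

```
def f_map(map_past, map_new):
    """Harmonic mean of past- and new-class mAP (Equation 7)."""
    if map_past + map_new == 0:
        return 0.0  # the model failed entirely on past or new classes
    return 2 * map_past * map_new / (map_past + map_new)
```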
## 5 Results and Discussion
### Main Results
Tables 1 and 2 provide results of \(MultIOD\) compared to state-of-the-art methods on the VOC2007 and VOC0712 datasets, respectively. Results show that our method outperforms the state of the art in 11 cases out of 12.
For the VOC2007 dataset, in the scenario \(B=19,I=1\), our method gains up to 14.4 mAP points compared to the state-of-the-art method SID [32]. This is intuitive insofar as fixed representations are more powerful when trained with more classes in the initial state. The diversity and size of the initial dataset are crucial to obtain universal representations that transfer to unseen classes.
In the scenario \(B=15,I=5\), \(MultID\) outperforms SID in terms of mAP score, but fails to do so for the \(F_{mAP}\) score. This indicates that in this specific case, SID balances better the compromise between plasticity (the ability to adapt to new class distribution), and stability (the ability to keep as much knowledge as possible from the past). Vanilla Fine-tuning is the baseline method where the model is continuously updated using new class data and no specific strategy is applied to tackle catastrophic forgetting. This method behaves the worst as it pushes for the complete plasticity of the neural network. This is an extreme case in which the mAP for past classes is equal to zero or nearly so, which causes the \(F_{mAP}\) score to be zero as we can see in the case of \(B=10,I=10\).
LwF [21] is a method that was initially proposed to tackle continual learning in the classification task. It consists in distilling the current model from the previous one, in order to mimic its predictions. Even though this method provides good results in rehearsal-free continual classification [3], it fails to generalize to the detection task. One reason could be that classification is orders of magnitude easier than object detection, and more specialized techniques are needed to tackle the latter.
For VOC0712, \(MultIOD\) outperforms the other methods in all cases, followed by GT’ [10]. It is noteworthy that the experimental protocol of this method is not realistic insofar as the authors remove images that contain objects from past classes when training the model on new classes. Thus, they avoid the problem of background shift by removing the overlap between annotations of background and past-class objects. However, in real-life situations, it is impossible to have this separation, as both past- and new-class objects can appear in the scene, and the model should be able to effectively learn new classes while past ones are also there. SDR [30] is a method that was initially proposed for continual semantic segmentation, and was later adapted by [10] to object detection. Regardless of its effectiveness on semantic segmentation, it does not achieve the same impressive performance on continual object detection: it reaches good scores in the protocol \(B=19,I=1\) only. One reason could be that in CenterNet there is less information to distill compared to segmentation, where all object pixels are used.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Method & Full & \multicolumn{2}{c|}{\(B=19,I=1\)} & \multicolumn{2}{c|}{\(B=15,I=5\)} & \multicolumn{2}{c|}{\(B=10,I=10\)} \\ \hline & \(mAP\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) \\ \hline FT & & 20.2 & 26.3 & 16.3 & 17.3 & 28.0 & 0.0 \\ LwF [21] & & 36.3 & 28.1 & 12.0 & 11.5 & 22.5 & 0.0 \\ SDR 1 [30] & & 60.4 & 22.8 & 41.4 & 23.9 & 35.5 & 32.2 \\ SDR 2 [30] & & 49.8 & 25.8 & 21.9 & 22.3 & 30.1 & 9.6 \\ SID [32] & & 41.6 & 19.3 & 48.4 & 17.9 & 45.5 & 44.2 \\ GT’ [10] & & 65.2 & 39.2 & 54.0 & 33.8 & 51.2 & 50.9 \\ \hline Ours & 69.5 & **68.0** & **56.9** & **60.7** & **47.0** & **56.6** & **55.8** \\ \hline \end{tabular}
\end{table}
Table 2: [email protected] and \(F_{mAP}\) score on the VOC0712 dataset.
Coincidentally, all CenterNet-based continual object detectors to which we compare \(MultIOD\) use distillation to update model weights from past states. Overall, the results show the usefulness of transfer learning compared to distillation. This is true especially when classes learned incrementally belong to the same domain as classes learned initially.
In both Tables 1 and 2, it is crucial to emphasize that our upper-bound (Full) model scores 5.2 and 3.6 mAP points lower, respectively, than the classical training from the state of the art. Nevertheless, it is interesting to notice that even with this drop in performance under traditional training, our model excels in comparison to other methods across nearly all scenarios involving incremental learning. This finding shows that \(MultIOD\) is more robust against catastrophic forgetting, because the gap with our upper-bound model is tighter compared to the other methods with their respective upper-bound models.
Recall that the difference in performance between the two upper-bound models stems from the fact that our detector is built on top of an EfficientNet-B3 backbone, while the other methods use a ResNet50. In addition to the reduction in the number of parameters (17.7M vs. 32M), \(MultIOD\) halves the memory footprint because it does not require keeping the past model to learn the new classes. In contrast, all methods used for comparison need the past model in order to extract its activations and distill its knowledge to the current model. Thus, these methods require keeping two models simultaneously during training.
Example predictions of \(MultIOD\) are provided in the appendix.
### Additional Experiment
In this experiment, we take the challenge to a more difficult level by adding only one or two classes in each incremental state. In this case, we use one feature pyramid for each class to achieve a complete separation of class representations in the upsampling and detection heads. To avoid an explosion in the number of parameters compared to the previous protocol, we reduce the number of filters in the feature pyramids so as to keep a number of parameters comparable to the model used in previous experiments. Details of this architecture are in the appendix. Results in Table 3 show that we lose approximately 1 to 6 mAP points when passing from one state to another. Unfortunately, CenterNet-based state-of-the-art methods do not use this protocol, and we thus cannot compare them with \(MultIOD\); we provide our results for future reference.
### Ablation Study
#### 5.3.1 Ablation of Backbones
The backbone has a direct impact on model performance because it is responsible for the quality of the extracted features. In Table 4, we vary the backbone and use EfficientNet-B0 (10 million parameters), EfficientNet-B3 (17.7 million parameters), and EfficientNet-B5 (37 million parameters) on VOC2007 and VOC0712. We provide results on MNIST in the appendix. Results show that the best overall backbone is EfficientNet-B3. EfficientNet-B0 is smaller and has more difficulty generalizing; however, it still provides decent performance with only 10M parameters. EfficientNet-B5 provides results comparable to EfficientNet-B3. We keep the latter, since it offers the best compromise between parameter count and performance.
#### 5.3.2 Ablation of Frozen Layers
One of the main components of our method is the use of transfer learning between classes learned in the initial state and classes learned incrementally. Therefore, studying the impact of freezing the different layers of our system is crucial to assess its robustness. In Table 5, we use the MNIST dataset and an EfficientNet-B0 backbone in a setting where \(B=5\) and \(I=5\), and test the following configurations: (1) do not freeze anything in the architecture, (2) freeze the backbone only, (3) freeze both the backbone and the feature pyramid of past classes, and (4) our full method, in which we freeze not only the backbone, but also the feature pyramid of past classes and their detection heads. Results show that it is crucial to freeze past-class detection heads in order to avoid their catastrophic forgetting. In fact, it is meaningful to freeze them only if all the preceding components are also frozen (the feature pyramid of past classes and the backbone).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline States & \(S_{0}\) & \(S_{1}\) & \(S_{2}\) & \(S_{3}\) & \(S_{4}\) & \(S_{5}\) \\ \hline & \(B=15\) & \multicolumn{5}{c|}{\(I=1\)} \\ \hline FT & 56.6 & 0.7 & 0.3 & 1.0 & 0.3 & 1.8 \\ Ours & 53.4 & 47.9 & 45.6 & 43.3 & 42.7 \\ \hline & \(B=10\) & \multicolumn{5}{c|}{\(I=2\)} \\ \hline FT & 57.7 & 5.3 & 4.7 & 5.1 & 2.1 & 3.2 \\ Ours & 48.4 & 45.7 & 43.0 & 39.7 & 38.0 \\ \hline \end{tabular}
\end{table}
Table 3: [email protected] of \(MultIOD\) on VOC2007.
#### 5.3.3 Ablation of Non-Max-Suppression Strategies
Using multiple detection heads leads to the risk of predicting multiple objects (from different classes) at the same pixel location. Thus, it is important to eliminate irrelevant bounding boxes (false positives). In this experiment, we ablate the following NMS strategies:
\(\bullet\) No-NMS: this is the method originally used in CenterNet [47], where no non-max-suppression is applied.
\(\bullet\) Inter-class NMS: here, a standard NMS algorithm is utilized to eliminate redundant bounding boxes, irrespective of their class membership.
\(\bullet\) Class-wise NMS: as described in Subsection 3.5, it is designed to enhance the precision of boxes selection within individual classes, effectively curbing redundancy and selecting the most pertinent boxes for each specific class.
\(\bullet\) Soft-NMS: unlike traditional NMS, which removes boxes that overlap significantly, it applies a decay function to the confidence scores of neighboring boxes, gradually reducing their impact. This results in a smoother suppression of redundant boxes and helps retain boxes with slightly lower scores that might still contribute to accurate detection.
Table 6 shows that the method providing the best results is class-wise NMS, the one we choose to use. The second-best method is the standard inter-class NMS. The former helps in detecting more objects, specifically when objects of different classes highly overlap. This comes at the expense of some false positives. In contrast, inter-class NMS reduces the number of false positives at the expense of not detecting bounding boxes of different classes that are at the same location in the image. Depending on the use case, one or the other method can be preferred. In our use case, Soft-NMS did not provide good results. This can be explained by the fact that the gradual decay of confidence scores in Soft-NMS might inadvertently allow boxes with lower scores to persist, potentially leading to false positives or less accurate detections. Finally, unsurprisingly, the method with the worst performance is No-NMS, due to the high number of false positives. Note that the same ablation for the VOC2007 dataset is in the appendix.
## 6 Potential Negative Societal Impact
As AI researchers, we recognize that technological advancements may pose drawbacks, especially in continual object detection. We emphasize the following risks:
\(\bullet\) Data Privacy Concerns: Cameras used for object detection in public spaces raise data privacy concerns. Gathering and using personal data may lead to misuse under privacy regulations.
\(\bullet\) Unauthorized Surveillance: Cameras meant for security may be exploited for unauthorized surveillance, violating individual rights and privacy.
\(\bullet\) Bias and Discrimination: Continual Learning algorithms trained on data streams may lack diversity, introducing biases into automated decisions. Evolving models could amplify existing biases, resulting in inaccurate assessments and unequal resource access.
Addressing these concerns is vital to prevent the compromise of personal privacy, equality, and civil liberties by the benefits of continual object detection.
## 7 Conclusions and Perspectives
In this paper, we present \(MultIOD\), a class-incremental object detector based on CenterNet [47]. Our approach uses a multihead detection component along with a frozen backbone. A multihead feature pyramid is also used to ensure a satisfying trade-off between rigidity and plasticity. Finally, it involves an efficient class-wise NMS that robustly removes duplicate bounding boxes. Results show the effectiveness of our approach against CenterNet-based methods on the Pascal VOC datasets [9] in many incremental scenarios. In the future, it would be interesting to focus more on scenarios where a different number of classes is encountered at each state. It would also be useful to test the method's effectiveness on larger datasets such as COCO [23], even though larger datasets tend to yield more transferable features [43].
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Method & Full & \multicolumn{2}{c|}{\(B=9,I=1\)} & \multicolumn{2}{c|}{\(B=7,I=3\)} & \multicolumn{2}{c|}{\(B=5,I=5\)} \\ \hline & \(mAP\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) \\ \hline No-NMS & 90.2 & 89.5 & 89.7 & 90.8 & 91.3 & 88.4 & 88.2 \\ Soft-NMS & 91.5 & 89.7 & 89.5 & 91.1 & 91.4 & 88.5 & 88.4 \\ Inter-class NMS & 91.8 & 90.2 & 90.2 & 91.7 & 92.0 & 89.9 & 89.8 \\ Class-wise NMS & **93.1** & **91.3** & **91.2** & **93.1** & **93.5** & **91.3** & **91.3** \\ \hline \end{tabular}
\end{table}
Table 6: Performance of our model using MNIST dataset with different NMS strategies and EfficientNet-B0.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{7}{c|}{VOC2007} & \multicolumn{7}{c|}{VOC0712} \\ \hline Method & Full & \multicolumn{2}{c|}{\(B=19,I=1\)} & \multicolumn{2}{c|}{\(B=15,I=5\)} & \multicolumn{2}{c|}{\(B=10,I=10\)} & Full & \multicolumn{2}{c|}{\(B=19,I=1\)} & \multicolumn{2}{c|}{\(B=15,I=5\)} & \multicolumn{2}{c|}{\(B=10,I=10\)} \\ \hline & \(mAP\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) & \(mAP\) & \(F_{mAP}\) \\ \hline EfficientNet-B0 & 56.7 & 55.7 & 35.6 & 49.2 & 33.8 & 46.3 & 45.5 & 66.9 & 65.0 & 54.0 & 57.7 & 44.7 & 53.9 & 52.9 \\ EfficientNet-B3 & 60.4 & **59.9** & 39.2 & **52.6** & **37.8** & **48.4** & **47.2** & 69.5 & 68.0 & **56.9** & 60.7 & **47.0** & **56.6** & **55.8** \\ EfficientNet-B5 & **61.3** & **59.9** & **45.2** & 52.5 & 36.5 & 45.2 & 42.4 & **70.3** & **68.4** & 54.7 & **60.8** & 46.5 & 55.3 & 53.9 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation of backbones on VOC2007 and VOC0712 datasets ([email protected]).
Table 5: Performance of \(MultIOD\) on MNIST with EfficientNet-B0 and \(B=5,I=5\) when ablating its component freezing.
2309.11341 | Article Classification with Graph Neural Networks and Multigraphs | Classifying research output into context-specific label taxonomies is a
challenging and relevant downstream task, given the volume of existing and
newly published articles. We propose a method to enhance the performance of
article classification by enriching simple Graph Neural Network (GNN) pipelines
with multi-graph representations that simultaneously encode multiple signals of
article relatedness, e.g. references, co-authorship, shared publication source,
shared subject headings, as distinct edge types. Fully supervised transductive
node classification experiments are conducted on the Open Graph Benchmark
OGBN-arXiv dataset and the PubMed diabetes dataset, augmented with additional
metadata from Microsoft Academic Graph and PubMed Central, respectively. The
results demonstrate that multi-graphs consistently improve the performance of a
variety of GNN models compared to the default graphs. When deployed with SOTA
textual node embedding methods, the transformed multi-graphs enable simple and
shallow 2-layer GNN pipelines to achieve results on par with more complex
architectures. | Khang Ly, Yury Kashnitsky, Savvas Chamezopoulos, Valeria Krzhizhanovskaya | 2023-09-20T14:18:04Z | http://arxiv.org/abs/2309.11341v2 | # Improving Article Classification with Edge-Heterogeneous Graph Neural Networks
###### Abstract
Classifying research output into context-specific label taxonomies is a challenging and relevant downstream task, given the volume of existing and newly published articles. We propose a method to enhance the performance of article classification by enriching simple Graph Neural Networks (GNN) pipelines with edge-heterogeneous graph representations. SciBERT is used for node feature generation to capture higher-order semantics within the articles' textual metadata. Fully supervised transductive node classification experiments are conducted on the Open Graph Benchmark (OGB) ogbn-arxiv dataset and the PubMed diabetes dataset, augmented with additional metadata from Microsoft Academic Graph (MAG) and PubMed Central, respectively. The results demonstrate that edge-heterogeneous graphs consistently improve the performance of all GNN models compared to the edge-homogeneous graphs. The transformed data enable simple and shallow GNN pipelines to achieve results on par with more complex architectures. On ogbn-arxiv, we achieve a top-15 result in the OGB competition with a 2-layer GCN (accuracy 74.61%), being the highest-scoring solution with sub-1 million parameters. On PubMed, we closely trail SOTA GNN architectures using a 2-layer GraphSAGE by including additional co-authorship edges in the graph (accuracy 89.88%). The implementation is available at: [https://github.com/lyvykhang/edgehetero-nodeproppred](https://github.com/lyvykhang/edgehetero-nodeproppred).
Heterogeneous Graph Learning · Graph Neural Networks · Article Classification · Document Relatedness
## 1 Introduction
Article classification is a challenging downstream task within natural language processing (NLP) [16]. An important practical application is classifying existing or newly-published articles according to specific research taxonomies. The task can be approached as a graph node classification problem, where nodes represent articles with corresponding feature mappings, and edges are defined by a strong signal of article relatedness, e.g. citations/references. Conventionally, graph representation learning is handled in two phases: unsupervised node feature generation, followed by supervised learning on said features using the graph structure. Graph neural networks (GNNs) can be successfully employed for the second phase of such problems, being capable of preserving the rich structural information encoded by graphs. In recent years, prolific GNN architectures have achieved strong performance
on citation network benchmarks (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Frasca et al., 2020; Li et al., 2021).
We focus on combining textual information from articles with various indicators of article relatedness (citation data, co-authorship, subject fields, and publication sources) to create a graph with multiple edge types, further referred to as an _edge-heterogeneous_ graph, also known as multi-graphs or multipartite networks (Barabasi and Posfai, 2017). We use two established node classification benchmarks - the citation graphs ogbn-arxiv and PubMed - and leverage their connection to large citation databases - Microsoft Academic Graph (MAG) and PubMed Central - to retrieve the metadata fields and enrich the graph structure with additional edge types (Hu et al., 2020; Sen et al., 2008). For feature generation, SciBERT is used to infer embeddings based on articles' titles and abstracts (Beltagy et al., 2019), with the intention of capturing higher-order semantics compared to the defaults, i.e. Skip-Gram or bag-of-words. We test our transformed graphs with a variety of GNN backbone models, converted to support heterogeneous input using the relational graph convolutional network (R-GCN) framework (Schlichtkrull et al., 2018). In essence, we approach a typically homogeneous task using heterogeneous techniques. The method is intuitively simple and interpretable; we do not utilize complex model architectures and training frameworks, focusing primarily on data retrieval and preprocessing to boost the performance of simpler models, thus maintaining a reasonably low computational cost and small number of fitted parameters.
A considerable volume of research is devoted to article classification, graph representation learning with respect to citation networks, and the adaptation of these practices to heterogeneous graphs (Wu et al., 2019; Bing et al., 2022). However, the application of _heterogeneous_ graph enrichment techniques to article classification is not well-studied and presents a research opportunity. Existing works on heterogeneous graphs often consider multiple node types, expanding from article to entity classification; we exclusively investigate the heterogeneity of paper-to-paper relationships to remain consistent with the single-node-type problem setting. The emergence of rich metadata repositories for papers, e.g. OpenAlex, illustrates the relevance of our research (Priem et al., 2022).

Figure 1: Illustration of the proposed edge heterogeneity, which enables the neighboring feature aggregation for a node \(X_{1}\) to be performed across a variety of subgraphs, leveraging multiple signals of article relatedness (References, Authorship, and Journal depicted here).
Scalability is often a concern with GNN architectures. For this reason, numerous approaches simplify typical GNN architectures with varying strategies, e.g. pre-computation or linearization, without sacrificing significant performance in most downstream tasks (Frasca et al., 2020; Wu et al., 2019; Prieto et al., 2023). Other solutions avoid GNNs altogether, opting for simpler approaches based on early graph-based techniques like label propagation, which outperform GNNs in several node classification benchmarks (Huang et al., 2021). The success of these simple approaches raises questions about the potential impracticality of deep GNN architectures on large real-world networks with a strong notion of locality, and whether or not such architectures are actually necessary to achieve satisfactory performance.
Compared to simple homogeneous graphs, heterogeneous graphs encode rich structural and semantic information, and are more representative of real-world information networks and entity relationships (Bing et al., 2022). For example, networks constructed from citation databases can feature relations between papers, their authors, and shared keywords, often expressed as RDF triples, e.g. "_paper_ \(\xrightarrow{\text{(co-)authored by}}\) _author_," "_paper_ \(\xrightarrow{\text{includes}}\) _keyword_," "_paper_ \(\xrightarrow{\text{cites}}\) _paper_." Heterogeneous GNN architectures share many similarities with their homogeneous counterparts; a common approach is to aggregate feature information from local neighborhoods, while using additional modules to account for varying node and/or edge types (Yang et al., 2022). Notably, the relational graph convolutional network (R-GCN) approach by Schlichtkrull et al. (2018) shows that GCN-based frameworks can be effectively applied to modeling relational data, specifically for the task of node classification. The authors propose a modeling technique where the message passing functions are duplicated and applied individually to each relationship type. This transformation can be generalized to a variety of GNN convolutional operators in order to convert them into their relational (heterogeneous) counterparts.
## 2 Methodology
We propose an approach focusing on dataset provenance, leveraging their linkage to large citation and metadata repositories, e.g. MAG and PubMed Central, to retrieve additional features and enrich their graph representations. The proposed method is GNN-agnostic, compatible with a variety of model pipelines (provided they can function with heterogeneous input graphs) and node embedding techniques (results are presented with the provided features, plus the SciBERT embeddings). Figure 1 provides a high-level overview of the method.
The tested GNN backbones (see section 3) are converted to support heterogeneous input using the aforementioned R-GCN transformation defined by Schlichtkrull et al. (2018), involving the duplication of the message passing functions at each convolutional layer per relationship type; we employ the PyTorch Geometric (PyG) implementation of this technique, using the mean as the aggregation operator (Fey and Lenssen, 2019).
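To make this transformation concrete, below is a minimal sketch (ours, not the repository's code) of a 2-layer edge-heterogeneous network in PyTorch Geometric: each relation gets its own convolution, and per-node outputs are mean-aggregated across relations. `SAGEConv` is used here because it converts cleanly to heterogeneous input; the relation names are illustrative placeholders.

```python
import torch
from torch_geometric.nn import HeteroConv, SAGEConv

# Illustrative relation names; the actual edge types follow section 2.2.
RELATIONS = [("paper", "cites", "paper"),
             ("paper", "shares_author", "paper"),
             ("paper", "shares_field", "paper")]

class RelationalGNN(torch.nn.Module):
    """2-layer GNN whose message passing is duplicated per edge type,
    with per-node outputs mean-aggregated across relations."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = HeteroConv(
            {rel: SAGEConv(in_dim, hid_dim) for rel in RELATIONS}, aggr="mean")
        self.conv2 = HeteroConv(
            {rel: SAGEConv(hid_dim, num_classes) for rel in RELATIONS}, aggr="mean")

    def forward(self, x_dict, edge_index_dict):
        x_dict = self.conv1(x_dict, edge_index_dict)
        x_dict = {key: h.relu() for key, h in x_dict.items()}
        return self.conv2(x_dict, edge_index_dict)["paper"]
```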
### Datasets
Our experiments are conducted on two datasets: the Open Graph Benchmark (OGB) ogbn-arxiv dataset and the PubMed diabetes dataset.
The OGB ogbn-arxiv dataset consists of 169,343 Computer Science papers from arXiv, hand-labeled into 40 subject areas by paper authors and arXiv moderators, with 1,166,243 reference links (Hu et al., 2020). Node features are constructed from textual information by averaging the embeddings of words (which are generated with the Skip-Gram model) in the articles' titles and abstracts. The dataset provides the mapping used between papers' node IDs and their original MAG IDs, which can be used to retrieve additional metadata.
The PubMed diabetes dataset consists of 19,717 papers from the National Library of Medicine's (NLM) PubMed database, labeled into one of three categories: "Diabetes Mellitus, Experimental," "Diabetes Mellitus Type 1," and "Diabetes Mellitus Type 2," with 44,338 reference links (Sen et al., 2008). Bag-of-words vectors from a dictionary of 500 unique words are provided as node features. Similarly, the papers' original PubMed IDs can be used to fetch relevant metadata.
### Data Augmentation
We used a July-2020 snapshot of the complete Microsoft Academic Graph (MAG) index (240M papers) - since MAG (and the associated API) was later discontinued - to obtain additional metadata (Zhang et al., 2022). Potential indicators
of paper relatedness include: authors, venue, and fields of study. Other metadata (DOI, volume, page numbers, etc.) are not useful for our purposes. Hence, we "heterogenize" the dataset with the following additional edge types (in addition to the References edges):
* (Co)-Authorship: Two papers are connected if they share an author, with a corresponding edge weight indicating the number of shared authors. This is based on the assumption that a given author tends to perform research on similar disciplines.
* Venue: Two papers are connected if they were published at the same venue (conference or journal), assuming that specific conferences contribute to relevant research areas.
* Fields of Study (FoS): Two papers are connected if they share at least one field, with an edge weight based on the number of shared fields. Fields of study, e.g. "computer science," "neural networks," etc. are automatically assigned with an associated confidence score (which we do not use), and each paper can have multiple fields of study, making them functionally similar to keywords.
An unprocessed version of the PubMed dataset preserving the original paper IDs was used [Namata et al., 2012].2 A January-2023 snapshot of the complete PubMed citation database (35M papers) was accessed for additional metadata. Relevant features include: title, abstract, authors, published journal (indicated by unique NLM journal IDs), and Medical Subject Headings (MeSH®). The latter is an NLM-controlled hierarchical vocabulary used to characterize biomedical article content. As with the Fields of Study, they are functionally similar to curated keywords. As before, we use three additional edge types:
Footnote 2: This version of the dataset is hosted by the LINQS Statistical Relational Learning Group. The 2023 annual baseline on the NLM FTP server is accessed to retrieve metadata. All files were downloaded locally and metadata of matching IDs were extracted (19,716 records matched, 1 missing).
* (Co)-Authorship: Two papers are connected if they share an author, with a corresponding edge weight indicating the number of shared authors. Unlike MAG, PubMed Central does not provide unique identifiers for authors, so exact author names are used, which can lead to some ambiguity in a minority of cases, e.g. two distinct authors with the same name.
* Journal: Two papers are connected if they were published in the same journal. The intuition is that journals publish papers on similar topics.
* MeSH: Two papers are connected if they share at least one MeSH subject heading, with an edge weight based on the number of shared subjects.
Since the ogbn-arxiv Venue and FoS and the PubMed MeSH relationships result in massive edge lists, posing out-of-memory issues on the utilized hardware, we only create edges between up to \(k\) nodes per unique venue/field/heading, where \(k\) is the mean number of papers per venue/field/heading, in order to reduce the subgraphs' sizes. All edge weights are normalized with a logistic sigmoid; a sketch of this construction is given below.
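The following is an illustrative sketch (not the repository's exact code) of how such shared-attribute edges might be built and weighted: group papers by attribute value, cap each group at \(k\) members, connect pairs, count shared attributes, and squash counts with a logistic sigmoid.

```python
import itertools, math
from collections import defaultdict

def shared_attribute_edges(paper_attrs, k=None):
    """paper_attrs: dict paper_id -> set of attribute values
    (e.g. author IDs). Returns {(u, v): normalized_weight}."""
    groups = defaultdict(list)
    for pid, attrs in paper_attrs.items():
        for a in attrs:
            groups[a].append(pid)
    if k is None:  # cap at the mean group size, as described above
        k = max(1, round(sum(len(g) for g in groups.values()) / len(groups)))
    counts = defaultdict(int)
    for members in groups.values():
        for u, v in itertools.combinations(sorted(members[:k]), 2):
            counts[(u, v)] += 1  # number of shared attributes
    # normalize edge weights with a logistic sigmoid
    return {e: 1.0 / (1.0 + math.exp(-c)) for e, c in counts.items()}
```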
In a traditional citation network, the links are typically _directed_, but in our experiments, they are _undirected_ to strengthen the connections of communities in the graph. The graph includes only one node type, "paper." Other approaches, notably in the citation recommendation domain, leverage node types to represent authors and journals [Guo et al., 2017]. However, this work strictly concerns relationships between papers and not between papers and other entities, in order to preserve the homogeneous problem setting. Practically, the resultant graph would contain too many nodes, and the available features and metadata are insufficient to generate informative representations of other node types, limiting their usefulness in the feature aggregation step.
For textual node feature representation, embeddings of the concatenated paper titles and abstracts are inferred using SciBERT [Beltagy et al., 2019]; this is inspired by the work of Cohan et al. [2020], in which SciBERT was pretrained on a citation graph and used to generate high-quality document-level embeddings for node classification purposes. Here, we utilize the base model (scibert-scivocab-uncased) without additional fine-tuning.
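For illustration, embeddings of this kind can be inferred with the Hugging Face transformers library as follows; mean pooling over tokens is our choice here, and other pooling schemes are equally plausible.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
enc_model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased").eval()

@torch.no_grad()
def embed(title, abstract):
    """Embed a paper from its concatenated title and abstract."""
    enc = tok(title + " " + abstract, truncation=True,
              max_length=512, return_tensors="pt")
    hidden = enc_model(**enc).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)          # mean-pooled 768-d vector
```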
### Subgraph Properties
Some insights on the characteristics of the defined subgraphs can be derived from Table 1, which lists the following: number of nodes and edges (post-conversion to undirected) in the largest connected component (LCC), total number of edges, average degree, network average clustering coefficient [Schank and Wagner, 2005], and edge homophily ratio (fraction of edges connecting nodes with the same label) [Ma et al., 2022]. While the References graphs do not exhibit the tight clustering typical of real-world information networks, the strong signal of relatedness in the edges has
nonetheless ensured their compatibility with message passing GNN paradigms (Wu et al., 2019a). This relatedness is also evident in the Authorship graphs, and the high level of clustering confirms the initial hypothesis that researchers co-author papers within similar topics. The topic-based relationships (FoS and MeSH) include many edges formed between papers sharing generic keywords, e.g. "computer science," leading to rather average homophily. The publication source subgraphs (Venue and Journal) consist of isolated fully-connected clusters per unique source, with no inter-cluster connections, as each paper belongs to only one journal or venue. As with the topic-based relationships, the research area covered by a given publication conference or journal can be quite broad with respect to the paper labels.
Figure 2 shows the degree distribution of all edge type subgraphs in both datasets, which gives a clear view of the subgraphs' structures when interpreted alongside the above metrics. The high frequency of large node degrees in the PubMed Journal subgraph corresponds to large journals; the size of the LCC (2,213) is the number of papers in the largest journal. While not visible for the ogbn-arxiv Venue subgraph due to the aforementioned sampling, a similar distribution would occur for large venues if all possible edges had been included. In contrast, the lower occurrence of high-degree nodes and low clustering in the References subgraphs of both datasets indicates greater average _distance_ across the LCC compared to the other subgraphs; such a structure stands to benefit the most from the multi-hop neighborhood
| Dataset | Subgraph | Nodes in LCC | Edges in LCC | Edges total | Avg. degree | Avg. clustering coeff. | Homophily |
|---|---|---|---|---|---|---|---|
| ogbn-arxiv | References | 169,343 | 2,315,598 | 2,315,598 | 13.7 | 0.247 | 0.654 |
| ogbn-arxiv | Authorship | 145,973 | 6,697,998 | 6,749,335 | 39.9 | 0.702 | 0.580 |
| ogbn-arxiv | Venue | 63 | 3,906 | 600,930 | 3.5 | 0.112 | 0.077 |
| ogbn-arxiv | FoS | 144,529 | 8,279,492 | 8,279,687 | 48.9 | 0.505 | 0.319 |
| PubMed | References | 19,716 | 88,649 | 88,649 | 4.5 | 0.057 | 0.802 |
| PubMed | Authorship | 17,683 | 729,468 | 731,376 | 37.1 | 0.623 | 0.721 |
| PubMed | Journal | 2,213 | 4,895,156 | 11,426,930 | 579.6 | 0.940 | 0.414 |
| PubMed | MeSH | 18,345 | 1,578,526 | 1,578,530 | 80.1 | 0.456 | 0.550 |

Table 1: Properties of constructed subgraphs. The References subgraphs are the only ones without isolated nodes. The network average clustering coefficient computation accounts for isolated nodes (treated as having "zero" local clustering), hence the value for sparser subgraphs, e.g. ogbn-arxiv Venue, is notably lower than intuitively expected (it would be 1 in the absence of isolated nodes).
Figure 2: Degree distribution, i.e. frequency of each degree value, of all subgraphs for ogbn-arxiv (left) and PubMed (right), plotted on a log-log scale. Points indicate the unique degree values.
feature aggregation performed by GNNs. Relative to the References, the Authorship and topic-based subgraphs (FoS and MeSH) exhibit increased skewness in the distribution and higher average clustering, which indicates the presence of more (near-)cliques, i.e. subsections of the graph wherein (almost) any two papers share an author or topic. Hence, these subgraphs bear the closest structural resemblance to small-world networks (Watts and Strogatz, 1998). The impact of these degree distributions on classification performance is further investigated in section 3.1.
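The metrics in Table 1 can be reproduced along the following lines; this is our sketch assuming a PyG `Data` object per subgraph, not the repository's exact code.

```python
import networkx as nx
from torch_geometric.utils import homophily, to_networkx

def subgraph_metrics(data):
    """Compute degree, clustering, and homophily for one subgraph.
    `data` holds `edge_index` and node labels `y`."""
    G = to_networkx(data, to_undirected=True)
    return {
        "avg_degree": 2 * G.number_of_edges() / G.number_of_nodes(),
        # count_zeros=True treats isolated nodes as zero local clustering
        "avg_clustering": nx.average_clustering(G, count_zeros=True),
        # fraction of edges connecting same-label endpoints
        "edge_homophily": homophily(data.edge_index, data.y, method="edge"),
    }
```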
## 3 Experiments and Results
We evaluate model performance on the task of _fully supervised transductive node classification_. The metric is multi-class accuracy on the test set. The proposed data preparation scheme is tested with several GNN architectures commonly deployed in benchmarks. We consider two GCN setups (a base version and one with a jumping knowledge module using concatenation as the aggregation scheme), as well as GraphSAGE (Kipf and Welling, 2017; Xu et al., 2018; Hamilton et al., 2017). We also run experiments with the simplified graph convolutional operator (SGC) (Wu et al., 2019). The increased graph footprint can lead to scalability concerns, hence the performance of such lightweight and parameter-efficient methods is of interest.
For ogbn-arxiv, the provided time-based split is used: train on papers published until 2017, validate on those published in 2018, test on those published since 2019. For PubMed diabetes, nodes of each class are randomly split 60%/20%/20% into training, validation, and test sets for each run (as performed by Pei et al. (2020) and Chen et al. (2020)). Ablation experiments are also performed to examine the impact of the different edge types (averaged across 3 runs) and to identify the optimal configuration for both datasets, on which we then report final results (averaged across 10 runs). Experiments were conducted on a g4dn.2xlarge EC2 instance (32 GB RAM, 1 NVIDIA Tesla T4 16 GB VRAM). Models are trained with negative log-likelihood loss and early stopping based on validation accuracy (patience of 50 epochs, with an upper limit of 500 epochs). We also scale down the learning rate as the validation loss plateaus; a sketch of this training setup follows. The SciBERT embeddings are pre-computed using multi-GPU distributed inference.
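A minimal sketch of this training regime (our illustrative code, assuming a PyG `HeteroData` object with masks and labels on the "paper" node type):

```python
import torch
import torch.nn.functional as F

def train(model, data, max_epochs=500, patience=50):
    """NLL loss, early stopping on validation accuracy, and LR
    reduction when the validation loss plateaus."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.001)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min")
    paper, y = data["paper"], data["paper"].y
    best_acc, best_state, wait = -1.0, None, 0
    for epoch in range(max_epochs):
        model.train(); opt.zero_grad()
        out = F.log_softmax(model(data.x_dict, data.edge_index_dict), dim=-1)
        F.nll_loss(out[paper.train_mask], y[paper.train_mask]).backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            out = F.log_softmax(model(data.x_dict, data.edge_index_dict), dim=-1)
            val_loss = F.nll_loss(out[paper.val_mask], y[paper.val_mask])
            acc = (out[paper.val_mask].argmax(-1)
                   == y[paper.val_mask]).float().mean().item()
        sched.step(val_loss)  # scale down LR on plateauing validation loss
        if acc > best_acc:
            best_acc, best_state, wait = acc, model.state_dict(), 0
        elif (wait := wait + 1) >= patience:
            break  # early stopping
    model.load_state_dict(best_state)
```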
### Ablation Study
Ablation results for both datasets are presented in Table 2, separated by node embedding type. First, all possible homogeneous subgraphs are inspected, as this is the conventional input data for this task (see the first 4 rows). The best performance is achieved on the References graphs, followed by the Authorship graphs. Then we build upon the References graph by adding different combinations of other subgraphs. The results demonstrate that transitioning to edge-heterogeneous graphs can yield up to a 2.87% performance improvement on ogbn-arxiv and 2.05% on PubMed (the difference between the best homogeneous and the best heterogeneous configurations). These results were obtained with a 2-layer GCN base, using the following fixed hyperparameter values: optimizer weight decay of 0.001 and initial learning rate of 0.01, hidden layer dropout probability of 0.5, and hidden feature dimensionality of 128 (ogbn-arxiv) or 64 (PubMed).
Cross-checking with the metrics in Table 1 implies that the improvements from edge-heterogeneity roughly correspond to the edge homophily ratio of the utilized subgraphs, as strong homophily is implicitly assumed by the neighborhood aggregation mechanism of GCN-based models. Consequently, their performance can be erratic and unpredictable on graphs with comparatively low homophily (Kipf and Welling, 2017; Ma et al., 2022). Since the R-GCN transformation aggregates neighborhoods from input subgraphs with equal weighting, including a comparatively noisy subgraph (like ogbn-arxiv Venue) can worsen predictive performance. Changing the R-GCN aggregation operator, e.g. from mean to concatenation, does not alleviate this. This study suggests publication sources do not yield beneficial subgraphs under their current definition; while the aforementioned tight clustering could provide a strong signal for classification given sufficient homophily, any potential benefit is absent because these clusters are noisy. The topic-based subgraphs are more structurally preferable, but noisy edges (from keywords tied to concepts at a higher level than the paper labels) reduce their usefulness in classification. In all experimental settings, the largest gain from heterogeneity comes from the co-authorship graph, often outperforming configurations that use more subgraphs. These trends are expected, given the characteristics discussed in section 2.3.
Replacing the PubMed bag-of-words vectors with SciBERT embeddings does not improve raw accuracy in most cases. However, the increased feature dimensionality of the SciBERT embeddings does stabilize convergence behavior in configurations using multiple subgraphs. A paper might possess only a few non-zero feature dimensions when using bag-of-words; combined with the additional feature averaging step from the R-GCN transformation, the risk of oversmoothing is increased even in shallow 2-layer networks in edge-heterogeneous configurations (note, however, that the tested single-layer models underfit and thus do not improve performance). In accordance with the findings of Chen et al. (2020), these hypothesized oversmoothing effects are more pronounced when using graphs with high average degree, i.e. the publication source and topic-based subgraphs; nodes with high degree aggregate more information from their neighbors, increasing the likelihood of homogenization as network depth increases.
### Optimal Configuration
Results with the optimal configuration identified from the ablation study on ogbn-arxiv and PubMed are listed in Tables 3 and 4, respectively. Note that the actual performance gains may vary slightly, since the third-party benchmarks used _directed_ References graphs and differently-tuned hyperparameters.
The results demonstrate that the additional structural information provided by edge heterogeneity consistently improves final performance of a variety of hetero-transformed GNN frameworks compared to their homogeneous counterparts on both datasets, when making optimal subgraph choices (though, suboptimal choices can still situationally improve performance). These improvements are independent of the tested textual embedding methods, and can occur even when the added subgraphs possess suboptimal graph properties, e.g. lower edge homophily ratio and presence of isolated nodes, compared to the starting References graph. SGC with ogbn-arxiv shows the strongest improvement over the baseline. Most likely, the linear classifier relies less on graph structures, and hence benefits more from the deeper textual semantics captured by SciBERT. Notably, the best results are competitive with the SOTA, while operating on a limited compute budget and low level of complexity (simple 2-layer GNN model pipelines with comparatively few tunable parameters). On ogbn-arxiv, we achieve a top-15 result with the GCN backbone, being the highest-scoring solution with sub-1 million parameters. On PubMed with the aforementioned splitting strategy, we closely trail the best
| References | Authorship | Venue / Journal | FoS / MeSH | ogbn-arxiv Skip-Gram | ogbn-arxiv SciBERT | PubMed Bag-of-Words | PubMed SciBERT |
|:-:|:-:|:-:|:-:|---|---|---|---|
| ✓ | | | | 0.6858 ± 0.0008 | 0.7177 ± 0.0024 | 0.8646 ± 0.0094 | 0.8635 ± 0.0043 |
| | ✓ | | | 0.6099 ± 0.0008 | 0.6456 ± 0.0009 | 0.8026 ± 0.0054 | 0.7921 ± 0.0121 |
| | | ✓ | | 0.4665 ± 0.0003 | 0.5998 ± 0.0043 | 0.5440 ± 0.0357* | 0.6137 ± 0.0090 |
| | | | ✓ | 0.4828 ± 0.0088 | 0.5270 ± 0.0043 | 0.7499 ± 0.0047 | 0.7274 ± 0.0158 |
| ✓ | ✓ | | | 0.7079 ± 0.0023 | 0.7330 ± 0.0010 | **0.8851 ± 0.0050** | **0.8747 ± 0.0055** |
| ✓ | | ✓ | | 0.6795 ± 0.0035 | 0.7136 ± 0.0036 | 0.8745 ± 0.0103 | 0.8661 ± 0.0076 |
| ✓ | | | ✓ | 0.6914 ± 0.0007 | 0.7221 ± 0.0002 | 0.8737 ± 0.0106 | 0.8643 ± 0.0091 |
| ✓ | ✓ | ✓ | | 0.6995 ± 0.0009 | 0.7334 ± 0.0008 | 0.8430 ± 0.0389* | 0.8696 ± 0.0046 |
| ✓ | ✓ | | ✓ | **0.7145 ± 0.0024** | **0.7383 ± 0.0003** | 0.8301 ± 0.1035* | 0.8724 ± 0.0050 |
| ✓ | | ✓ | ✓ | 0.6828 ± 0.0015 | 0.7189 ± 0.0001 | 0.8306 ± 0.0563* | 0.8656 ± 0.0104 |
| ✓ | ✓ | ✓ | ✓ | 0.7049 ± 0.0011 | 0.7327 ± 0.0004 | 0.8563 ± 0.0120* | 0.8726 ± 0.0038 |

\* Indicates (significant) oversmoothing.

Table 2: Ablation study for both datasets; 3-run average test accuracy with a 2-layer GCN. The best results are highlighted in bold. Note the column differences between datasets (Venue, FoS, and Skip-Gram for ogbn-arxiv; Journal, MeSH, and Bag-of-Words for PubMed).
| Model | Validation Accuracy | Test Accuracy | # Params | Baseline Accuracy | Improvement |
|---|---|---|---|---|---|
| GCN | 0.7586 ± 0.0012 | 0.7461 ± 0.0006 | 621,944 | 0.7174 ± 0.0029 | +2.87% |
| GCN+JK | 0.7629 ± 0.0007 | 0.7472 ± 0.0024 | 809,512 | 0.7219 ± 0.0021 | +2.53% |
| SAGE | 0.7605 ± 0.0007 | 0.7461 ± 0.0013 | 1,242,488 | 0.7149 ± 0.0027 | +3.12% |
| SGC | 0.7515 ± 0.0005 | 0.7419 ± 0.0004 | 92,280 | 0.6855 ± 0.0002 | +5.64% |

\* [https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-arxiv)

Table 3: Results on the best ogbn-arxiv configuration (References, Authorship, and Fields of Study subgraphs, with SciBERT embeddings), with hyperparameters described in Appendix 4. The baseline results on the unmodified dataset and the improvement over the baseline (test acc. minus baseline acc.) are also displayed. For GCN, GCN+JK, and SAGE, these are taken from the official leaderboard.* The SGC baseline was obtained by using the (undirected) References subgraph and the provided Skip-Gram features.
performance reported by GCNII (90.30%) and Geom-GCN (90.05%) (Chen et al., 2020; Pei et al., 2020), by using a GraphSAGE backbone with just the added co-authorship subgraph input (89.88%).
## 4 Conclusions and Future Work
In this paper, we propose a data transformation methodology leveraging metadata retrieved from citation databases to create enriched edge-heterogeneous graph representations based on various additional signals of document relatedness: co-authorship, publication source, fields of study, and subject headings. We also test the substitution of default node features with SciBERT embeddings to capture higher-dimensional textual semantics. By nature, the methodology is GNN- and embedding-agnostic. Deploying optimal configurations of the transformed graph with a variety of simple GNN pipelines leads to consistent improvements over the starting data, and enables results on par with the SOTA in fully supervised node classification. Overall, the results show that our methodology can be an effective strategy to achieve respectable performance on datasets with readily-available article metadata, without necessitating complex GNN architectures and lengthy (pre-)training procedures.
As the methodology is compatible with any hetero-transformable GNN backbone and node embedding technique, we expect that deploying the transformed data with SOTA GNN frameworks, e.g. RevGAT by Li et al. (2021) on ogbn-arxiv, will lead to greater raw performance. However, the larger memory footprint of the graph may complicate the application of such frameworks.
Refining the edge type definitions, e.g. connecting papers that share at least two fields of study and/or removing "generic" fields applicable to a majority of papers in the set, can help de-noise and improve the properties of the respective subgraphs. A custom aggregation scheme could be implemented for the heterogeneous transformation depending on individual subgraph properties, such as a weighted average based on some metric of subgraph "quality," e.g. homophily. To mitigate the increased risk of heterogeneity-induced oversmoothing, additional regularization techniques, e.g. DropEdge by Rong et al. (2020), could be considered. Finally, applying parameter-efficient fine-tuning (PEFT) techniques to the SciBERT model can improve feature separability and thus classification performance (Duan et al., 2023); the effectiveness of different transformer-based language models can also be investigated.
## Acknowledgement
The authors would like to thank prof. Paul Groth for his supervision and consultation throughout the project.
|
2308.16707 | Causal Analysis of First-Year Course Approval Delays in an Engineering
Major Through Inference Techniques | The study addresses the problem of delays in the approval of first-year
courses in the Civil Engineering Major at the National University of Tucumán,
Argentina. Students take an average of 5 years to pass these subjects. Using
the DoWhy and Causal Discovery Toolbox tools, we looked to identify the
underlying causes of these delays. The analysis revealed that the regulatory
structure of the program and the evaluation methods play a crucial role in this
delay. Specifically, the accumulation of regular subjects without passing a
final exam was identified as a key factor. These findings can guide
interventions to improve student success rates and the effectiveness of the
education system in general. | Hugo Roger Paz | 2023-08-31T13:17:51Z | http://arxiv.org/abs/2308.16707v1 | # Causal Analysis of First-Year Course Approval Delays in an Engineering Major Through Inference Techniques.
###### Abstract
The study addresses the problem of delays in the approval of first-year courses in the Civil Engineering Major at the National University of Tucuman, Argentina. Students take an average of 5 years to pass these subjects. Using the DoWhy and Causal Discovery Toolbox tools, we sought to identify the underlying causes of these delays. The analysis revealed that the regulatory structure of the program and the evaluation methods play a crucial role in this delay. Specifically, the accumulation of regular subjects without passing a final exam was identified as a key factor. These findings can guide interventions to improve student success rates and the effectiveness of the education system in general.
Causal Inference, Course Approval Delays, Engineering Education.
## 1 Introduction
This article delves into the complexities of delayed curricular progression in higher education by investigating its underlying causes in the context of the 2005 updated curriculum of the Civil Engineering program at the National University of Tucuman. The five-and-a-half-year study plan, divided into eleven semesters, is governed by a Correlative System that sets prerequisites for enrolling in courses. A prerequisite course can be "regularized" (the student has passed its partial evaluations or practical assignments) or "approved" (the student has passed its final exam, which implies that the course is complete).
In this study, we start from the hypothesis that the existing evaluation system contributes to the accumulation of subjects that are "regular" but not passed. We maintain that this phenomenon arises from the lack of time available for adequate preparation for final exams, which, in turn, causes delays in the course approval process. Our main goal is to demonstrate this hypothesis empirically, through the application of rigorous causal inference techniques. Through this approach, we seek to shed light on the causal connections between the evaluation system, the accumulation of pending subjects, and the delays in curricular progression.
As highlighted by Pearl (2009), causal inference allows the identification of causal relationships beyond mere correlations. The emphasis on causal inference arises from the need to uncover the true causal factors that drive delays in curricular progression. "Observational research findings may be inconsistent or consistent but are unlikely to reflect true cause-and-effect relationships" (Hammerton and Munafo, 2021).
As Oloriz et al. conclude in a paper studying the relationship between the academic performance of entrants in engineering Majors and the dropout of university studies, "there is an important correlation between academic performance in the first quarter... and the dropout that occurs during the second, third and fourth quarters" (Oloriz et al., 2007, p. 1); it is therefore necessary to work on reducing academic failure, since it leads, inexorably, to the abandonment of higher education.
## 2.- Method
### Population and Sample
The data under analysis come from the academic trajectories of 1,615 students enrolled in the Civil Engineering Major, covering a diverse spectrum of degree progress, from first-year students to those nearing graduation, as well as individuals who have dropped out and those who have completed their studies. These data were obtained through the Student and Administrative Management System (SIGEA), which collects and centralizes relevant information on the academic progress and management of students in the program.
Within the framework of this research, the academic performance of university students has been examined over the period from 2004 to 2019. To ensure the accuracy of the results, students who had passed subjects through an equivalency system (due to changes in their study plan, academic trajectory, or educational institution) were excluded, since they could have entered the program with subjects already passed, which could affect their progress and performance compared to their peers. After this filtering, 1,343 academic histories were selected out of a total of 1,615.
In the present context, students who completed their studies within the period under evaluation (2005-2019) are classified as "graduates". In turn, students who maintained their enrollment in the academic program until 2019 without completing their training are considered "permanent".
Ultimately, the situation of "dropout" is defined for those students who did not register any enrollment in 2020 and had not completed their educational process. In adopting this approach, we follow the proposal of Gonzalez Fiegehen (2007) for students who enrolled in 2019, who represent only 5% of the total analyzed (Paz, 2022). For the rest of the students, a less strict criterion is used, including those who re-enroll in the degree and resume their studies after a period of inactivity. Time lapses were computed as the difference between the last activity documented in the SIGEA system and the date of admission to the academic program.
### Analysis technique
In the world of data analysis, understanding causal relationships is essential to making informed decisions and understanding the effects of various actions. In this context, DoWhy stands as a tool that uses solid theoretical foundations to conduct advanced causal analysis, unraveling hidden connections in data and providing deep insight into cause-and-effect relationships.
DoWhy is based on the theory of design of experiments and causal inference. Its approach rests on the concept of "intervention", where variables are manipulated to see how they affect other variables in a system.
Once a causal effect has been identified, different estimation methods compatible with the identification strategy can be adopted. To estimate the average causal effect, DoWhy supports the following methods: distance-based matching (Chen et al., 2007), propensity-based stratification (Desai et al., 2017), propensity score matching (Caliendo & Kopeinig, 2008; Rosenbaum & Rubin, 1983), linear regression, generalized linear models (including logistic regression) (Chernozhukov et al., 2022), binary instrument/Wald estimator (Jiang et al., 2023), regression discontinuity (Imbens & Lemieux, 2008), and two-stage linear regression (Angrist & Imbens, 1995).
DoWhy implements a wide range of causal inference techniques, including estimation of the Average Treatment Effect (ATE) (Crump et al., 2008) and other measures of causal impact. The tool also allows the estimation of confidence intervals and tests of statistical significance to support the results obtained. The analysis sequence entails the following steps:
A. Definition of the Causal Graph: The first step is to model the causal graph, i.e., the relationships between the variables of interest and how they influence each other. This helps to visualize the system and understand the underlying connections. A causal graph models the causal relationships, or "cause-effect relationships", present in a system or problem domain. In DoWhy, the causal graph is required to be a directed acyclic graph (DAG) where an edge X\(\rightarrow\)Y implies that X causes Y. Statistically, a causal graph encodes conditional independence relationships between variables.
One of the essential elements in the causal analysis process with DoWhy is the definition of the causal graph, which captures the interactions and relationships between the variables of interest. However, this step is not an isolated task; rather, it requires deep prior knowledge about the problem at hand and the possible causal relationships, based on experience and intuition.

The process of defining a causal graph in DoWhy is like creating a map that guides the analysis towards understanding the underlying causal relationships. This map cannot be generated solely by algorithms or automated approaches, as it requires a deep understanding of the specific domain. This is where prior knowledge comes into play.

Prior knowledge is the amalgamation of experience, intuition, and informed assumptions. Those who are familiar with the problem at hand can identify the likely influencing variables and the likely directions of causal relationships. Some assumptions may be based on earlier research, fundamental theories, or even simple observations.

The definition of the causal graph is based on the idea that many relationships in the real world can be interpreted as intervenable causality (IC) graphs. An IC graph is a visual representation of the causal relationships between variables, with arrows showing direct influences. In this context, modeling the causal graph in DoWhy involves mapping the cause-and-effect relationships of the problem in a graphical format.

The graph definition process is not merely technical; it is an art backed by experience. Domain experts are those who can bring valuable ideas and insight to the task. Their ability to find non-obvious connections and capture the subtleties of causal interactions enriches the analysis and ensures that the graph is a faithful representation of reality.

In short, the definition of the causal graph in DoWhy is a fundamental part of the causal analysis process. It requires input from prior knowledge based on experience and informed assumptions. This collaborative approach between human expertise and technological tools ensures that causal analysis is a rigorous, results-oriented process in search of meaningful connections between variables.
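As a minimal illustration (not the study's full specification; the confounder set here is deliberately abbreviated), a causal graph and model can be declared in DoWhy as follows, using the treatment and outcome variables introduced later in section 3.1:

```python
import pandas as pd
from dowhy import CausalModel

# df: one row per student; the file name is a hypothetical placeholder.
df = pd.read_csv("academic_histories.csv")

# DOT-format causal graph; only one confounder shown for brevity.
graph = """digraph {
    MaxRegAcum -> MaxRegAcumMayor6;
    MaxRegAcum -> Tiempo_Aprobacion_M1_2_Mayor2;
    MaxRegAcumMayor6 -> Tiempo_Aprobacion_M1_2_Mayor2;
}"""

model = CausalModel(
    data=df,
    treatment="MaxRegAcumMayor6",              # accumulated regulars > 6
    outcome="Tiempo_Aprobacion_M1_2_Mayor2",   # first-year approval > 2 years
    graph=graph,
)
```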
B. Identification and Estimation of the Causal Effect: Once the graph is defined, DoWhy uses observational data and causal inference methods to estimate the effect of the interventions on the variables of interest. This involves the selection of variables, the calculation of propensity scores if necessary, and the application of adjustment techniques.

Figure 1: Structure of the Causal Model
Once the causal graph is defined, the treatment (independent variable) and the outcome (dependent variable) to be investigated are selected in order to estimate the causal effect. DoWhy is based on the concept of intervention, where the manipulation of the treatment is simulated to observe its impact on the result.

In many cases, observational data may be subject to bias due to self-selection into treatments. DoWhy uses the "propensity score matching" method to estimate the probability of receiving the treatment (the propensity score) based on other observed variables. This helps create comparable groups and adjust for potential biases.

DoWhy offers a variety of causal effect estimation methods, such as "Matching", "Regression Discontinuity", and "Instrumental Variables", among others. These methods are applied to adjusted and controlled observational data to assess how the outcome changes when the treatment is manipulated.

After estimating the causal effect, DoWhy allows you to assess the statistical significance of the results. This is carried out by estimating confidence intervals and performing hypothesis tests to determine whether the observed effect is statistically significant.

Once the results are obtained, DoWhy facilitates the interpretation of the estimated causal effects. The values obtained, together with confidence intervals and significance tests, provide valuable information about how the treatment affects the outcome and whether this relationship is plausible from a causal perspective.

In this way, DoWhy performs a comprehensive process of causal effect identification and estimation, combining causal theory with advanced statistical methods. Through the definition of the causal graph, the selection of variables, the use of the propensity score, and the application of estimation methods, DoWhy sheds light on causal relationships in observational data.
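Continuing the sketch above, identification and estimation with the two backdoor estimators later used in this study could look as follows; the `method_params` for distance matching are illustrative defaults, not the study's reported settings.

```python
# Identify the estimand encoded by the graph, then estimate the ATE
# with both estimators.
estimand = model.identify_effect(proceed_when_unidentifiable=True)

psm_estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching",
    target_units="ate")
dm_estimate = model.estimate_effect(
    estimand, method_name="backdoor.distance_matching",
    target_units="ate",
    method_params={"distance_metric": "minkowski", "p": 2})

print(psm_estimate.value, dm_estimate.value)
```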
C. Evaluation of the Validity of the Model: DoWhy allows you to assess the statistical significance of the results obtained, which provides an objective measure of the reliability of the estimated causal effects.
It should be noted that the causal part does not come from the data: it comes from the assumptions leading to the identification made in step A, and the data are used simply for statistical estimation. Therefore, it becomes essential to check whether our assumptions in the first step were correct. What happens when there is another common cause? What happens when the treatment itself was a placebo? To do this, three methods of refuting results are used:
**Method 1: Random common cause.** Adds randomly drawn covariates to the data and reruns the analysis to see whether or not the causal estimate changes. If our assumption was originally correct, the causal estimate should not change much.

**Method 2: Placebo treatment refuter.** Randomly assigns any covariate as the treatment and reruns the analysis. If our assumptions were correct, this new estimate should go to 0.

**Method 3: Data subset refuter.** Creates subsets of the data (similar to cross-validation) and checks whether the causal estimates vary across subsets. If our assumptions were correct, there should not be much variation.
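Assuming the `model`, `estimand`, and `psm_estimate` objects from the previous sketches, these checks might look as follows; the refuter names are DoWhy's built-in identifiers, while the loop itself is our illustrative wrapper.

```python
# Refutation checks corresponding to the methods above, plus the
# bootstrap refuter used in section 3.3; the same calls can be
# repeated for the distance matching estimate.
for method in ("random_common_cause",
               "placebo_treatment_refuter",
               "data_subset_refuter",
               "bootstrap_refuter"):
    print(model.refute_estimate(estimand, psm_estimate, method_name=method))
```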
The aforementioned techniques were applied through code developed in the Python language (Van Rossum & Drake Jr, 1995), using the DoWhy (Sharma & Kiciman, 2020) and Causal Discovery Toolbox (Kalainathan et al., 2020) libraries.
## 3.- Results
### Definition of the variables of the problem
In this study, we start from the hypothesis that the existing evaluation system contributes to the accumulation of subjects that are "regular" but not passed, which ultimately produces the causal effect of delaying the course approval process. In this work, the analysis focuses on the delay that students experience in passing the first-year courses.
From this point of view, we can define the treatment variable T (Accumulation of Regular Subjects) and the outcome variable Y (the time it takes students to pass the subjects corresponding to the first year of the degree). The causal relationship between T and Y involves other variables, or "confounders", which affect both the treatment and the outcome, according to the following graph:
Based on this, the questions that arise are the following:
What is the impact of the accumulation of regular subjects on the time it takes students to pass all the subjects of the first year of the degree?
And the equivalent counterfactual question is:
If students accumulated fewer regular subjects than the sample average, what is the probability that the approval time would decrease (and vice versa)? In formal language, we are interested in the average treatment effect on the students (ATE).
For the analysis, the academic histories described in section 2.1 were used, from which the following data were extracted for the 1,343 students:

* Cohort
* Gender
* Age
* Time in Major
* Approved Activities
* Average Grade
* Total Number Coursed
* Number Re-Coursed
* Regular Number
* Free Number
* Total Re-Coursed over Total Coursed
* Total Regular over Total Coursed
* Number of Exams
* Number of Promotions
* Number Approved
* Number of Failures
* Number of Absences
* Number of Promotions over Total Promotional Subjects
* Number of Promotions over Total Subjects
* Passed Exams over Total Exams
* Promoted Exams over Total Exams
* Failed Exams over Total Exams
* Absent Exams over Total Exams
* Has Promotions
* Has More Than 10 Free
* Has More Than 3 Failed
* Has Fewer Than 3 Failed
* Approval Time Modules 1-2 (M1 and M2 = first year of the degree)
* Approval Time M1-2 More Than 2 Years
* Maximum Accumulated Regular Subjects (MaxRegAcum)
* Accumulated Regular Subjects Greater Than 6 (MaxRegAcumMayor6)

Figure 2: Flow rate graph (DAG)
Regarding the treatment (Accumulated Regular Subjects) and outcome (Approval Time for Modules 1 and 2) variables, a binary encoding was applied to allow the execution of the analysis. The cut-offs (Accumulated Regular Subjects Greater Than 6 for the treatment, and Approval Time M1-2 Greater Than 2 Years for the outcome) were set at the average value of each variable over the total sample. In this way, the cases in both groups are balanced; otherwise, the implementation of the method produces inference errors (see Figures 3 and 4).
Figure 3: Histogram of Maximum Accumulated Regularities

Figure 4: Histogram of Approval Time for First-Year Subjects
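A minimal sketch of this mean-threshold binarization, assuming a pandas DataFrame with illustrative raw column names:

```python
import pandas as pd

df = pd.read_csv("academic_histories.csv")  # hypothetical file name

# Binarize treatment and outcome at the sample mean, as described above.
df["MaxRegAcumMayor6"] = (
    df["MaxRegAcum"] > df["MaxRegAcum"].mean()).astype(int)
df["Tiempo_Aprobacion_M1_2_Mayor2"] = (
    df["Tiempo_Aprobacion_M1_2"] > df["Tiempo_Aprobacion_M1_2"].mean()
).astype(int)
```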
### Definition of the Causal Graph
The generation of the causal graph in this study has been guided by a number of fundamental assumptions about the structure of the curriculum and the interactions between assessment activities, such as course regularization and approval. In this process, the treatment and outcome variables have been incorporated following the binary encoding detailed in the previous section. These essential variables were incorporated to model the interventions and the effects observed in the study context.

It should be noted that the causal graph presented here is a first approximation for the causality analysis. The relationships and connections between the variables have been delineated according to the available assumptions and expert knowledge. However, to confirm the validity and robustness of these causal relationships, refutation analyses will be performed to allow the causal hypotheses to be rigorously evaluated.

In addition, as part of the modeling process, all the variables considered relevant as possible "confounders" have been introduced. The inclusion of these variables is intended to improve the accuracy of the causal inference by controlling for potential sources of bias and confounding. This adjustment strategy seeks to ensure that the estimated effects are more accurately attributable to the treatment under study.

In summary, the process of generating the causal graph has involved making fundamental assumptions about the structure of the curriculum and the interactions between assessment activities. As the analysis progresses, refutation methods will be applied to confirm and strengthen the causal relationships proposed in this first graph. The inclusion of treatment, outcome, and "confounder" variables seeks to offer a robust and detailed representation of the underlying causal relationships in this educational context. The resulting graph can be seen in the following figure.
Figure 5: Directed Acyclic Causal Graph (DAG)
### Identification and Estimation of the Causal Effect
The Identification and Estimation of the Causal Effect was conducted by adopting specific methodological approaches, in line with the premises of scientific rigor and causal validity. In this study, two estimators, namely "backdoor.propensity_score_matching" and "backdoor.distance_matching", were employed in order to discern and quantify the causal effects under analysis. These estimators rely on the notions of "backdoor" adjustment and "matching" to address possible confounding paths and bias in the observational data [11, 12].
Within the framework of this methodology, the concept of "backdoor" refers to variables that open additional pathways between treatment and outcome, which could lead to confounding in the estimation of the causal effect [13]. To counteract this effect, the "propensity score matching" and "distance matching" estimators were applied. The first, based on the propensity score, seeks to achieve balance between the treatment and control groups in terms of the relevant observable characteristics. The second, using the distance between the covariates, performs a more precise and calibrated matching to achieve comparability of the groups [11, 12].
The choice of these estimators is justified by their ability to mitigate potential bias and possible confounding variables in observational data. The theory underlying these estimators is based on the idea of adequately controlling the variables that could intervene in the causal relationship between treatment and outcome, so that the causal effect can be isolated and estimated more accurately.
In summary, the implementation of Causal Effect Identification and Estimation in this study is based on the selection of specific estimators, "backdoor.propensity_score_matching" and "backdoor.distance_matching", which are supported by causal and methodological theory. These estimators address potential confounding pathways by rigorously controlling variables using the matching technique. The application of these methodological approaches advances towards a more solid and informed understanding of the causal effects in the studied context. The results can be seen below.
**propensity_score_matching**

```text
*** Causal Estimate ***

## Identified estimand
Estimand type: nonparametric-ate

### Estimand : 1
Estimand name: backdoor
Estimand expression:
d/d[MaxRegAcumMayor6] (E[Tiempo_Aprobacion_M1_2_Mayor2 | MaxRegAcum])
Estimand assumption 1, Unconfoundedness: If U -> {MaxRegAcumMayor6} and
U -> Tiempo_Aprobacion_M1_2_Mayor2, then
P(Tiempo_Aprobacion_M1_2_Mayor2 | MaxRegAcumMayor6, MaxRegAcum, U) =
P(Tiempo_Aprobacion_M1_2_Mayor2 | MaxRegAcumMayor6, MaxRegAcum)

## Realized estimand
b: Tiempo_Aprobacion_M1_2_Mayor2 ~ MaxRegAcumMayor6 + MaxRegAcum
Target units: ate

## Estimate
Mean value: 0.6963350785340314
Causal Estimate is 0.6963350785340314

Interpretation: increasing the treatment variable [MaxRegAcumMayor6] from
0 to 1 causes an increase of 0.6963350785340314 in the expected value of
the outcome [Tiempo_Aprobacion_M1_2_Mayor2], over the data
distribution/population represented by the dataset.
```

**distance_matching**

```text
*** Causal Estimate ***
(identified and realized estimands identical to the above)

## Estimate
Mean value: 0.6963350785340314
Causal Estimate is 0.6963350785340314

Interpretation: increasing the treatment variable [MaxRegAcumMayor6] from
0 to 1 causes an increase of 0.6963350785340314 in the expected value of
the outcome [Tiempo_Aprobacion_M1_2_Mayor2], over the data
distribution/population represented by the dataset.
```
### Evaluation of the Validity of the Model
The rigorous evaluation of the validity of the causal model deployed constitutes an essential stage in the analysis process, allowing the confirmation of the proposed causal relationships and strengthening the credibility of the results obtained. To achieve this, refuters based on the counterfactual approach and resampling methods have been used in this study.
Within the framework of this evaluation, two specific refuters were applied: the "random common cause" and the "Bootstrap Sample Dataset". These refuters, supported by solid theoretical foundations, seek to challenge and test the robustness of the causal conclusions derived from the model.
The "random common cause" refuter is based on the introduction of a random variable
as a possible common cause, to assess whether this variable can alter the estimated effects. This methodology seeks to verify if the model is sensitive to variables not considered in the original analysis and, therefore, evaluates the possible influence of factors not considered in the causal inference.(Fame & Montanari, 2022).
On the other hand, the "Bootstrap Sample Dataset" refuter operates by generating replicate data samples from the original sample (Farne & Montanari, 2022). This resampling technique makes it possible to assess the stability of the estimated causal effects under variations of the observational sample. By comparing the results obtained across the generated samples, the consistency and reliability of the causal conclusions are evaluated.
The results of the refutation evaluations are enlightening about the robustness and validity of the proposed causal model. In Refutations 1 and 3, where a random common cause is introduced, the estimated effects remain consistent and are not significantly altered. This suggests that the model is resistant to the inclusion of new variables and reinforces the coherence of the established causal relationships.
In Refutations 2 and 4, using the Bootstrap Sample Dataset refuter, a slight variation in the estimated causal effects is observed. However, this variation is within acceptable margins and does not compromise the robustness of the conclusions. These results reinforce the reliability and stability of the causal effects found in the analysis.
Taken together, the assessment of model validity through refuters and resampling methods reinforces confidence in the proposed causal relationships. These procedures provide rigorous confirmation of the robustness of the results and underscore the strength of the conclusions derived from the causal analysis. The results can be seen below.
```text
Refutation 1 (backdoor.propensity_score_matching)
Refute: Add a random common cause
Estimated effect: 0.5000000000000001
New effect:       0.5000000000000001
p-value: 1.0

Refutation 2 (backdoor.propensity_score_matching)
Refute: Bootstrap Sample Dataset
Estimated effect: 0.5000000000000001
New effect:       0.5054347826086957
p-value: 0.55

Refutation 3 (backdoor.distance_matching)
Refute: Add a random common cause
Estimated effect: 0.5000000000000001
New effect:       0.5000000000000001
p-value: 1.0

Refutation 4 (backdoor.distance_matching)
Refute: Bootstrap Sample Dataset
Estimated effect: 0.5000000000000001
New effect:       0.49663043478260865
p-value: 0.4399999999999995
```
## 4.- Discussion and Conclusions
The results derived from the application of the propensity score matching and distance matching estimators provide valuable information on the impact of increasing the treatment variable "MaxRegAcumMayor6" on the expected value of the outcome "Tiempo_Aprobacion_M1_2_Mayor2". These results reveal significant patterns that can have profound repercussions in the context of the Civil Engineering Major and educational decision-making. In this analysis, we have explored how the accumulation of more than 6 regular, not-passed courses relates to the passing time of the first-year courses.
The findings show that increasing the treatment variable from 0 to 1 is associated with a 0.696 increase in the expected value of the binarized approval-time outcome. In other words, students who accumulate more than 6 regular subjects without passing them have roughly a 70% probability that the approval time of the first-year Civil Engineering subjects will exceed 2 years. This result reveals a worrying connection between the accumulated number of unpassed subjects and the lengthening of the time necessary to complete the first year of the degree.
This impact has significant implications for delayed Major progression and the dropout rate. The high probability that students in this situation will experience a substantial delay in the achievement of academic goals can have negative effects on their motivation, commitment and perspective towards higher education. In addition, increasing the length of the first year may influence decisions to continue or drop out, potentially contributing to the dropout rate.
It is important to note that this analysis focused on the first year of the degree, but its implications can be extrapolated to the upper years. Future lines of research could refine the analysis by year and by subject. Examining how the pattern of accumulation of failed subjects relates to the length of each year and to specific subjects could provide more detailed and personalized information for educational decision-making.
In summary, the results obtained underline the importance of addressing the delay in the approval of subjects in the context of the Civil Engineering Major. The relationship found between the accumulated number of unpassed subjects and the first-year approval time raises significant questions about the design of student-support strategies and the optimization of the curricular structure. These results serve as a starting point for future research that may have a positive impact on student retention and the improvement of the academic experience.
On the other hand, the results obtained in this study support a concern that has been widely discussed in the academic literature over previous decades. Emblematic works such as Tinto's (1975) study of student dropout in higher education and the research of Bean and Metzner (1985) on student persistence had already identified the critical relevance of academic delays in the first years of a degree as a significant precursor to dropout.
These early contributions laid the foundation for understanding how obstacles to course completion in the early years can adversely affect the continuation of higher education. As argued by Terenzini and Pascarella (1991), lengthening the initial academic term can erode students' motivation and present challenges to their engagement and academic success.
Our results, which show a substantial increase in first-year passing time for students with a history of failed courses, are consistent with these previous findings. The persistence of this pattern in the context of the Civil Engineering Major highlights the continuing importance of addressing this academic issue.
Extrapolating our findings to subsequent years, although it requires future research, finds support in the perspective of Tinto's (1989) "integration theory". According to this theory, initial challenges in academic integration can influence later success in higher education. In this sense, our conclusions provide a solid base for further research that considers a detailed analysis by year and subject, thus expanding the understanding of how academic delay can exert a prolonged influence on student trajectories.
In summary, this study contributes to a constantly evolving stream of research that emphasizes the critical importance of addressing early college delays in relation to student dropout. By building on earlier research that established the links between delay and dropout, this analysis provides a specific empirical context and highlights the relevance of educational policies aimed at mitigating delay and promoting student persistence in higher education.
|
2308.00165 | Adversarially Robust Neural Legal Judgement Systems | Legal judgment prediction is the task of predicting the outcome of court
cases on a given text description of facts of cases. These tasks apply Natural
Language Processing (NLP) techniques to predict legal judgment results based on
facts. Recently, large-scale public datasets and NLP models have increased
research in areas related to legal judgment prediction systems. For such
systems to be practically helpful, they should be robust from adversarial
attacks. Previous works mainly focus on making a neural legal judgement system;
however, significantly less or no attention has been given to creating a robust
Legal Judgement Prediction(LJP) system. We implemented adversarial attacks on
early existing LJP systems and found that none of them could handle attacks. In
this work, we proposed an approach for making robust LJP systems. Extensive
experiments on three legal datasets show significant improvements in our
approach over the state-of-the-art LJP system in handling adversarial attacks.
To the best of our knowledge, we are the first to increase the robustness of
early-existing LJP systems. | Rohit Raj, V Susheela Devi | 2023-07-31T21:44:48Z | http://arxiv.org/abs/2308.00165v1 | # Adversarially Robust Neural Legal Judgement Systems
###### Abstract
Legal judgment prediction is the task of predicting the outcome of court cases from a given text description of the facts of the cases. These tasks apply Natural Language Processing (NLP) techniques to predict legal judgment results based on facts. Recently, large-scale public datasets and NLP models have increased research in areas related to legal judgment prediction systems. For such systems to be practically helpful, they should be robust to adversarial attacks. Previous works mainly focus on building a neural legal judgement system; however, significantly less or no attention has been given to creating a robust Legal Judgement Prediction (LJP) system. We implemented adversarial attacks on early existing LJP systems and found that none of them could handle attacks. In this work, we propose an approach for making robust LJP systems. Extensive experiments on three legal datasets show significant improvements of our approach over the state-of-the-art LJP system in handling adversarial attacks. To the best of our knowledge, we are the first to increase the robustness of early-existing LJP systems.
Keywords: Natural Language Processing · Legal Judgement Prediction · Robust Models
## 1 Introduction
Legal information is mainly in the form of text, so legal text processing is a growing area of research in NLP, covering tasks such as crime classification [10], judgment prediction [3], and summarization [6]. Highly populated countries like India have many pending legal cases (approximately 41 million). In Brazil, 332,000 cases are in progress in the financial domain alone [20]. This is due to multiple factors, including the unavailability of judges. Here, a legal judgment prediction system can help in several steps, such as finding relevant articles or the history of a case, deciding penalty terms, etc. Moreover, legal judgment prediction is critical, so a small error in the system may drastically affect judicial fairness.
Most researchers have focused on building LJP systems by training NLP models (LSTM, BERT [1], Legal-BERT [2]) on legal datasets. At the same time, very little or no attention has been given to the robustness of these models.
We summarise our contributions as follows:
1. We implemented adversarial attacks on existing baseline models after fine-tuning them on legal datasets and found that their performance decreased drastically.
2. We propose an adversarial training algorithm for building robust legal models.
3. We implemented training using data augmentation and adversarial training methods to improve the model's robustness.
## 2 Related Work
### Legal Judgement System
Earlier legal judgment prediction systems involved linear models like SVMs with bag-of-words feature representations. In recent years, neural network methods [3] have been used in the legal domain due to the availability of NLP models like RNNs and BERT [1].
Most researchers have used the BiGRU-att [3], HAN [3], BERT [1] and Hier-BERT [4] architectures to predict article violations on the ECtHR [5] dataset. Legal-BERT [2] is a domain-specific BERT pre-trained on approximately 11.5 GB of legal-document corpora and is used for legal judgement prediction. A number of other tasks, such as legal summarization [6], prior case retrieval [8], and legal QA [7], have also been introduced.
In legal judgment prediction, the model must predict the final decision based on case facts. Several datasets have been introduced for training, so that the model can learn specific terms (for example, 'restrictive covenant', 'promissory estoppel', 'tort' and 'novation') that are used in legal documents but rarely in general text. For example, ECtHR [5] is a multilabel dataset containing violated articles as labels, SCOTUS [5] contains cases of the American Supreme Court, and ILDC [9] contains cases of the Indian Supreme Court. All of these are English datasets; however, datasets in other languages have also been introduced, such as Chinese [10], Swedish [11], and Vietnamese [12].
### Adversarial Training
Several adversarial training methods have been explored to increase the robustness of NLP models. In adversarial training, the models are trained on the original dataset augmented with adversarial examples. These adversarial examples are generated by applying adversarial attacks to pre-trained models, such that the generated examples remain similar to the originals and an average human reader cannot distinguish them from natural ones. Several adversarial attack mechanisms are used in NLP, such as BERT-Attack [15], BAE [14], A2T [16], and TextFooler [13]. In these attacks, the attacker finds important words in the original text and replaces them with semantically similar words so that the predicted label of the original text changes, generating adversarial text that looks similar to the original.
### Why adversarial training?
To motivate the necessity of adversarial training, we implemented adversarial attacks on existing baseline models (BERT [1], Legal-BERT [2], RoBERTa [18]) to check their robustness. We found that the performance of these models decreased drastically, as they could not handle the adversarial attacks. We also implemented data augmentation using back-translation during training, but the models' performance did not improve much.
Legal judgment prediction is critical, so a slight variation in the input may affect judgment fairness. During deployment, if someone intentionally perturbs the input sequence, the prediction may change drastically. This is the main motivation for adversarial training.
## 3 Problem Formulation
Given a legal dataset containing a collection of legal documents, \(L=\{(X_{1},y_{1}),\ldots,(X_{N},y_{N})\}\), where \(X_{i}\) is a legal text extracted from a legal document and \(y_{i}\in\{1,2,\ldots,K\}\) is the label corresponding to that text. The length of each \(X_{i}\) is typically very large.
The task is to design an LJP model \(M(.)\) that can:
* Predict the correct class on legal documents, even those of large length.
* Perform correct predictions even if the data is perturbed. Let \(X^{\prime}\) be a perturbed text, perturbed either intentionally or by mistake; then \(M(X^{\prime})\to y\), where \(y\) is the correct label of that legal text.
## 4 Methods
In this section, we present our training workflow. We implemented three methods for training: 1) fine-tuning baseline models, 2) training baseline models with data augmentation, and 3) adversarial training by augmenting natural examples with adversarial examples. After each method, we tested the model's robustness with adversarial attacks.
### Fine tuning baseline model
In this approach, we take baseline models (BERT [1], Legal-BERT [2], RoBERTa [18], and a Hierarchical Version of BERT [4]; we use a modified version of Hier-BERT, denoted H-BERT) and fine-tune them on our downstream legal judgment prediction tasks. For BERT, Legal-BERT, and RoBERTa, we take the last 512 tokens of each input text for training, as this approach gave better results. For H-BERT, we divide the text into chunks of 510 tokens such that two consecutive chunks overlap each other; here RoBERTa is taken as the encoder (shown in Figure 2), as it gives the best result. We use cross-entropy as the loss function for updating the gradients and evaluate model performance using accuracy.
### Training using data-augmentation
In this approach, we first generate data using back-translation [24] and then augment the training data with it. The training algorithm is similar to Algorithm 2, with a data augmenter used in place of the adversarial example generator.
We use the transformer models implemented by HuggingFace [21] for back-translation: we first translate English to French and then translate the French back to English. We augment the newly generated data such that it does not contain any duplicate instances, and training is done as in approach 1.
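A minimal sketch of this augmentation step is given below; the Helsinki-NLP checkpoints are an assumption (the exact translation models are not specified here), and `train_texts` is a placeholder for the list of training samples.

```python
# Minimal sketch of back-translation augmentation (English -> French -> English).
from transformers import pipeline

en_to_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")
fr_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def back_translate(text: str) -> str:
    french = en_to_fr(text, max_length=512)[0]["translation_text"]
    return fr_to_en(french, max_length=512)[0]["translation_text"]

augmented = {back_translate(t) for t in train_texts}
augmented -= set(train_texts)  # keep only non-duplicate new instances
```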
### Adversarial Training
In this approach, we generate adversarial examples from the original legal datasets, augment the original data with these examples, and train the model on the new dataset, i.e., \(D_{new}=D_{nat}\cup D_{adv}\).
To generate an adversarial example from a text sample, we first find the importance score of each word in that sample using a greedy search with a word importance ranking mechanism [25], where the importance of a word is determined by how much a heuristic score changes when the word is deleted from the original input, i.e.,
\[I_{w_{i}}=\left\{\begin{array}{ll}M_{y}(X)-M_{y}(X/_{w_{i}}),&\text{if }M(X)=M(X/_{w_{i}})=y,\\ \left(M_{y}(X)-M_{y}(X/_{w_{i}})\right)+\left(M_{y^{\prime}}(X/_{w_{i}})-M_{y^{\prime}}(X)\right),&\text{if }M(X)=y,\ M(X/_{w_{i}})=y^{\prime}\\ &\text{and }y\neq y^{\prime}\end{array}\right. \tag{1}\]
Here we follow the deletion approach for finding word importance because we consider a common black-box setup, which is usually followed in real-world scenarios. We denote the sentence after deletion of word \(w_{i}\) as \(X/_{w_{i}}=\{w_{1},\ldots,w_{i-1},w_{i+1},\ldots,w_{n}\}\) and use \(M_{y}(\cdot)\) to denote the prediction score of the model for label \(y\). Here \(I_{w_{i}}\) denotes the importance score of word \(w_{i}\), as defined in Equation (1).
As shown in Algorithm 1, in lines 3-4, we select the top-\(k\) words according to the importance score from Equation (1) and generate \(m\) synonyms for each word using cosine similarity over counter-fitted word embeddings [23]. We then replace the original words with synonyms to construct an adversarial example \(X^{\prime}\). Further, to measure the similarity of an adversarial sample \(X^{\prime}\) to the original sample \(X\), we use the Universal Sentence Encoder (USE) [19]. We discard examples whose similarity falls below a certain threshold value; we take 0.5 as the threshold for all of our experiments. We have implemented all of our algorithms on top of the TextAttack framework.
```
1:Input: Legal judgement prediction model \(M(\theta)\), legal sample sentence \(X=(w_{1},w_{2},\ldots,w_{n})\), perturbation generator \(P(X,i)\) which replaces \(w_{i}\) with a perturbed word using counter-fitted word embeddings
2:Output: Adversarial legal sample \(X_{adv}\)
3: Calculate the importance score \(I_{w_{i}}\) of each word \(w_{i}\) using Equation (1).
4: Take the top-\(k\) words, rank them in decreasing order of \(I_{w_{i}}\), and store them in the set \(R=(r_{1},r_{2},\ldots,r_{k})\)
5: \(X^{\prime}\gets X\)
6: for \(i=r_{1},\ldots,r_{k}\) in \(R\) do
7:   \(X_{p}\leftarrow\) perturb the sentence \(X^{\prime}\) using \(P(X^{\prime},i)\)
8:   if \(M(X_{p})\neq y\) then
9:     if \(sim(X_{p},X)>threshold\) then \(\triangleright\) Check similarity of \(X_{p}\) and \(X\)
10:      \(X^{\prime}\gets X_{p}\)
11:    end if
12:  end if
13: end for
14: return \(X^{\prime}\) as \(X_{adv}\)
```
**Algorithm 1** Adversarial Example Generation from legal Sample
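To make the deletion-based scoring of Equation (1) concrete, the following minimal sketch computes the importance score of every word in one sample; `predict_probs` is a hypothetical black-box wrapper that maps a text to the model's class-probability vector, matching the black-box setup assumed above.

```python
# Minimal sketch of the word importance scores in Equation (1).
import numpy as np

def word_importance(words, y, predict_probs):
    """Importance of each word in `words` for true label `y`."""
    probs = predict_probs(" ".join(words))
    scores = []
    for i in range(len(words)):
        probs_del = predict_probs(" ".join(words[:i] + words[i + 1:]))
        score = probs[y] - probs_del[y]       # drop in the true-label score
        y_new = int(np.argmax(probs_del))
        if int(np.argmax(probs)) == y and y_new != y:
            # Prediction flips to y': add the rise in the new label's score.
            score += probs_del[y_new] - probs[y_new]
        scores.append(score)
    return scores  # ranked in line 4 of Algorithm 1 to pick the top-k words
```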
For adversarial training, we first train the model on the natural legal dataset \(D_{nat}\) for a number of iterations \(n_{nat}\); after that, we generate adversarial examples \(D_{adv}\) using the adversarial example generator and augment the data, i.e., \(D_{new}=D_{nat}\cup D_{adv}\). We then train the model on \(D_{new}\) for a number of iterations \(n_{adv}\), using the adversarial loss function.
Let \(L_{nat}\) be the loss function used for natural training, which is defined as a cross-entropy loss function, i.e.,
\[L_{nat}=L_{\theta}(X,y) \tag{2}\]
where \(X\) is the input text and \(y\) is the label corresponding to it. If \(A_{\theta}(X,y)\) is the adversarial example generator, then the loss function for adversarial training is defined as a cross-entropy loss function, i.e.,
\[L_{adv}=L_{\theta}(A_{\theta}(X,y),y) \tag{3}\]
Our final loss function is the combination of these two cross-entropy loss functions, i.e.,
\[L=L_{nat}+\gamma L_{adv}, \tag{4}\]
where \(\gamma\) is a hyper-parameter that controls the weight of the adversarial term; the model parameters \(\theta\) are trained to minimize \(L\).
As shown in Algorithm 2, lines 3-6 represent the natural training of the model. Lines 7-18 represent the adversarial training of the model pre-trained in lines 3-6. In lines 10-15, adversarial examples are generated. Line 16 represents the augmentation of adversarial examples with the natural data, and line 17 represents the adversarial training step.
```
1:Input: Legal judgement prediction model \(M_{\theta}(.)\), adversarial example generator \(A_{\theta}(X,y)\), legal dataset \(D_{nat}=\{X_{i},y_{i}\}_{i=1}^{m}\), natural training epochs \(n_{nat}\), adversarial training epochs \(n_{adv}\)
2:Output: Adversarially trained model
3: Randomly initialize \(\theta\)
4: for \(i=1,2,\ldots,n_{nat}\) do
5:   Train \(M_{\theta}(.)\) on dataset \(D_{nat}\) using the loss function from Equation (2).
6: end for
7: for \(i=1,2,\ldots,n_{adv}\) do
8:   Initialize the set of adversarial legal examples \(D_{adv}\leftarrow\{\}\)
9:   \(K\leftarrow\) fraction of the natural dataset for which adversarial samples are generated
10:  for \(j=1,2,\ldots,size(D_{nat})\) do
11:    if \(size(D_{adv})<K*size(D_{nat})\) then
12:      \(X_{adv}\gets A_{\theta}(X_{j},y_{j})\)
13:      \(D_{adv}\gets D_{adv}\cup\{(X_{adv},y_{j})\}\)
14:    end if
15:  end for
16:  \(D_{new}\gets D_{nat}\cup D_{adv}\)
17:  Train \(M_{\theta}(.)\) on \(D_{new}\) using the loss function from Equation (4).
18: end for
```
**Algorithm 2** Adversarial Training of legal Models
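A compact PyTorch-style rendering of this procedure might look as follows. It is an illustrative sketch rather than the exact implementation: `model`, `loader`, and `generate_adversarial` (standing in for Algorithm 1) are placeholders, `X` denotes already-tokenized batches, and adversarial examples are generated per batch rather than collected once per epoch; the optimizer and epoch counts match the implementation details reported below (Adam, learning rate 1e-5, 3 natural and 7 adversarial epochs).

```python
# Sketch of Algorithm 2 with the combined loss of Equation (4).
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Lines 3-6: natural training with the loss of Equation (2).
for epoch in range(n_nat):
    for X, y in loader:
        loss = F.cross_entropy(model(X), y)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

# Lines 7-18: adversarial training with the loss of Equation (4).
for epoch in range(n_adv):
    for X, y in loader:
        X_adv = generate_adversarial(model, X, y)        # Algorithm 1
        loss = (F.cross_entropy(model(X), y)                 # L_nat
                + gamma * F.cross_entropy(model(X_adv), y))  # gamma * L_adv
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```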
## 5 Experiments and Results
### Datasets and Models
**Datasets**
**ECHR [3]:** It contains cases of the European Court of Human Rights (ECHR). The dataset has 11.5k cases, of which 7,100 are used for training, 1,380 for development, and 2,998 for the test set. The training and development sets contain cases from 1959-2013, and the test set contains cases from 2014-2018. There are 66 ECHR articles in total; however, we take a **binary representation** of the ECHR dataset, in which label 1 is assigned if any article is violated and 0 otherwise.
**SCOTUS [5]:** It is a dataset of the US Supreme Court, which hears only complex cases not well resolved by lower courts. SCOTUS is a multi-class, single-label dataset containing 14 classes covering broad areas like Civil Rights, Criminal Procedure, Economic Activity, etc. The SCOTUS cases are split into a 5k (1946-1982) training set, a 1.4k (1982-1991) development set, and a 1.4k (1991-2016) test set.
**ILDC:** The Indian Legal Document Corpus (ILDC), introduced by [9], contains cases of the Supreme Court of India (SCI) from 1947 to 2020. It is a **binary classification** dataset with labels \(\{0,1\}\) and has two versions. **1) ILDC-single** contains cases with a single petition filed; label 1 is assigned to cases whose petition is accepted, and 0 otherwise. **2) ILDC-multi** contains cases with multiple petitions filed; here label 1 is assigned to cases with at least one accepted petition, and label 0 otherwise. We take ILDC-multi for all of our experiments.
As shown in Figure 1, legal text datasets have substantial length: the average sample length is **1619** words for ECtHR, **5853** words for SCOTUS, and **3208** words for ILDC, far greater than the input size of a standard BERT architecture. Therefore, we implemented the modified Hierarchical Variant of BERT (H-BERT) architecture.
**Models**
**BERT [1]** is a pre-trained transformer-based language model. It is pre-trained to perform masked language modeling and next-sentence prediction.
**Legal-BERT [2]** is BERT pre-trained on English legal corpora containing legislation, contracts, and court cases. Its configuration is the same as that of the original BERT, and its sub-word vocabulary is built from scratch.
**Hierarchical Variant of BERT (H-BERT)** Legal documents usually have large text lengths (shown in Figure 1), for example in ECtHR, ILDC, and SCOTUS, while transformer-based models can handle only up to 512 sub-word units. We therefore implemented an architecture similar to [4], in which the text is divided into chunks of 510 tokens such that two consecutive chunks have 100 overlapping tokens. Each chunk is sent through a BERT encoder to generate a CLS embedding.
Figure 1: Text length distribution of the different datasets; the horizontal axis shows the length of the input texts and the vertical axis shows the number of inputs
As shown in Figure 2, the CLS embeddings are passed to 1-dimensional convolution and max-pooling layers. The output of the max-pooling layer is passed to a bidirectional LSTM and then to a dense layer. We take RoBERTa as the encoder here, as it gave the best results among all BERT-based models.
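A minimal PyTorch sketch of this forward pass is given below, using the chunking scheme above and the hyperparameters reported in the implementation details (32 convolution filters, a 100-unit bidirectional LSTM). It is an approximation for a single document rather than the exact architecture: attention masks, special tokens, and batching are omitted, and the `roberta-base` checkpoint is an assumption.

```python
# Approximate sketch of the H-BERT forward pass (single document).
import torch
import torch.nn as nn
from transformers import RobertaModel

class HBert(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.conv = nn.Conv1d(768, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(2)
        self.lstm = nn.LSTM(32, 100, bidirectional=True, batch_first=True)
        self.dense = nn.Linear(200, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # 510-token chunks, consecutive chunks overlapping by 100 tokens.
        step = 510 - 100
        chunks = [token_ids[i:i + 510] for i in range(0, len(token_ids), step)]
        # One 768-d CLS embedding per chunk.
        cls = torch.cat(
            [self.encoder(c.unsqueeze(0)).last_hidden_state[:, 0] for c in chunks]
        )                                        # (n_chunks, 768)
        x = cls.t().unsqueeze(0)                 # (1, 768, n_chunks)
        x = self.pool(torch.relu(self.conv(x)))  # assumes several chunks
        out, _ = self.lstm(x.permute(0, 2, 1))   # (1, L, 200)
        return self.dense(out[:, -1])            # document logits
```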
### Implementation Details
For all tasks, we use pre-trained transformer-based BERT models from the HuggingFace implementation. Each model outputs a 768-dimensional vector for each input text. The batch size is set to 8. Models are trained using the Adam optimizer with a 1e-5 learning rate for 10 epochs overall, comprising 3 epochs of natural training and 7 epochs of adversarial training. For H-BERT, we use an LSTM with 100 units and a 1-D CNN with 32 filters.
### Results
#### Results after fine tuning
We have fine-tuned the models naturally, i.e., without any augmentation (results shown in Table 1). For BERT, Legal-BERT, and RoBERTa, we have taken the last 512 tokens of each sample as input, and for H-BERT, we have divided the text into chunks, as described in Section 4. From the empirical results, we can say that H-BERT performs better than the other models, as H-BERT takes the whole text as input whereas the other models take only the last 512 tokens. Legal-BERT performs better on the ECHR and SCOTUS datasets, as it is pre-trained on legal documents from Europe and America. The performance of RoBERTa on the ILDC dataset is better than Legal-BERT because Legal-BERT is not pre-trained on Indian-origin legal data, whereas RoBERTa is pre-trained on general English datasets.

Figure 2: Adversarially Robust Neural Legal Judgement Model (H-BERT model)
#### Results after adversarial attack on naturally trained models
We feed 1000 adversarial examples generated by the adversarial example generator to the naturally trained models to check their robustness against adversarial attacks. As shown in Table 2, the naturally trained models could not handle adversarial attacks, as their accuracy decreased drastically. The accuracy of BERT decreased the most because it is not pre-trained on domain-specific (legal) datasets. In the case of H-BERT, accuracy decreased the least, because H-BERT's RoBERTa encoder is pre-trained on general English datasets and, during training, it considers the whole legal text document, whereas the other models consider only the last 512 words of each example. Legal-BERT is more robust than BERT as it is pre-trained on legal datasets. RoBERTa is pre-trained on a large corpus, so it handles adversarial attacks better than Legal-BERT.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Models** & **ECHR** & **SCOTUS** & **ILDC\({}_{multi}\)** \\ \hline BERT(\(FT\)) & 81.21 & 68.33 & 67.24 \\ Legal-BERT(\(FT\)) & 83.42 & 76.47 & 63.37 \\ RoBERTa(\(FT\)) & 79.27 & 71.69 & 71.26 \\ H-BERT(\(FT\)) & 81.03 & 78.02 & 74.89 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy of Naturally trained models, (\(FT\)) : Fine Tuning
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Models** & **ECHR** & **SCOTUS** & **ILDC\({}_{multi}\)** \\ \hline BERT(\(FT\)) & 33.12 & 36.42 & 22.59 \\ Legal-BERT(\(FT\)) & 36.27 & 41.67 & 25.26 \\ RoBERTa(\(FT\)) & 36.05 & 41.91 & 38.92 \\ H-BERT(\(FT\)) & 39.18 & 43.19 & 37.21 \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy of Naturally trained Models after attack, (\(FT\)) : Fine Tuning
#### Results after adversarial attack on model trained using data-augmentation
We feed 1000 adversarial examples to the models trained using data augmentation to check their robustness. As shown in Table 3, the accuracy of these models under attack is lower than the clean accuracy of the naturally trained models, but better than the accuracy of the naturally trained models under adversarial attack. This is because we augment the training data with extra examples that are very similar to the original data except for a few words, which makes the model more diverse and able to handle some adversarial attacks. In most cases, H-BERT performs better than the others because it considers the whole text rather than only the last 512 tokens.
#### Results after adversarial training
We implemented adversarial training using our Algorithm 2. As shown in Table 4, the accuracy of an adversarially trained model is sometimes better than that of the naturally trained model. The increase in accuracy is due to the augmentation of adversarial examples, which creates more diversity during training. The H-BERT model performs best, while Legal-BERT performs better on the ECHR and SCOTUS datasets because it is pre-trained on European and American legal documents. Figure 3 shows an adversarial example from the ILDC dataset generated during adversarial training: a slight perturbation of the text can change the label of an input. Due to the large length of the text inputs, we show only a small snippet of the perturbed example.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Models** & **ECHR** & **SCOTUS** & **ILDC\({}_{multi}\)** \\ \hline BERT(\(AT\)) **(Ours)** & 79.23 & 69.07 & 65.56 \\ Legal-BERT(\(AT\)) **(Ours)** & 82.01 & 77.02 & 61.02 \\ RoBERTa(\(AT\)) **(Ours)** & 81.73 & 70.03 & 69.97 \\ H-BERT(\(AT\)) **(Ours)** & **83.67** & **78.09** & **71.53** \\ \hline \end{tabular}
\end{table}
Table 4: Accuracy after adversarial training, (\(AT\)) : Adversarial Training
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Models** & **ECHR** & **SCOTUS** & **ILDC\({}_{multi}\)** \\ \hline BERT(\(DA\)) **(Ours)** & 38.03 & 41.12 & 32.56 \\ Legal-BERT(\(DA\)) **(Ours)** & 39.36 & 43.15 & 41.66 \\ RoBERTa(\(DA\)) **(Ours)** & 40.21 & **45.09** & 38.71 \\ H-BERT(\(DA\)) **(Ours)** & **46.10** & 45.02 & **42.03** \\ \hline \end{tabular}
\end{table}
Table 3: Accuracy after adversarial attack, (\(DA\)) : Data Augmentation
#### Results after adversarial attack on adversarially trained model
We feed 1000 adversarial examples, as before, to check the robustness of the adversarially trained models. The results, shown in Table 5, are striking: our models can handle most adversarial attacks, and their accuracy is far better than that of naturally trained models under attack. This is because, during adversarial training, the models encounter a diverse set of words that were not present earlier.
As shown in Table 5, H-BERT performs better than the other models because it is trained on whole documents. The BERT model performs worst, as it is not pre-trained on legal documents. The performance of Legal-BERT is not satisfactory on ILDC because it is pre-trained on European and American legal documents, which may contain words that differ from those in Indian legal documents.
## 6 Conclusion and Future work
In this work, we empirically showed that early existing legal models are not adversarially robust, which is a significant risk for deploying them in practice. We also presented an adversarially robust model for legal judgment prediction, trained with our adversarial training algorithm, which performs better than state-of-the-art models in the presence of adversarial examples.
For future work, we suggest building robust legal models that can be applied to legal documents in languages other than English. One can also work on zero-shot and few-shot learning in the legal domain, where very few resources are available for legal documents.
Figure 3: Original and adversarial examples from the ILDC dataset while training BERT.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Models** & **ECHR** & **SCOTUS** & **ILDC\({}_{multi}\)** \\ \hline BERT(\(AT\)) **(Ours)** & 58.96 & 52.38 & 54.46 \\ Legal-BERT(\(AT\)) **(Ours)** & 64.07 & 52.71 & 51.96 \\ RoBERTa(\(AT\)) **(Ours)** & 64.97 & 50.09 & 55.91 \\ H-BERT(\(AT\)) **(Ours)** & **69.32** & **61.53** & **58.29** \\ \hline \end{tabular}
\end{table}
Table 5: Accuracy after attack, (\(AT\)) : Adversarial Training |
2309.06629 | The Relational Bottleneck as an Inductive Bias for Efficient Abstraction | A central challenge for cognitive science is to explain how abstract concepts
are acquired from limited experience. This has often been framed in terms of a
dichotomy between connectionist and symbolic cognitive models. Here, we
highlight a recently emerging line of work that suggests a novel reconciliation
of these approaches, by exploiting an inductive bias that we term the
relational bottleneck. In that approach, neural networks are constrained via
their architecture to focus on relations between perceptual inputs, rather than
the attributes of individual inputs. We review a family of models that employ
this approach to induce abstractions in a data-efficient manner, emphasizing
their potential as candidate models for the acquisition of abstract concepts in
the human mind and brain. | Taylor W. Webb, Steven M. Frankland, Awni Altabaa, Simon Segert, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Tyler Giallanza, Zack Dulberg, Randall O'Reilly, John Lafferty, Jonathan D. Cohen | 2023-09-12T22:44:14Z | http://arxiv.org/abs/2309.06629v5 | # The Relational Bottleneck as an Inductive Bias for Efficient Abstraction
###### Abstract
A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience. This effort has often been framed in terms of a dichotomy between empiricist and nativist approaches, most recently embodied by debates concerning deep neural networks and symbolic cognitive models. Here, we highlight a recently emerging line of work that suggests a novel reconciliation of these approaches, by exploiting an inductive bias that we term the _relational bottleneck_. We review a family of models that employ this approach to induce abstractions in a data-efficient manner, emphasizing their potential as candidate models for the acquisition of abstract concepts in the human mind and brain.
## Highlights
* Human learners efficiently acquire abstract concepts from limited experience. The effort to explain this capacity has fueled debate between proponents of symbolic and connectionist approaches, and motivated proposals for hybrid neuro-symbolic systems.
* A recently emerging approach, that we term the'relational bottleneck' principle, suggests a novel way to bridge the gap. We formulate this in information theoretic terms, and review neural network architectures that implement this principle, displaying rapid learning of relational patterns, and systematic generalization of those patterns to novel inputs.
* The approach may help to explain a diverse set of phenomena, ranging from cognitive development to capacity limits in cognitive function. The approach is also consistent with findings from cognitive neuroscience, and may offer a useful general principle for designing more powerful artificial learning systems.
## Modeling the efficient induction of abstractions
Human cognition is characterized by a remarkable ability to transcend the specifics of limited experience to entertain highly general, abstract ideas. Understanding how the mind and brain accomplish this has been a central challenge throughout the history of cognitive science, and a major preoccupation of philosophy before that [1, 2, 3, 4]. Of particular importance is the central role played by _relations_, which enable human reasoners to abstract away from the details of individual entities and identify higher-order patterns across distinct domains [5, 6]. The capacity to think in terms of relations is a major
component underlying the human capacity for fluid reasoning [7, 8], and a key factor distinguishing human intelligence from that of other species [9].
Efforts to explain this capacity have often been framed in terms of a debate between **empiricism** (see Glossary), according to which concepts are primarily acquired from experience, and **nativism**, according to which certain core concepts are innate. Cognitive scientists in the empiricist tradition have for decades explored how the abstractions associated with human cognition might emerge through experience in neural architectures using general-purpose learning algorithms (often termed **connectionism**) [10, 11, 12, 13]. This endeavor has recently taken on new relevance, as the success of large language models has demonstrated that it is possible, in some cases, for a human-like capacity for abstraction to emerge given sufficient scaling of both architecture and training data [14, 15, 16, 17]. For instance, it has recently been shown that large language models can solve various analogy problems at a level equal to that of college students [18]. However, the ability of these models to perform abstract tasks (e.g., analogy) depends on exposure to a much larger training corpus than individual humans receive in an entire lifetime [19, 20].
An alternative approach, often associated with the nativist tradition, holds that human cognition arises from processes akin to symbolic programs. This approach has a long tradition in both cognitive science and AI [21, 22, 23], due to the fact that it offers a natural explanation of the flexibility of human cognition: processes that operate over symbols can be sensitive to their general structure, without respect to a particular symbol's reference [24]. Recent efforts have demonstrated that this approach is capable of inducing abstract concepts in a data-efficient manner, mirroring the efficiency of human concept learning [25, 26, 27, 28, 29, 30, 31]. However, a potential limitation of this approach is that it depends on the pre-specification of symbolic primitives. Though it remains to be seen how far this approach can be scaled, it has thus far proven challenging to identify a set of primitives that are expressive enough to account for the breadth and richness of human natural concepts. It also raises the question of how the symbolic primitives arise: are these an innate feature of the human mind, or could they too emerge in a data-efficient manner through learning?
In this review, we highlight a recently emerging approach that suggests a novel reconciliation of these two traditions. The central feature of this approach is an **inductive bias** that we refer to as the _relational bottleneck_: a constraint that biases neural network models to focus on relations between objects rather than the attributes of individual objects. This approach satisfies two key desiderata for models of abstract concept acquisition. First, the approach is capable of rapid acquisition of abstract relational concepts. Second, the approach does not require access to a set of pre-specified primitives. This latter feature distinguishes the approach from other so-called _neuro-symbolic_ approaches (see Box 2 for further discussion). In the following sections, we first provide a general characterization of this approach, drawing on concepts from information theory, and discuss a number of recently proposed neural network architectures that implement the approach. We then discuss the potential of the approach for modeling human cognition, relating it to existing cognitive theories and considering potential mechanisms through which it might be implemented in the brain.
## The relational bottleneck
We define the relational bottleneck as any mechanism that restricts the flow of information from perceptual to downstream reasoning systems to consist only of relations (see Box 1 for a formal definition). For example, given inputs representing individual visual objects, a relational bottleneck would constrain the representations passed to downstream reasoning processes such that they capture only the relations between these objects (e.g., whether the objects have the same shape), rather than the individual features of the objects (e.g., the individual shapes). Such a representation encourages downstream processes to identify relational patterns, such as the identity rule in Figure 1, in a manner that is abstracted away from the details of specific instances of those patterns, and can therefore be systematically generalized to novel inputs. In the following section, we highlight three recently proposed neural network architectures that instantiate this approach in different guises, illustrating how they utilize a relational bottleneck to induce abstract concepts in a data-efficient manner.
**Box 1: The relational bottleneck principle**
Information bottleneck theory [32] provides a normative framework for formalizing the notion of a relational bottleneck. Consider an information processing system that receives an input signal \(X\) and aims to predict a target signal \(Y\). \(X\) is processed to generate a compressed representation \(Z=f(X)\) (the 'bottleneck'), which is then used to predict \(Y\). At the heart of information bottleneck theory is the idea of 'minimal-sufficiency'. \(Z\) is _sufficient_ for predicting \(Y\) if it contains all the information \(X\) encodes about \(Y\). That is, \(I(Z;Y)=I(X;Y)\), where \(I(\cdot\,;\,\cdot)\) is the mutual information. If \(Z\) is sufficient, then we write \(X\to Z\to Y\), meaning that \(Y\) is conditionally independent of \(X\) given the compressed representation \(Z\). \(Z\) is _minimal-sufficient_ if it is sufficient for \(Y\) and does not contain any extraneous information about \(X\) which is not relevant to predicting \(Y\). That is, \(I(X;Z)\leq I(X;\tilde{Z})\) for any other sufficient compressed representation \(\tilde{Z}\).
Achieving maximum compression while retaining as much relevant information as possible is a trade-off. It is captured by the information bottleneck objective,
\[\text{minimize }\mathcal{L}(Z)=I(X;Z)-\beta I(Z;Y). \tag{1}\]
This objective reflects the tension between compression - which favors discarding information as captured by the first term - and the preservation of relevant information in \(Z\), captured by the second term. The parameter \(\beta\) controls this trade-off.
While this objective is well-defined when the joint distribution \((X,Y)\) is known, obtaining a minimal-sufficient compressed representation from data is, in general, very challenging for the high-dimensional signals that are often of interest. However, it may be possible to implicitly enforce a desirable information bottleneck for a large class of tasks through architectural inductive biases.
In particular, we hypothesize that human cognition has been fundamentally optimized for tasks that are relational in nature. We define a 'relational task' as any task for which there exists a minimal
Figure 1: **The relational bottleneck.** An inductive bias that prioritizes the representation of relations (e.g., ‘same’ vs. ‘different’), and discourages the representation of the features of individual objects (e.g., the shape or color of the objects in the images above). The result is that downstream processing is driven primarily, or even exclusively by patterns of relations, and can therefore systematically generalize those patterns across distinct instances (e.g., the common ABA pattern displayed on both left and right), even for completely novel objects. The approach is illustrated here with same/different relations, but other relations can also be accommodated. Note that this example is intended only to illustrate the overall goal of the relational bottleneck framework. Figure 2 depicts neural architectures that implement the approach.
sufficient representation \(R\) that is _purely relational_. Suppose the input signal represents a set of objects,
\[X=\left(x_{1},\,\ldots,\,x_{N}\right). \tag{2}\]
A relational signal is a signal of the form,
\[R=\left\{r(x_{i},x_{j})\right\}_{i\neq j}=\left\{r(x_{1},x_{2}),\,r(x_{1},x_{3} ),\,\ldots,\,r(x_{N-1},x_{N})\right\}, \tag{3}\]
where \(r(x_{i},x_{j})\) is a learned relation function that satisfies certain key relational properties (e.g., transitivity). One type of operation that satisfies the relevant properties is inner products of the form \(\left\langle\phi(x_{i}),\psi(x_{j})\right\rangle\). Let \(\mathcal{R}\) be the class of all possible relational representations of the input signal \(X\). In a relational task, there exists \(R\in\mathcal{R}\) which is sufficient for predicting \(Y\) (i.e., \(X\to R\to Y\)).
A relational bottleneck is any mechanism that restricts the space of all possible compressed representations to be a subset of the relational signals \(\mathcal{R}\). This gives the model a smaller space of possible compressed representations over which it must search. This space of compressed representations \(\mathcal{R}\) is guaranteed to contain a minimal-sufficient representation for the task and excludes many representations that encode extraneous information about \(X\), promoting efficient learning of relational abstractions.
## The relational bottleneck in neural architectures
Figure 2 (Key Figure) depicts three neural architectures that implement the relational bottleneck through the use of architectural inductive biases. Here, we discuss how the distinct mechanisms employed by these models implement the same underlying principle. In particular, a common aspect of all three architectures is the use of inner products to represent relations, which ensures that the resulting representations are genuinely relational. In each case, we also contrast these architectures with closely related approaches that do _not_ incorporate a relational bottleneck, emphasizing how this key architectural feature gives rise to the data-efficient induction of abstractions.
### Emergent symbol binding
We first consider the Emergent Symbol Binding Network (ESBN) (Figure 2a) [33]. The ESBN is a deep neural network architecture, augmented by an external memory, that was inspired by the notion of role-filler variable binding in cognitive models of relational reasoning [36, 37, 38]. In those models, relational reasoning is supported by the existence of separately represented 'roles', which capture information about abstract variables, and 'fillers', which capture information about concrete entities to which those variables are bound. Previous work has focused on how these roles and fillers can be dynamically bound in neural circuits. However, the role and filler representations themselves were typically pre-specified by the modeler, leaving open the question of how these representations might emerge from experience.
The ESBN adopts this key idea of separate role and filler representations, but integrates them into a larger system that can be trained end-to-end, averting the need to pre-specify those representations. The ESBN contains three major components: 1) a feedforward encoding pathway ('Encoder' in Figure 2a), which generates object embeddings from high-dimensional perceptual inputs, 2) a recurrent controller ('Controller' in Figure 2a), which operates over learned representations of abstract task-relevant variables, without direct access to the object embeddings, and 3) an external memory system responsible for binding and associating representations between these two pathways. The ESBN processes perceptual observations sequentially. For each observation, a pair of embeddings is added to memory, one from the perceptual pathway (referred to as a _key_), and one from the control pathway (referred to as a _value_)1. To read from this memory, the object embedding for the current observation (referred to as a _query_) is compared to all of the keys in memory via an inner product, yielding a set of scores (one for each key) that govern the retrieval of the associated values in the abstract control pathway.
Footnote 1: Note that the use of the terms ‘key’ and ‘value’ here is reversed relative to the original paper [33] in order to be more consistent with their usage in describing the CoRelNet and Abstract architectures.
Importantly, in this retrieval operation, the control pathway does not gain access to the _content_ of the representations in the perceptual pathway. Instead, the interaction is mediated only by the _comparison_ of perceptual representations with each other. The ESBN thus implements the relational bottleneck as an architectural prior, separating the learning and use of abstract representations by the controller from the embeddings of perceptual information. Thanks to this design feature, the ESBN is capable of rapidly learning relational patterns (such as the identity rules displayed in Figure 1), and generalizing them to **out-of-distribution** inputs (e.g., to previously unseen shapes) [33]. Critically, it can be shown to use precisely the same representation for a given role, irrespective of filler, thus exhibiting a critical feature of abstract, symbolic processing [24]. In this sense, the representations in the model's control pathway can be viewed as a form of learned 'symbols'.
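The retrieval step itself is only a few lines. The sketch below is a simplified rendering of the read operation (the full model also includes write operations and learned gates): the controller receives a similarity-weighted blend of abstract values, never the perceptual content itself.

```python
# Simplified sketch of the ESBN read operation.
import torch

def esbn_read(query, keys, values):
    """query: (d,) embedding of the current object; keys: (t, d) perceptual
    embeddings in memory; values: (t, d_v) abstract vectors bound to them."""
    scores = keys @ query                   # inner product with each key
    weights = torch.softmax(scores, dim=0)  # normalized retrieval weights
    return weights @ values                 # all the controller ever sees
```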
Figure 2: **Implementing the relational bottleneck.** Three neural architectures that implement the relational bottleneck. **(a)** Emergent Symbol Binding Network (ESBN) [33]. **(b)** Compositional Relation Network (CoRelNet) [34]. **(c)** Abstractor [35]. In all cases, high-dimensional inputs (e.g., images) are processed by a neural encoder (e.g., a convolutional network), yielding a set of object embeddings \(\mathbf{O}\). These are projected to a set of keys \(\mathbf{K}\) and queries \(\mathbf{Q}\), which are then compared, yielding a relation matrix \(\mathbf{R}\) in which each entry is an inner product between a query and a key. Abstract values \(\mathbf{V}\) are isolated from perceptual inputs (the core feature of the relational bottleneck), and depend only on the relations between them.
It is instructive to compare this model with similar approaches that do not implement the relational bottleneck. The ESBN is part of a broader family of neural network architectures that use content-addressable **external memory** - a separate store of information with which a neural network can interact via learnable read and write operations [39, 40, 41]. Notably, these read and write operations typically rely on a similarity computation (based on inner products). These have often been cast as simplified implementations of the brain's episodic memory system [42, 43]. Standard external memory architectures do not typically isolate the control and perceptual pathways. Instead, perceptual inputs are passed directly to a central controller, which is then responsible for writing to and reading from a single, monolithic memory. Though it is possible for a role-filler structure to emerge in these systems given a sufficiently large amount of training data [44], they take much longer to learn relational tasks (requiring approximately an order of magnitude more training data), and do not display the same degree of generalization [33]. Thus, although external memory plays an important role in the ESBN architecture, the presence of external memory alone is insufficient to implement a relational bottleneck. Rather, it is the _isolation_ of the perceptual and abstract processing components from one another that implements the relational bottleneck. Furthermore, as we illustrate in the following sections, it is possible to achieve this isolation without the use of external memory.
### Relation matrices
An alternative approach to implementing the relational bottleneck is illustrated by the Compositional Relation Network (CoRelNet) (Figure 2b) [34]. In that approach, a set of perceptual observations are first processed by an encoder, yielding a sequence of object embeddings. A relation matrix is then computed over all pairs of objects, in which each entry consists of the inner product between a pair of object embeddings. Finally, this relation matrix is passed to a downstream decoder network (the architecture of this network can vary, e.g., using a multilayer perceptron or transformer). This decoder is subject to a relational bottleneck, in that it only has access to the relation matrix, and does not have direct access to the object embeddings. As with the ESBN, this relational bottleneck enables CoRelNet to rapidly learn and systematically generalize relational patterns.
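In code, the bottleneck amounts to a single inner-product operation. The sketch below is illustrative (the published model adds details such as normalization of the embeddings), with `decoder` standing in for an arbitrary downstream network that never sees the embeddings themselves.

```python
# Illustrative sketch of CoRelNet's relational bottleneck.
import torch

def relation_matrix(objects: torch.Tensor) -> torch.Tensor:
    """objects: (n, d) embeddings -> (n, n) matrix of pairwise inner products."""
    return objects @ objects.t()

embeddings = torch.randn(4, 64)   # e.g., four encoded images
R = relation_matrix(embeddings)   # only R is passed downstream
logits = decoder(R.flatten())     # `decoder`: any MLP/transformer (placeholder)
```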
CoRelNet can be viewed as a feedforward, parallelized implementation of the sequential process (of encoding and similarity-based retrieval from external memory) carried out by the ESBN. This results in performance benefits, as CoRelNet does not suffer from the vanishing gradient problem that is a challenge for recurrent neural networks used to implement such sequential processing [45]. It also makes the key relational inductive bias underlying the ESBN more explicit. The ESBN's memory retrieval procedure, in which the current observation is compared to the entries in memory, can be interpreted as computing a single row of the relation matrix. In both architectures, downstream processing is constrained so as to depend only on this relation matrix, though the details of this dependency differ.
Here too, a useful contrast can be made with related architectures that do not incorporate a relational bottleneck. In particular, architectures such as the Relation Net [46] (see [47] for related approaches) explicitly perform a comparison between each pair of inputs, leading to improved performance in relational tasks. However, whereas CoRelNet represents pairwise relations using inner products, the Relation Net utilizes generic neural network components (e.g., multilayer perceptrons) that are learned in a task-dependent manner. While this is in principle more flexible, it does not constrain the network to learn representations that _only_ capture relational information. As a consequence, this architecture is susceptible to learning shortcuts consistent with the training data (i.e., overfitting to perceptual details), compromising its ability to reliably generalize learned relations to out-of-distribution inputs [48, 33, 49]. This is in contrast to the inner product operation employed by the ESBN and CoRelNet, which is inherently relational, and therefore guarantees that downstream processing is based only on relations.
### Relational attention
The recently proposed Abstractor architecture (Figure 2c) [35] illustrates how the relational bottleneck can be implemented within the broader framework of attention-based architectures (including the Transformer [50]). The Abstractor is built on a novel attention operation termed _relational cross-attention_. In this operation, a set of object embeddings (which may be produced by an encoder given perceptual observations) is converted to form keys and queries, using separate linear projections. A relation matrix is then computed, in which each entry corresponds to the inner product between a
query and key. The relation matrix is used to attend over a set of learned values, which reference objects but are independent of their attributes.
Relational cross-attention can be contrasted with the standard forms of attention employed in Transformers: self-attention and cross-attention. In self-attention, the same set of object embeddings are used to generate keys, queries, and values. In cross-attention, object embeddings are used to generate keys and values, and queries are generated by a separate decoder network. In both cases, the values over which attention is performed are based directly on the object embeddings, and the information contained in these embeddings is therefore passed on for downstream processing (thus contravening the relational bottleneck). By contrast, in _relational_ cross-attention, keys and queries are generated from object embeddings, but a separate set of learned vectors are used as values. As in the ESBN, these values can be viewed as learned 'symbols', in the sense that they are isolated from the perceptual content of the objects with which they are associated.
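A single-head sketch of relational cross-attention makes this contrast explicit; the softmax normalization and scaling factor are assumptions following standard attention conventions.

```python
# Single-head sketch of relational cross-attention.
import torch

def relational_cross_attention(objects, W_q, W_k, symbols):
    """objects: (n, d); W_q, W_k: (d, d_k) learned projections;
    symbols: (n, d_s) learned vectors, independent of the inputs."""
    Q, K = objects @ W_q, objects @ W_k
    R = torch.softmax(Q @ K.t() / K.shape[-1] ** 0.5, dim=-1)  # relation matrix
    return R @ symbols  # outputs depend on the inputs only through R
```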
This implementation of the relational bottleneck yields the same benefits observed in others: the Abstractor learns relational patterns faster than the Transformer, and displays better out-of-distribution generalization of those patterns [35]2. The Abstractor also has a few advantages relative to existing implementations of the ESBN and CoRelNet. Because the relation matrix is computed using separate key and query projections, the Abstractor is capable of representing asymmetric relations (e.g., can capture the difference in meanings between 'A is greater than B' and 'B is greater than A'). In addition, multi-headed relational cross-attention enables the Abstractor to model multi-dimensional relations. As proposed, ESBN and CoRelNet are limited to relations along a single feature dimension only. Finally, similar to Transformers, the Abstractor is a _generative_ architecture, whereas the ESBN and CoRelNet are purely discriminative3. This enables the Abstractor to perform a broader range of tasks, including the sequence-to-sequence tasks that are common in natural language processing.
Footnote 2: Although, as noted in the introduction, there is evidence that the standard Transformer architecture can learn to perform relational tasks (e.g., in the case of large language models [18]), this requires considerable amounts of data. Experiments comparing the standard Transformer architecture with the various implementations of the relational bottleneck highlighted above suggest that the latter may be substantially more data efficient, though this remains to be demonstrated at scale, and for the full range of tasks over which Transformers have been applied.
Footnote 3: It should be noted that these are not fundamental limitations of the ESBN and CoRelNet architectures. For instance, both of these architectures can be modified so as to employ separate key and query embeddings, enabling asymmetric relations to be modeled [34]. Furthermore, an alternative implementation of the ESBN has been proposed that can perform generative tasks [51].
As the examples we have considered illustrate, the relational bottleneck can be implemented in a diverse range of architectures, each with their own strengths and weaknesses. In each case, the inclusion of a relational bottleneck enables rapid learning of relations without the need for pre-specified relational primitives. In the remainder of the review, we discuss the implications of this approach for models of cognition, and consider how the relational bottleneck may relate to the architecture of the human brain.
#### Box 2: Neuro-symbolic modeling approaches
Many approaches have been proposed for hybrid systems that combine aspects of both neural and symbolic computing. Early work in this area focused on incorporating a capacity for variable-binding - a key property of symbolic systems - into connectionist systems. Notable examples of this approach include binding-by-synchrony [37], tensor product variable-binding [36], and BoltzCONS [52]. A number of vector symbolic architectures have since been proposed that build on the tensor product operation, but enable more elaborate symbolic structures to be embedded in a vector space of fixed dimensionality [53, 54, 55, 56]. These approaches have all generally relied on the use of pre-specified symbolic primitives.
More recently, hybrid systems have been developed that combine deep learning with symbolic programs. In this approach, deep learning components are typically employed to translate raw perceptual inputs, such as images or natural language, into symbolic representations, which can then be processed by traditional symbolic algorithms [57, 58, 59, 60, 61]. This approach is complemented by recent neuro-symbolic approaches to probabilistic program induction, in which symbolic primitives are pre-specified (following earlier symbolic-connectionist modeling efforts), and then deep learning is used to assemble these primitives into programs [28].
An alternative approach (which might also be viewed as neuro-symbolic in some sense) involves the integration of key features of symbolic computing within the framework of end-to-end trainable neural systems. Examples of this approach include neural production systems [62], graph neural networks [47], discrete-valued neural networks [63], and efforts to incorporate tensor product representations into end-to-end systems [64, 65]. The relational bottleneck falls into this broad category, as it incorporates key elements of symbolic computing - variable-binding and relational representations - into fully differentiable neural systems that can be trained end-to-end without the need for pre-specified symbolic primitives. Relative to these other approaches, the primary innovation of the relational bottleneck framework is the emphasis on architectural components that promote the development of genuinely relational representations.
## The relational bottleneck in the mind and brain
### Modeling the development of counting: a case study in learning abstractions
A core requirement for cognitive models of abstract concept acquisition is to account for the timecourse of acquisition during human development. A useful case study can be found in the early childhood process of learning to count [66, 67, 68]. Children typically learn to recite the count sequence (i.e. 'one, two, three,...' etc.) relatively early, but their ability to use this knowledge to count objects then proceeds in distinct stages. Each stage is characterized by the ability to reliably count sets up to a certain size (i.e., first acquiring the ability to reliably count only single objects, then to count two objects, and so on). Around the time that children learn to count sets of five, an inductive transition occurs, in which children rapidly learn to count sets of increasing size. It has been proposed that this transition corresponds to the acquisition of the 'cardinality principle' - the understanding that the last word used when counting corresponds to the number of items in a set [68].
A recent study investigated the development of counting in deep neural network architectures [69]. These included the ESBN, the Transformer, and long short-term memory (LSTM) [70] (a type of recurrent neural network). Each architecture displayed a distinct developmental timecourse. The Transformer displayed a roughly linear timecourse, taking approximately the same amount of time to master each number. The LSTM displayed an exponentially increasing timecourse, taking more time to learn each new number. Only the ESBN displayed a human-like inductive transition, gradually learning to count each number from one to four, and then rapidly acquiring the ability to count higher after learning to count to five. This was due to the ability of the ESBN to learn a procedure over the representations in its control pathway that was abstracted away from the specific numbers in the count sequence (represented in the model's perceptual pathway), allowing it to rapidly and systematically generalize between numbers. This case study illustrates how the relational bottleneck can emulate a human-like developmental trajectory for learning abstract concepts.
### Cognitive models of analogical reasoning
The relational bottleneck also has some important parallels with cognitive models of analogical reasoning. In particular, both approaches afford an especially important role to _patterns of similarity_. In traditional symbolic models, this typically takes the form of literal identicality between symbols [71]. However, more recent models employ a graded measure of similarity that can be easily applied to distributed representations, such as those derived from deep learning systems (e.g., word or image embeddings) [72, 73, 74]. In those models, a similarity matrix is computed, which is then used to identify a mapping from the elements in one situation to the elements in another situation. This use of similarity matrices has a close connection to the relation matrices (both of which are based on inner products) employed explicitly in architectures such as CoRelNet and the Abstractor, and implicitly in the retrieval operation of the ESBN. This suggests the intriguing possibility that these architectures, aided by the inductive bias of the relational bottleneck, may learn to implement a procedure similar to the mapping algorithm proposed by cognitive models of analogy.
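To make the shared computation concrete, the following minimal sketch shows how an inner-product similarity matrix over two sets of embeddings can support a soft element-to-element mapping. This is our own toy illustration: the random embeddings, dimensionalities and softmax mapping step are illustrative assumptions, not components taken from CoRelNet, the Abstractor, the ESBN, or any cited model of analogy.

```python
import numpy as np

# Random stand-ins for learned object embeddings: four "source" objects and
# four "target" objects, each an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # source objects
Y = rng.normal(size=(4, 8))   # target objects

# Relation/similarity matrix via inner products: R[i, j] is a graded
# similarity between source object i and target object j.
R = X @ Y.T                   # shape (4, 4)

# A soft source-to-target mapping read off with a row-wise softmax, loosely
# analogous to the mapping step in cognitive models of analogy.
mapping = np.exp(R) / np.exp(R).sum(axis=1, keepdims=True)
print(mapping.round(2))
```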
### Capacity limits and the curse of compositionality
The relational bottleneck principle may also help to explain the limited capacity of some cognitive processes (e.g., working memory) [75]. Recent work has demonstrated that human-like capacity limits naturally emerge in an architecture that implements the relational bottleneck [76]. In that architecture, two separate representational pools (each representing distinct feature spaces, e.g., color and location) interact via a dynamic variable-binding mechanism (in that case, implemented using rapid Hebbian learning). This architecture is conceptually similar to the ESBN, but is subject to classical efficient coding constraints--that is, limits not only on the amount of available data, but also time and memory available for optimizing a loss function. This mechanism, which is intimately related to classic neural network models of rapid learning and memory retrieval [77], enables the model to flexibly construct compositional representations (e.g., representing a visual scene by binding together spatial locations and visual features). However, this flexibility comes at the cost of relying on compositional representations that, by definition, are shared across many different, potentially competing processes (an instance of the general relationship between shared representations and cognitive capacity [78]). The model quantitatively captures capacity limits observed in three distinct cognitive domains: working memory [75], subitizing (the ability to rapidly identify the number of items in a display) [79], and absolute judgment (the ability to correctly label specific feature values such as pitch or loudness) [80].
### Brain mechanisms supporting the relational bottleneck
We close by considering how the relational bottleneck might relate to the architecture of the human brain. A central element of this framework is the presence of segregated systems for representing abstract vs. perceptual information (i.e., abstract values vs. perceptual keys/queries in the ESBN or Abstractor). A large body of findings from cognitive neuroscience suggests the presence of distinct neocortical systems for representing abstract structure (e.g., of space or events) vs. concrete entities (e.g., people or places), located in the parietal and temporal cortices respectively [81, 82, 83, 84, 85]. This factorization has also been explored in a number of recent computational models [86, 87, 88].
However, this segregation raises the question of how representations in these distinct neocortical systems are flexibly bound together. Though many proposals have been made for how the brain might solve this variable-binding problem (see Box 2), one intriguing possibility involves the episodic memory system [42]. A common view holds that episodic memory is supported by rapid synaptic plasticity in the hippocampus, which complements slower statistical learning in the neocortex [43, 89]. According to this view, episodes are encoded in the hippocampus by the rapid _binding_ of features that co-occur within an episode, while the features themselves are represented in neocortical systems. This same mechanism could in principle support an architecture similar to the ESBN, by enabling rapid binding of abstract and perceptual neocortical representations. This is in fact very similar to models of cognitive map learning, according to which distinct representations of structural vs. sensory information, corresponding to the medial vs. lateral entorhinal cortices (often viewed as extensions of the parietal and temporal neocortical systems referenced above), are bound together by rapidly formed conjunctive representations in the hippocampus [90].
That said, the extent to which variable-binding relies on the hippocampus remains an open question. Some lesion evidence suggests that hippocampal damage does not lead to impairments of abstract reasoning [91] (see Box 3 for further discussion). Other alternatives are that variable-binding may be supported by other structures capable of rapid synaptic plasticity (e.g., the cerebellum, which has been increasingly implicated in higher cognitive functions [92, 93, 94]), or by other structures (such as the prefrontal cortex) that use other mechanisms for binding (such as selective attention [95] or working memory gating [96]). The latter possibilities are consistent with findings that prefrontal damage often leads to severe deficits in abstract reasoning tasks [97, 98], and prefrontal activity is frequently implicated in neuroimaging studies of abstract reasoning [99, 100]. However, this may also reflect the role of prefrontal cortex in _representing_ abstract structure (along with the parietal system described above), rather than the _binding_ of that structural information to concrete content. Of course, it is also possible that variable-binding is supported by a collection of distinct mechanisms, rather than a single mechanism alone. These are all important questions for future work that we hope will be usefully guided by the formalisms and computational models reviewed here.
#### Box 3: Episodic memory and the relational bottleneck
The proposal that episodic memory (EM) plays a crucial role in abstract reasoning may seem to be at odds with conventional wisdom for several reasons. First, the capacity for abstraction may be assumed to fall more naturally within the province of semantic memory, which is generally assumed to encode the abstract (e.g., statistical) structure of relationships among concepts [43, 89]. The proposal considered here is not that EM _represents_ such structure, but rather that it is used to apply structural information (e.g., roles) to specific instances (e.g., fillers) by serving as a binding mechanism.
Another concern might be that reasoning processes are generally associated with working memory (WM) function [101, 95] rather than EM. However, a growing body of recent findings has suggested the potential involvement of EM in tasks that are traditionally associated with WM [102, 103, 104]. Furthermore, the functional properties of EM are well suited to perform the variable-binding operation that plays a critical role in the relational bottleneck framework. In traditional accounts of EM, rapid hippocampal plasticity serves to bind together the features of an episode, but this same mechanism is in principle capable of binding together abstract and perceptual representations (such as the key and value representations in the ESBN) in the service of an ongoing reasoning process.
As noted in the text, there are some findings that could be taken as evidence against this account: lesion studies suggesting that hippocampal damage, which leads to severe EM deficits, does not lead to comparably severe deficits in abstract reasoning [91], whereas reasoning deficits often arise from damage to prefrontal cortex [97, 98]. It is of course possible that both hippocampal and prefrontal mechanisms contribute to variable-binding in the healthy brain, but that prefrontal mechanisms alone can support variable-binding in the event of hippocampal damage. However, an alternative possibility is that EM-like processes - i.e., the rapid encoding of arbitrary but durable associations subject to similarity-based retrieval - may be supported by other brain regions not traditionally associated with EM, such as the prefrontal cortex, cerebellum, or other structures. From this perspective, the relational bottleneck framework points to a number of intriguing directions for future research concerning the nature of EM and its relationship to the capacity for abstraction.
## Concluding remarks and future directions
The human mind has a remarkable ability to acquire abstract relational concepts from relatively limited and concrete experience. Here, we have proposed the relational bottleneck as a functional principle that may explain how the human brain accomplishes such data-efficient abstraction, and highlighted recently proposed computational models that implement this principle. We have also considered how the principle relates to a range of cognitive phenomena, and how it might be implemented by the mechanisms of the human brain.
It should be noted that the framework reviewed here is not necessarily at odds with the existence of certain forms of domain-specific innate knowledge. In particular, a range of evidence from developmental psychology has suggested that humans possess certain 'core knowledge' systems, such as an innate capacity to represent objects [105, 106, 107]. These findings have motivated the development of neuro-symbolic models endowed with these innate capacities [108], although it is also possible that these findings may ultimately be accounted for by the inclusion of additional inductive biases into connectionist systems, such as mechanisms for object-centric visual processing [109, 110, 111, 112] (which have also been combined with the relational bottleneck [113]). Critically, however, it is important to emphasize that the relational bottleneck is, in principle, orthogonal to questions about these domain-specific capacities, and is focused instead on explaining the induction of abstract, domain-general concepts and relations.
There are a number of important avenues for further developing the relational bottleneck framework (see Outstanding Questions). Further work is needed to integrate the relational bottleneck with a broader range of cognitive processes relevant to abstraction, including attentional processes [114] and semantic cognition [115]. Additionally, much work has suggested that human reasoning is not purely relational, but instead depends on a mixture of concrete and abstract influences [116, 117, 118, 119]. This suggests the potential value of a more graded formulation that controls the amount
of non-relational information allowed to pass through the bottleneck. Finally, the human capacity for abstraction surely depends not only on architectural biases such as those that we have discussed here, but also on the rich educational and cultural fabric that allows us to build on the abstractions developed by others [120]. In future work, it will be important to explore the interaction between education, culture and relational inductive biases.
## Outstanding Questions
* Human reasoners often display so-called 'content effects', in which abstract reasoning processes are influenced by the specific content under consideration (and therefore are not purely abstract or relational). Can a more graded version of the relational bottleneck capture these effects, while preserving a capacity for relational abstraction?
* How can other cognitive processes (perception, attention, memory, etc.) be integrated with the relational bottleneck?
* How is the relational bottleneck implemented in the brain? To what extent does this rely on mechanisms responsible for episodic memory, attentional mechanisms, and/or other mechanisms that remain to be identified? What role do the hippocampus, prefrontal cortex, and/or other structures play in these computations?
* How do architectural biases toward relational processing interact with cultural sources of abstraction (e.g., formal education)?
## Glossary
#### Connectionism
A modeling framework in cognitive science that emphasizes the emergence of complex cognitive phenomena from the interaction of simple, neuron-like elements organized into networks, in which connections are formed through learning.
#### Empiricism
An epistemological view according to which knowledge is ultimately derived from experience. Often contrasted with nativism.
#### Episodic memory
A form of memory in which arbitrary, but durable, associations can be rapidly formed. Often thought to be implemented by hippocampal mechanisms for rapid synaptic plasticity and similarity-based retrieval.
#### External memory
In the context of neural networks, an approach that combines networks with separate external stores of information, typically with learnable mechanisms for writing to and reading from these stores, and in which retrieval is usually similarity-based (i.e., 'content-addressable'). Often used to implement a form of episodic memory.
#### Inductive bias
An assumption made by a machine learning model about the distribution of the data. In deep learning models, this often takes the form of architectural features that bias learning toward certain (typically desirable) outcomes. Genetically pre-configured aspects of brain structure can be viewed as a form of inductive bias.
#### Out-of-distribution generalization
In machine learning, generalization to a distribution that differs from the distribution observed during training.
#### Nativism
The view that certain concepts and mental capacities are innate rather than learned from experience. Often contrasted with empiricism.
## Declaration of interests
The authors declare no competing interests.
|
2306.08081 | Bouncing behaviours in four dimensional Einstein Gauss-Bonnet gravity
with Cosmography and Observational constraints | This manuscript is based on an investigation of bouncing cosmology in a 4D
Einstein Gauss-Bonnet gravity. Various bouncing models such as symmetric
bounce, matter bounce, super bounce, and oscillatory bounce have been examined.
Expressions for energy density, pressure, equation of state parameter have been
derived in the most general manner and then reduced to 4D Einstein Gauss-Bonnet
gravity for isotropic, homogenous, FLRW cosmos. Physical interpretation of
Hubble and deceleration parameters has also been discussed and plotted for each
model from non-vanishing scale factors. Non-singular bouncing models indulge in
accelerating late-time cosmic acceleration phenomenon. It has been analysed
that the Gauss-Bonnet coupling parameter has a lesser contribution to the
dynamics of modified gravity while the bouncing parameter has noticeable
effects. We have examined various energy conditions and witnessed the violation
of strong and null energy conditions in bouncing models. Analytical expressions
for jerk and snap parameters have also been calculated in terms of cosmic time
and redshift. We have explored bouncing models through specific cosmographic
tests to check their validity. Also, through stability analysis, matter bounce
becomes the most stable model by increasing the value of the bouncing
parameter. To find best-fit values, bouncing models have been constrained with
Hubble data set and $\Lambda$CDM. We have calculated the values of parameters
by applying the least-square fitting method. To make this analysis quantified,
we have employed reduced chi-squared method on $H(z)$ data sets for each model. | M. Zubair, Mushayydha Farooq | 2023-05-31T07:12:49Z | http://arxiv.org/abs/2306.08081v1 | Bouncing behaviours in four dimensional Einstein Gauss-Bonnet gravity with Cosmography and Observational constraints
###### Abstract
This manuscript is based on an investigation of bouncing cosmology in 4D Einstein Gauss-Bonnet gravity. Various bouncing models such as the symmetric bounce, matter bounce, super bounce, and oscillatory bounce have been examined. Expressions for the energy density, pressure and equation of state parameter have been derived in the most general manner and then reduced to 4D Einstein Gauss-Bonnet gravity for an isotropic, homogeneous, FLRW cosmos. The physical interpretation of the Hubble and deceleration parameters has also been discussed and plotted for each model from non-vanishing scale factors. Non-singular bouncing models accommodate the late-time cosmic acceleration phenomenon. It has been analysed that the Gauss-Bonnet coupling parameter has a lesser contribution to the dynamics of the modified gravity, while the bouncing parameter has noticeable effects. We have examined various energy conditions and witnessed the violation of the strong and null energy conditions in bouncing models. Analytical expressions for the jerk and snap parameters have also been calculated in terms of cosmic time and redshift. We have explored the bouncing models through specific cosmographic tests to check their validity. Also, through a stability analysis, the matter bounce becomes the most stable model upon increasing the value of the bouncing parameter. To find best-fit values, the bouncing models have been constrained with the Hubble data set and \(\Lambda\)CDM. We have calculated the values of the parameters by applying the least-squares fitting method. To quantify this analysis, we have employed the reduced chi-squared method on \(H(z)\) data sets for each model.
**Keywords**: 4D Einstein Gauss-Bonnet Modified Gravity; Bouncing Cosmology; Cosmography;
## I Introduction
Diverse experiments and observations have been carried out in order to test the theory of general relativity (GR) in strong and weak gravitational fields, and all are consistent with the observational data sets [1]. In fact, this theory also predicts space-time singularities under natural constraints [2]. This loophole leads us to the fact that we still need more complete theories that fully describe space-time, its composition, gravity, and its workings [3]. Beyond GR, a significant number of theories have been proposed regarding gravitation and cosmology. Among them, superstring or M-theory, formulated in higher-dimensional space-time, is the most favourable candidate. Two particular parameters have to be introduced to set up the system in superstring theory. One is the string coupling parameter \(g_{s}^{2}=e^{\phi}\), where \(\phi\) is the dilaton field; the second is the inverse string tension \(\gamma^{{}^{\prime}}\). When the value of \(\gamma^{{}^{\prime}}\) is small (the tension is high) in comparison to the energy scale of the system, it becomes challenging to excite strings, their size becomes very small, and they are regarded as particles at zeroth-order approximation. In this limiting case, GR and other light fields can be recovered. This is called the \(\gamma^{{}^{\prime}}\) expansion [4]. Driving the \(\gamma^{{}^{\prime}}\) expansion to higher orders, curvature correction terms appear. The next level involves the Gauss-Bonnet (GB) term, which yields the ordinary set of equations with at most second-order derivatives. Moreover, the \(\gamma^{{}^{\prime}}\) expansion of type IIB superstring theory includes ghost-free combinations of higher curvature terms [5; 6]. Many analyses have been performed by considering highly symmetric space-times, where the system becomes much more complicated than in GR [7; 8; 9].
The natural generalization of Einstein gravity in higher dimensions is known as Lovelock theory [10]. The action is a homogeneous polynomial in the Riemann curvature. It has the remarkable property that, despite the action being polynomial in the Riemann curvature, the equations of motion remain second order, because the higher-order terms in the action do not contribute to the equations when \(D>2N\), where \(D\) is the dimension and \(N\) is the degree of the curvature polynomial in the action. The Einstein action can be supplemented by many scalar terms made up of different combinations of matter fields and geometrical quantities such as the Einstein tensor, Ricci tensor, and Riemann tensor. One of them is the GB term. The GB invariant is the second-order Lovelock term, while GR is the first-order one. Therefore, we can say that
Lovelock gravity is a higher-dimensional generalization of GR. Hence, instead of using the GB term in its pure form, which is a total derivative in four dimensions, the GB term coupled with other fields has been used.
To understand the dynamics of the GB invariant, there are two useful scenarios: one is to couple the GB invariant with a scalar field, while the other involves a generic function of the GB term. Based on the second scenario, GB modified gravity, also known as \(f(\mathcal{G})\) modified gravity, is another theory which has gained acceptance in the last few years [11; 12]. Through this theory we can study early- as well as late-time cosmological evolution while avoiding ghost contributions [13]. The reconstruction and stability of \(f(\mathcal{G})\) modified gravity have been discussed by [14], who investigated the inflationary scenario using different models. Energy conditions for different models of \(f(G)\) modified gravity have also been evaluated by [15], who used recently updated values of the Hubble, deceleration, jerk and snap parameters to assess the viability of these models. A generalization of \(f(\mathcal{G})\) modified gravity, called \(f(\mathcal{G},T)\) modified gravity, has also been proposed [16]. Another modified theory is obtained by modifying the Einstein-Hilbert action (replacing \(R\) by \(f(R,\mathcal{G})\)) [17; 18; 19], and \(f(\mathcal{G})\) gravity is the simplest form of \(f(R,\mathcal{G})\). Following the first scenario and recent observations, a new theory involving the GB term has been put forward [20], in which the GB coupling constant is rescaled as \(\gamma\rightarrow\dfrac{\gamma}{(D-4)}\). By substituting \(D=4\) in the full equations, the theory reduces to 4D. In this way the GB term contributes to the gravitational dynamics, and this idea is known as 4D Einstein Gauss-Bonnet (EGB) theory. This theory bypasses the conclusions of Lovelock's theorem and stays away from the Ostrogradsky instability [21]. Black hole solutions have been investigated in 4D EGB gravity under various circumstances, including Vaidya-like radiating solutions, coupling to magnetic charge, and nonlinear electrodynamics [22; 23; 24; 25; 26; 27; 28; 29]. Moreover, investigations have also been made to understand quasi-normal modes, the deflection of light, and the shadow cast by black holes [30; 31; 32]. In 4D EGB, exact spherically symmetric wormhole solutions for isotropic and anisotropic matter have been obtained by considering a radial shape function and a power-law density profile [33]. Moreover, the possible reconstruction of strange stars has been investigated in quark matter phases within the background of 4D EGB [34], where it was found that the GB term makes a nontrivial contribution to the gravitational dynamics. Electrically charged quark stars with static spherically symmetric spacetime have also been explored, and the impact of the GB coupling constant on the mass-radius relation has been calculated [35]. The cosmological implications of constrained EGB gravity have been evaluated in 4D, where the authors concluded that the matter density falls off more rapidly at larger values of redshift [36].
One of the most crucial cosmological problems is the cosmological singularity, which is to some extent resolved by the introduction of bouncing cosmology [37; 38]. It has been found that bouncing cosmologies are a substitute for standard inflationary theories [39; 40]. Many efforts have been devoted to studying bouncing cosmology in the framework of different modified theories. An investigation of bouncing cosmology has been made in Teleparallel Gravity (TG); through a detailed analysis, it has been observed that bouncing cosmologies turn up as a natural outcome in various early-universe frameworks [41; 42; 43; 44]. In \(f(T)\) gravity, the possibility of matter bounce cosmology has also been studied. Many attempts have been made on the effective field theory of loop quantum gravity in TG, which gives reliable results aligned with the BICEP2 and Planck experimental data [45; 46]. Besides this, a number of researchers have investigated bouncing cosmology in GB invariant theories, leading to several models in which bouncing cosmologies can arise in early-universe scenarios [47; 48; 49; 50]. A non-singular bouncing cosmology has been presented by using scalar matter with a non-standard kinetic term [51]; the authors used standard matching conditions and concluded that the spectral index remains the same during the bounce. A review of the successes of Inflationary Cosmology and String Gas Cosmology, and of how the cosmological fluctuations generated by these two cosmological models correspond to current data, has been given by [52]. At the classical and quantum levels, the NEC is violated and cyclic bouncing cosmology scenarios are not possible, while all other scenarios remain valid [53]. Recently, non-singular bouncing cosmology and a scale-invariant power spectrum have been discussed in [54; 55], where the authors utilized a single scalar field coupled with gravity in the background of DHOST theories. Moreover, the relationship of bouncing models with the necessary parameters in DHOST cosmology has also been investigated in [56]. Bouncing solutions have been explored by considering a logarithmic trace term and a linear trace term in \(f(G,T)\) modified gravity, and it was concluded that the NEC and SEC are violated [57]. Motivated by [47], bouncing solutions have also been studied in \(f(T,B)\) modified theory by considering different types of gravitational Lagrangians and bouncing models [58]. Similarly, the cosmological matter bounce model has also been discussed in the framework of symmetric teleparallel \(f(Q)\) gravity with two gravitational Lagrangians, whose stability and energy conditions were examined [59]. In addition, the matter bounce model has been used in the reconstruction of \(f(R,T)\) modified gravity [60]. Exponential and power-law bouncing models have been used to reconstruct \(f(R)\) and \(f(G)\) modified gravity, and a second-order polynomial has further been constructed to check stability [61; 62].
The present study attempts to explore bouncing models in the framework of 4D EGB gravity with a flat, isotropic FRW universe. The present analysis is organized as follows. Section **II** consists of the basic formulation of higher-dimensional and 4D EGB gravity. Section **III** studies four bouncing models, namely the symmetric bounce, matter bounce, super bounce and oscillatory cosmology, in 4D EGB gravity. In sections **IV** and **V**, we discuss the energy conditions and the cosmography of the bouncing models in terms of cosmic time and redshift. Moreover, the stability of the bouncing models is evaluated in section **VI**. In section **VII**, we fit the bouncing models to observational data sets. The last section, **VIII**, concludes our findings.
## II Basic formulation of Friedmann equation in 4D Einstein Gauss-Bonnet gravity
In D-dimensional space-time, EGB gravity is derived from the following action [20]
\[\mathbb{I}_{\mathcal{G}}=\int d^{D}x\sqrt{-g}\bigg{[}\frac{M_{p}^{2}R}{2}+ \frac{\gamma\mathcal{G}}{D-4}\bigg{]}+\mathcal{S}_{matter}, \tag{1}\]
Here \(g\) is the determinant of \(g_{\alpha\beta}\), and \(R\) is the Ricci scalar, which provides the GR part of the action. \(M_{p}\) is the reduced Planck mass in terms of the gravitational constant, i.e. \(M_{p}=\sqrt{\frac{1}{8\pi G}}\) (\(M_{p}=2.436\times 10^{18}\,GeV\)). The second term in the action is the GB contribution, where \(\mathcal{G}\) is the GB invariant and \(\gamma\) is the GB coupling constant. The third term contains the baryonic and dark matter components. The GB term is defined as [11]
\[\mathcal{G}=R^{\alpha\beta\mu\nu}R_{\alpha\beta\mu\nu}-4R^{\alpha\beta}R_{ \alpha\beta}+R^{2}, \tag{2}\]
where \(R_{\alpha\beta}\) is the Ricci tensor and \(R_{\alpha\beta\mu\nu}\) is the Riemann tensor. Varying equation (1) with respect to the metric tensor \(g_{\alpha\beta}\), the required equation is
\[G_{\alpha\beta}=\frac{1}{M_{p}^{2}}\bigg{(}\frac{\gamma\mathcal{G}}{D-4}H_{ \alpha\beta}+T_{\alpha\beta}\bigg{)}, \tag{3}\]
where \(G_{\alpha\beta}\) is the Einstein tensor and \(H_{\alpha\beta}\) is the Lanczos tensor, and the energy-momentum tensor for \(\mathcal{S}_{matter}\) is defined as
\[T_{\alpha\beta}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{S}_{matter}) }{\delta g^{\alpha\beta}}, \tag{4}\]
The expression for the Lanczos tensor is
\[H_{\alpha\beta}=2\bigg{(}RR_{\alpha\beta}-2R_{\alpha\mu}R_{\beta}^{\mu}-2R_{ \alpha\mu\beta\nu}R^{\alpha\beta}-R_{\alpha\mu\nu\delta}R_{\beta}^{\mu\nu \delta}\bigg{)}-\frac{1}{2}\mathcal{G}g_{\alpha\beta}, \tag{5}\]
In D dimensions, the flat FLRW space-time is defined as
\[ds^{2}=a^{2}(t)dx_{1}^{2}+a^{2}(t)dx_{2}^{2}+a^{2}(t)dx_{3}^{2}+a^{2}(t)dx_{4} ^{2}+......-dt^{2}. \tag{6}\]
where \(a(t)\) is the cosmic scale factor, which describes the expansion of the universe. It is dimensionless and the key parameter of the FLRW space-time. The GB scalar term for FLRW is derived as
\[\mathcal{G}=(D-1)(D-2)(D-3)4H^{2}(\dot{H}+H^{2})+(D-1)(D-2)(D-3)(D-4)H^{4}. \tag{7}\]
By varying the action (1) with respect to \(g_{\alpha\beta}\), the following non-zero components are obtained,
\[\frac{(D-1)(D-2)H^{2}}{2}=-\frac{\gamma H^{4}}{M_{p}^{2}}(D-1)(D-2)(D-3)(D-4)+\frac{\rho}{M_{p}^{2}}, \tag{8}\]
\[-\frac{(D-2)(D-3)H^{2}}{2}-\frac{(D-2)\ddot{a}}{a}=\frac{p}{M_{p}^{2}}+\frac{\gamma}{M_{p}^{2}}\bigg{[}(D-4)(D-3)(D-2)(D-5)H^{4}+4(D-4)(D-3)(D-2)\frac{\ddot{a}}{a}H^{2}\bigg{]}. \tag{9}\]
Here, the Hubble parameter is \(H=\frac{\dot{a}}{a}\), \(\rho\) is the energy density and \(p\) is the pressure. Eq. (9) can be re-written as
\[\dot{H}=-\frac{\rho+p}{4\gamma(D-2)(D-3)(D-4)H^{2}+(D-4)M_{p}^{2}}. \tag{10}\]
The field equations of D-dimensional EGB gravity in FLRW space-time, for the energy density \(\rho\), pressure \(p\) and equation of state (EoS) parameter \(w=\dfrac{p}{\rho}\), are expressed as
\[\rho=\gamma(D-1)(D-2)(D-3)(D-4)H^{4}+\dfrac{(D-1)(D-2)M_{p}^{2}H^{2}}{2}, \tag{11}\]
\[p=-\gamma(D-1)(D-2)(D-3)(D-4)H^{4}-\dfrac{(D-1)(D-2)M_{p}^{2}H^{2}}{2}-\dot{H}(D-2)\bigg{(}4\gamma(D-3)(D-4)H^{2}+M_{p}^{2}\bigg{)}, \tag{12}\]
\[w=-1-\dfrac{\dot{H}(D-2)\bigg{(}4\gamma(D-3)(D-4)H^{2}+M_{p}^{2}\bigg{)}}{\gamma(D-1)(D-2)(D-3)(D-4)H^{4}+\dfrac{(D-1)(D-2)M_{p}^{2}H^{2}}{2}}. \tag{13}\]
These equations can be reduced to 4D by rescaling the coupling as \(\gamma\rightarrow\dfrac{\gamma}{(D-4)}\), so that the combination \(\gamma(D-4)\) remains finite and non-zero, and then setting \(D=4\). The energy density, pressure and EoS parameter then reduce as follows
\[\rho=3M_{p}^{2}H^{2}+6\gamma H^{4}, \tag{14}\]
\[p=-3M_{p}^{2}H^{2}-6\gamma H^{4}-2\dot{H}\bigg{(}4\gamma H^{2}+M_{p}^{2}\bigg{)}, \tag{15}\]
\[w=-1-\dfrac{2\dot{H}\bigg{(}4\gamma H^{2}+M_{p}^{2}\bigg{)}}{3M_{p}^{2}H^{2}+6\gamma H^{4}}. \tag{16}\]
In 4D FLRW space, the Ricci scalar \(R\) and the GB term \(\mathcal{G}\) are given by \(R=6(\dot{H}+2H^{2})\) and \(\mathcal{G}=24H^{2}(H^{2}+\dot{H})\). We can see from here that the modification to the cosmic evolution enters through \(\rho\), \(p\) and \(w\), and the dynamical behaviour of these parameters depends on the GB coupling parameter \(\gamma\). For \(\gamma=0\), we recover the EoS parameter of GR.
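As a quick consistency check of this reduction (our own illustration, not part of the original derivation), the following sympy sketch substitutes the rescaled coupling \(\gamma\rightarrow\gamma/(D-4)\) into Eq. (11) and confirms that the limit \(D\to 4\) reproduces Eq. (14).

```python
import sympy as sp

D, gamma, H, Mp = sp.symbols('D gamma H M_p', positive=True)

# D-dimensional energy density, Eq. (11)
rho_D = gamma*(D - 1)*(D - 2)*(D - 3)*(D - 4)*H**4 + (D - 1)*(D - 2)*Mp**2*H**2/2

# Rescale the coupling, gamma -> gamma/(D - 4), then take the limit D -> 4
rho_4D = sp.limit(rho_D.subs(gamma, gamma/(D - 4)), D, 4)
print(sp.expand(rho_4D))   # 6*H**4*gamma + 3*H**2*M_p**2, i.e. Eq. (14)
```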
## III Bouncing behaviour in 4D Einstein Gauss-Bonnet gravity
In the present analysis, we intend to discuss various bouncing scenarios in 4D EGB gravity. This investigation comprises the dynamics of the energy density \(\rho\), pressure \(p\) and EoS parameter \(w\); the preceding section gives their expressions in D-dimensional and 4D EGB gravity in terms of the Hubble parameter. In general, the following conditions are satisfied by bouncing models.
\(\bullet\) Bouncing models experience a contracting phase before the non-singular bounce, _i.e._, the scale factor \(a(t)\) decreases with time, \(\dot{a}(t)<0\). Therefore, the Hubble parameter \(H=\dfrac{\dot{a}}{a}<0\) represents the contracting era of the universe.
\(\bullet\) The derivative of the scale factor vanishes at the bouncing point, \(\dot{a}(t)=0\). Accordingly, the Hubble parameter vanishes at the bouncing point, \(H=0\). For a homogeneous and flat FRW universe, the EoS parameter \(w\) and deceleration parameter \(q\) are expressed as \(w=-1-\dfrac{2\dot{H}}{3H^{2}}\) and \(q=-1-\dfrac{\dot{H}}{H^{2}}\), respectively. It can be seen that at the bounce point both expressions show singular behaviour.
\(\bullet\) After the bouncing point, the scale factor \(a(t)\) starts increasing with cosmic time \(t\); this implies \(\dot{a}(t)>0\) and therefore \(H>0\). Close to the bouncing point, the acceleration should give rise to positive values of the derivative of \(H\) (\(\dot{H}>0\)). For a bouncing scenario, the EoS parameter evolves in a phantom era.
This part of the manuscript consists of the dynamics of the geometric parameters, i.e. the scale factor, Hubble parameter and deceleration parameter, in 4D EGB gravity. Here, the contribution of the theory is measured by the GB coupling parameter \(\gamma\), while the bouncing effects are measured by the bouncing parameter. This section covers four bouncing models: the symmetric bounce, matter bounce, super bounce, and oscillatory bounce. Furthermore, the behaviour of the energy density, pressure and EoS parameter is also studied in terms of cosmic time.
### Symmetric Bounce
The symmetric bounce can be pictured through the exponential scale factor [63; 64],
\[a(t)=e^{\lambda t^{2}}, \tag{17}\]
Positive values of \(\lambda\) correspond to cosmic expansion and negative values of \(\lambda\) to cosmic contraction. As the Universe is currently expanding, there is no need to discuss the contraction phase, and therefore we set \(\lambda>0\). The bouncing point appears at \(t=0\). The Hubble parameter is expressed as
\[H(t)=2\lambda t. \tag{18}\]
Figure 1(a) shows the evolutionary behaviour of the scale factor for four different values of the bouncing parameter, \(\lambda=1,2,3,4\). The scale factor is symmetric for \(t<0\) and \(t>0\), with \(t=0\) the bouncing epoch. With an increase in the bouncing parameter \(\lambda\) the scale factor grows faster, and the curvature of the curves is directly proportional to \(\lambda\). Figure 1 (b) shows the behaviour of \(H(t)\) for the four values of \(\lambda\). Since \(H(t)\) is negative for \(t<0\) and positive for \(t>0\), the Hubble parameter ranges from the negative to the positive domain, and the bouncing parameter only scales the numerical values of \(H(t)\). The deceleration parameter in terms of the Hubble parameter is defined as
\[q+1=-\frac{\dot{H}}{H^{2}}, \tag{19}\]
For the symmetric scale factor, the deceleration parameter is
\[q(t)=-1-\frac{1}{2\lambda t^{2}}. \tag{20}\]
The positive range of \(q(t)\) corresponds to a decelerating universe and the negative range to an accelerating one. Figure 1 (c) shows negative values of the deceleration parameter \(q(t)\) for all values of cosmic time \(t\); therefore, the symmetric bouncing model always predicts an accelerated phase. It can also be observed from Figure 1 (c) that \(q(t)\) is symmetric around \(t=0\). For large negative values of cosmic time \(t\), \(q(t)\) stays at \(q=-1\) and gradually decreases towards large negative values close to the bouncing point; likewise, for large positive values of \(t\), \(q(t)\) approaches \(q=-1\) away from the bounce. For the present model, the energy density \(\rho(t)\), pressure \(p(t)\) and EoS parameter \(w(t)\) are expressed as
\[\rho(t)=12M_{p}^{2}t^{2}\lambda^{2}+96t^{4}\gamma\lambda^{4}, \tag{21}\]
\[p(t)=-12M_{p}^{2}t^{2}\lambda^{2}-96t^{4}\gamma\lambda^{4}-4\lambda\bigg{(}M_{p}^{2}+16t^{2}\gamma\lambda^{2}\bigg{)}, \tag{22}\]
\[w(t)=-1-\frac{4\lambda\bigg{(}M_{p}^{2}+16t^{2}\gamma\lambda^{2}\bigg{)}}{12M_{p}^{2}t^{2}\lambda^{2}+96t^{4}\gamma\lambda^{4}}. \tag{23}\]
Figure 1: Evolution of \(a(t)\), \(H(t)\) and \(q(t)\) against \(t\) for different values of \(\lambda\).

The expressions for the energy density \(\rho(t)\) and pressure \(p(t)\) show their dependence on the bouncing parameter \(\lambda\) and the GB coupling parameter \(\gamma\). It can be observed from the energy density expression that it always remains positive, for both \(\lambda<0\) and \(\lambda>0\), as the expression contains only even powers. Figures 2 (a) and (b) depict the behaviour of the energy density \(\rho(t)\) and pressure \(p(t)\): with an increase in the bouncing parameter, the curvature of the energy density curves increases, while the pressure shows the same behaviour in the negative range. The evolutionary behaviour of the EoS parameter against cosmic time for different choices of \(\lambda\) and \(\gamma\) is displayed in Figures 3 (a) and (b). If \(\gamma\) is set equal to zero in equation (23), the result reduces to the EoS parameter in GR, which is \(w=-1-\dfrac{1}{3\lambda t^{2}}\). In the symmetric bouncing cosmology, the EoS parameter evolves in the phantom region as cosmic time runs from large negative to large positive values, except at the bouncing point. \(w\) varies more rapidly near the bounce than far away from it, and equation (23) shows that the bouncing parameter \(\lambda\) contributes more significantly near the bounce than far from it. The EoS parameter shows singular behaviour at \(t=0\), where \(w(t)\rightarrow\infty\). In order to regularize the EoS parameter, we apply L'Hospital's rule and obtain \(w=-\dfrac{16\gamma\lambda}{3\left(M_{p}^{2}+48\gamma t^{2}\right)}\). We have constrained \(\lambda\), \(\gamma\) and \(M_{p}\) through the condition \(w<-1\implies\dfrac{16\gamma\lambda}{3}<M_{p}^{2}\). Using this constraint, we have examined the contribution of the GB coupling parameter \(\gamma\) for fixed values of the bouncing parameter \(\lambda\), as plotted in Figure 3 (a). It can be observed from Figure 3 (b) that there is an appreciable contribution of \(\gamma\); we have checked this for \(\gamma=1,2,3,4\) up to large values of \(\gamma\), with the bouncing parameter held constant at \(\lambda=1\).
Figure 2: The left plot (a) corresponds to the behaviour of \(\rho(t)\), whereas the right plot (b) depicts the behaviour of \(p(t)\). Both graphs are plotted versus \(t\) for different values of \(\lambda\) and \(\gamma=2\).

Figure 3: The plot (a) corresponds to the behaviour of \(w(t)\) against \(t\) when \(\gamma=0.1\) and \(\lambda\) is varying, whereas graph (b) depicts the behaviour of \(w(t)\) against \(t\) when \(\lambda=1\) while \(\gamma\) is varying.
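The closed forms in Eqs. (21)-(23) are simple enough to evaluate directly. The sketch below is a minimal illustration with arbitrarily chosen values \(\lambda=1\), \(\gamma=0.1\) and units \(M_{p}=1\) (not the values used in the figures); it evaluates \(w(t)\) away from the bounce and confirms the phantom behaviour \(w<-1\).

```python
import numpy as np

# Illustrative parameter values (M_p set to 1); not those used in the figures.
Mp, lam, gam = 1.0, 1.0, 0.1

t = np.linspace(0.1, 3.0, 50)           # avoid t = 0, where w diverges
H = 2*lam*t                             # Eq. (18)
Hdot = 2*lam
rho = 3*Mp**2*H**2 + 6*gam*H**4         # Eq. (21)
p = -rho - 2*Hdot*(4*gam*H**2 + Mp**2)  # Eq. (22)
w = p/rho                               # Eq. (23)

print(np.all(w < -1))                   # True: phantom regime away from the bounce
```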
### Matter Bounce
In this subsection, we examine another bouncing scale factor in 4D EGB gravity, defined as [60]
\[a(t)=\left(a_{0}+\alpha^{2}t^{2}\right)^{\dfrac{1}{2}}, \tag{24}\]
Here, \(\alpha\) is a positive parameter, as this study concerns the accelerated expansion phase, and \(a_{0}\) sets the size of the scale factor at the bouncing point. Special forms of the matter bounce scale factor, \(a(t)=\left(a_{0}+\alpha^{2}t^{2}\right)^{\dfrac{1}{3}}\) and \(a(t)=\left(a_{0}+\alpha^{2}t^{2}\right)^{\dfrac{1}{4}}\), as well as other powers \((\dfrac{2}{3},\dfrac{4}{3},\dfrac{3}{2},\dfrac{3}{4})\), have also been studied by [65; 66; 67; 68; 69]. These bouncing scenarios realize a non-singular bounce connected to a contracting matter-dominated state. Such models provide alternatives to inflation by reproducing the observed spectrum of cosmological fluctuations. They do not satisfy the SEC near the bouncing epoch, which requires introducing a new form of matter in the background of GR; one concludes that, to understand bouncing cosmology while keeping the matter content unmodified, one must go beyond GR [70]. Figure 4 (a) shows the behaviour of the scale factor against cosmic time for different values of the bouncing parameter \(\alpha\). It can be observed that the bouncing point is at \(t=0\) and the evolution is symmetric. The slope of the scale factor \(a(t)\) depends on the bouncing parameter \(\alpha\); in short, \(\alpha\) is the prominent factor controlling the slope of \(a(t)\). The expression for the Hubble parameter is
\[H(t)=\dfrac{t\alpha^{2}}{a_{0}+\alpha^{2}t^{2}}. \tag{25}\]
Figure 4 (b) shows the variation of the Hubble parameter against cosmic time for different values of the bouncing parameter. The deceleration parameter for the present model is
\[q(t)=-\dfrac{a_{0}}{t^{2}\alpha^{2}}. \tag{26}\]
Figure 4: Evolution of \(a(t)\), \(H(t)\) and \(q(t)\) against \(t\) for different values of \(\alpha\).

Figure 4 (c) shows the behaviour of the deceleration parameter; its negative range indicates an accelerated phase of expansion. The energy density \(\rho(t)\), pressure \(p(t)\) and EoS parameter \(w(t)\) for the present model are as follows
\[\rho(t)=\frac{6t^{4}\gamma\alpha^{8}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{4}}+\frac{3M_{p}^{2}t^{2}\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}, \tag{27}\]
\[p(t)=-\frac{6t^{4}\gamma\alpha^{8}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{4}}-\frac{3M_{p}^{2}t^{2}\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}-2\bigg{(}M_{p}^{2}+\frac{4t^{2}\gamma\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}\bigg{)}\bigg{[}\frac{\alpha^{2}a_{0}-t^{2}\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}\bigg{]}, \tag{28}\]
\[w(t)=-1-\frac{2\bigg{(}M_{p}^{2}+\frac{4t^{2}\gamma\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}\bigg{)}\bigg{[}\frac{\alpha^{2}a_{0}-t^{2}\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}\bigg{]}}{\frac{6t^{4}\gamma\alpha^{8}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{4}}+\frac{3M_{p}^{2}t^{2}\alpha^{4}}{\left(t^{2}\alpha^{2}+a_{0}\right)^{2}}}. \tag{29}\]
It is observed from the above expressions that the energy density and pressure are functions of the bouncing parameter \(\alpha\) and the GB coupling parameter \(\gamma\). In the present work, we have considered suitable values of these parameters and plotted the quantities against cosmic time, as depicted in Figures 5 (a) and (b).

Figure 5: The left plot (a) corresponds to the behaviour of \(\rho(t)\), whereas the right plot (b) depicts the behaviour of \(p(t)\). Both graphs are plotted versus \(t\) for different values of \(\alpha\) and \(\gamma=2\).
The dynamical behaviour of the EoS parameter is plotted in Figure 6 for different values of \(\alpha\) and \(\gamma\). The EoS parameter depends largely on the bouncing parameter \(\alpha\) for non-zero constant values of \(\gamma\). The results show symmetric behaviour near the bounce. As in the symmetric bounce, the matter bounce has less dependence on \(\alpha\) near the bouncing point. Near the bouncing point, \(w\) exhibits singular effects, which can be removed by constraining the EoS parameter; the condition is \(\frac{8\gamma\alpha^{2}}{a_{0}^{2}}<M_{p}^{2}\). Figures 6 (a) and (b) show the effect on the EoS parameter \(w\) of varying \(\alpha\) with \(\gamma\) fixed, and of varying \(\gamma\) with \(\alpha\) fixed, respectively. Both plots depict the phantom region, as \(w<-1\). It is quite significant that the bouncing parameter \(\alpha\) has a stronger effect on the EoS parameter than the GB coupling parameter \(\gamma\).
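Since \(H(t)\) and \(q(t)\) follow mechanically from the scale factor, Eqs. (24)-(26) admit a quick symbolic check. The sketch below is our own verification aid, not code from the original analysis.

```python
import sympy as sp

t, alpha, a0 = sp.symbols('t alpha a_0', positive=True)

a = sp.sqrt(a0 + alpha**2*t**2)            # Eq. (24)
H = sp.simplify(sp.diff(a, t)/a)           # should reproduce Eq. (25)
q = sp.simplify(-1 - sp.diff(H, t)/H**2)   # should reproduce Eq. (26)
print(H)   # alpha**2*t/(a_0 + alpha**2*t**2)
print(q)   # -a_0/(alpha**2*t**2)
```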
### Super Bounce
The super bounce is described by the power-law scale factor [58]
\[a(t)=a_{0}+\beta t^{2n}. \tag{30}\]
The Hubble parameter for the above model is expressed as
\[H(t)=\frac{2nt^{-1+2n}\beta}{t^{2n}\beta+a_{0}}, \tag{31}\]
where \(a_{0}\) and \(\beta\) are positive constants and \(n\) is a positive natural number. The bouncing point occurs at \(t=0\). The scale factor decreases for \(t<0\) and increases for \(t>0\), expressing the contraction and expansion phases, respectively. If \(a_{0}\) is set to zero, a bounce with a singularity appears, because the Hubble parameter is not defined at \(t=0\); we then find divergent behaviour of the Hubble parameter \(H\) and the GB invariant \(\mathcal{G}\). The scale factor, however, keeps on increasing and does not become singular. The deceleration parameter is defined as follows,
\[q(t)=-\frac{(-1+2n)t^{-2n}(t^{2n}\beta+a_{0})}{2n\beta}. \tag{32}\]
Figures 7 (a) and (b) show the variation of the scale factor and Hubble parameter for different values of the bouncing parameter, respectively. Figure 7 (c) shows that the universe evolves in an accelerated phase of expansion.

Figure 6: The left plot (a) corresponds to the behaviour of \(w(t)\) against \(t\) when \(\gamma=1\) and \(\alpha\) is varying, whereas graph (b) depicts \(w(t)\) against \(t\) when \(\alpha=1\) is fixed while \(\gamma\) is varying.

Figure 7: Evolution of \(a(t)\), \(H(t)\) and \(q(t)\) against \(t\) for different values of \(\beta\).

Now, for the super bounce, the energy density \(\rho(t)\), pressure \(p(t)\) and EoS parameter \(w(t)\) are as follows,
\[\rho(t)=\frac{96n^{4}t^{-4+8n}\gamma\beta^{4}}{\left(t^{2n}\beta+a_{0}\right)^{4}}+\frac{12M_{p}^{2}n^{2}t^{-2+4n}\beta^{2}}{\left(t^{2n}\beta+a_{0}\right)^{2}}, \tag{33}\]
\[p(t)=-\frac{96n^{4}t^{-4+8n}\gamma\beta^{4}}{\left(t^{2n}\beta+a_{0}\right)^{4}}-\frac{12M_{p}^{2}n^{2}t^{-2+4n}\beta^{2}}{\left(t^{2n}\beta+a_{0}\right)^{2}}-2\bigg{[}M_{p}^{2}+\frac{16n^{2}t^{-2+4n}\gamma\beta^{2}}{\left(t^{2n}\beta+a_{0}\right)^{2}}\bigg{]}\bigg{[}-\frac{4n^{2}t^{-2+4n}\beta^{2}}{\left(t^{2n}\beta+a_{0}\right)^{2}}+\frac{2n(-1+2n)t^{-2+2n}\beta}{t^{2n}\beta+a_{0}}\bigg{]}, \tag{34}\]
\[w(t) = -1-\frac{\bigg{[}t^{-2n}\bigg{(}t^{2n}\beta+(1-2n)a_{0}\bigg{)} \bigg{(}t^{4n}(M_{p}^{2}t^{2}+16n^{2}\gamma)\beta^{2}+2M_{p}^{2}t^{2+2n}\beta a _{0}+M_{p}^{2}t^{2}a_{0}^{2}\bigg{)}\bigg{]}}{3n\beta\bigg{(}t^{4n}(M_{p}^{2} t^{2}+8n^{2}\gamma)\beta^{2}+2M_{p}^{2}t^{2+2n}\beta a_{0}+M_{p}^{2}t^{2}a_{0}^{2} \bigg{)}}. \tag{35}\]
From Figure 8(a) it can be observed that \(\rho(t)\) is positive, while the opposite behaviour can be observed in Figure 8(b) for the pressure profile. The plots for the EoS parameter \(w(t)\) are shown in Figure 9. The EoS parameter shows a singularity at \(t=0\) (the bouncing point). For the super bounce, constraining the EoS parameter yields the condition \(\frac{8\gamma\beta}{a_{0}}<M_{p}^{2}\) for \(n=1\). Figure 9(a) illustrates that the effect of \(\beta\) on the EoS parameter is quite prominent compared to that of \(\gamma\) when plotted against cosmic time.

Figure 8: The left plot (a) corresponds to the behaviour of \(\rho(t)\), whereas the right plot (b) depicts the behaviour of \(p(t)\). Both graphs are plotted versus \(t\) for different values of \(\beta\) and \(\gamma=2\).

Figure 9: The left plot (a) corresponds to the behaviour of \(w(t)\) against \(t\) when \(\gamma=1\) and \(\beta\) is varying, whereas plot (b) depicts the behaviour of \(w(t)\) against \(t\) when \(\beta=1\) while \(\gamma\) is varying.
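As with the previous model, Eqs. (31) and (32) can be re-derived symbolically from the scale factor. The following sketch (our own verification aid, with \(n\) kept symbolic) performs this check.

```python
import sympy as sp

t, beta, a0 = sp.symbols('t beta a_0', positive=True)
n = sp.Symbol('n', positive=True, integer=True)

a = a0 + beta*t**(2*n)                     # Eq. (30)
H = sp.simplify(sp.diff(a, t)/a)           # should reproduce Eq. (31)
q = sp.simplify(-1 - sp.diff(H, t)/H**2)   # analytically equal to Eq. (32)
print(H)
print(q)
```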
### Oscillatory Bounce
The oscillatory bounce can be expressed through the following function [58].
\[a(t)=sin^{2}(\zeta t). \tag{36}\]
It represents a cyclic universe with self-sustaining, infinite cycles. The Hubble parameter for the oscillatory bounce is
\[H(t)=2\zeta cot(\zeta t). \tag{37}\]
In a cyclic universe, a series of contractions and expansions is experienced. When the scale factor becomes zero, a singularity appears in each cycle, and the Hubble parameter also becomes singular. The bounce which takes place at \(\zeta t=n\pi\) (where \(n\) is an integer) corresponds to a big bang singularity; this singularity can be removed by using a non-singular scale factor. At \(\zeta t=(2n+1)\pi/2\) a second turning point occurs and the universe reaches its maximal size. This is how the universe stops expanding and starts to contract, regarded as the Big Crunch singularity [71]. Figures 10 (a) and (b) show the behaviour of the scale factor and Hubble parameter for increasing bouncing parameter, \(\zeta=0.2,0.4,0.6,0.8\). The oscillatory behaviour of the scale factor can be seen, and the scale factor and Hubble parameter show symmetric curves. The deceleration parameter is
\[q(t)=-1+\frac{1}{2}\sec^{2}(\zeta t). \tag{38}\]
Figure 10 (c) shows that the range of the deceleration parameter includes positive values; therefore, the oscillating bouncing model presents a decelerating universe for the bouncing parameters \(\zeta=0.2,0.4,0.6,0.8\). The values of the deceleration parameter are greater than \(-1\) (\(q>-1\)); for \(q\) in \([-1,0)\) the universe is in an accelerated era, so part of the range of \(q\) falls in the accelerated phase.
The expressions for the energy density, pressure and EoS parameter are as follows
\[\rho(t)=12M_{p}^{2}\zeta^{2}\cot^{2}(\zeta t)+96\gamma\zeta^{4}\cot^{4}(\zeta t), \tag{39}\]
\[p(t)=-12M_{p}^{2}\zeta^{2}\cot^{2}(\zeta t)-96\gamma\zeta^{4}\cot^{4}(\zeta t)+4\zeta^{2}\bigg{(}M_{p}^{2}+16\gamma\zeta^{2}\cot^{2}(\zeta t)\bigg{)}\csc^{2}(\zeta t), \tag{40}\]
\[w(t)=\frac{\sec^{2}(\zeta t)\left[M_{p}^{2}+16\gamma\zeta^{2}\cot^{2}(\zeta t)\right]}{3\left[M_{p}^{2}+8\gamma\zeta^{2}\cot^{2}(\zeta t)\right]}-1. \tag{41}\]
It can be observed from Eq. (39) and Figure 11(a) that the energy density is a positive quantity and depends largely on the bouncing parameter \(\zeta\), while Figure 11 (b) shows the behaviour of the pressure against cosmic time. It can be noticed from Figure 12 that the EoS parameter \(w(t)\) is greater than \(-1\), which corresponds to a non-phantom regime. The variation of the EoS parameter \(w(t)\) with the GB coupling parameter \(\gamma\) fixed is presented in Figure 12 (a), where prominent effects can be seen. The variation of \(w(t)\) with the bouncing parameter held constant is shown in Figure 12 (b); it can be observed that varying the GB coupling parameter does not leave an impressive imprint on the EoS parameter \(w(t)\).
Figure 10: Evolution of \(a(t)\), \(H(t)\) and \(q(t)\) against \(t\) for different values of \(\zeta\).
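Because the signs in expressions such as Eq. (38) are easy to get wrong, a symbolic re-derivation of \(H(t)\) and \(q(t)\) from the scale factor of Eq. (36) provides a useful check. The sketch below is our own verification aid, not part of the original analysis.

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
zeta = sp.Symbol('zeta', positive=True)

a = sp.sin(zeta*t)**2                      # Eq. (36)
H = sp.simplify(sp.diff(a, t)/a)           # 2*zeta*cot(zeta*t), Eq. (37)
q = sp.simplify(-1 - sp.diff(H, t)/H**2)   # Eq. (38)
print(H)
print(sp.trigsimp(q))                      # equivalent to -1 + sec(zeta*t)**2/2
```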
## IV Energy conditions
In GR and modified theories of gravitation, when it is impractical to specify the matter content explicitly, one alternative is to explore the energy conditions. Any reasonable matter content will satisfy these conditions. These constraints are not a physical property of the system; rather, they are mathematically imposed conditions. Moreover, the energy conditions provide criteria about the state of matter and its common properties. They also carry information about the well-established non-gravitational fields in physics, and are sufficient to eliminate various unphysical solutions of the field equations [72]. For a perfect fluid,
\[T_{\alpha\beta}=(\rho+p)u_{\alpha}u_{\beta}+g_{\alpha\beta}p. \tag{42}\]
where \(\rho\) is the energy density, \(p\) is the pressure and \(u_{\alpha}\) is the four-velocity. These constraints are as follows
\(\bullet\) Weak energy condition (WEC): \(\implies\)\(\rho(t)\geq 0\) and \(\rho(t)+p(t)\geq 0\).
\(\bullet\) Null energy condition (NEC): \(\implies\)\(\rho(t)+p(t)\geq 0\).
\(\bullet\) Dominant energy condition (DEC): \(\implies\)\(\rho(t)\geq 0\) and \(\rho(t)\pm p(t)\geq 0\).

\(\bullet\) Strong energy condition (SEC): \(\implies\)\(\rho(t)+3p(t)\geq 0\) and \(\rho(t)+p(t)\geq 0\).
Figure 11: The left plot (a) corresponds to the behaviour of \(\rho(t)\), whereas the right plot (b) depicts the behaviour of \(p(t)\). Both graphs are plotted versus \(t\) for the values \(\zeta=0.5,1,1.5,2\).

Figure 12: The left plot (a) corresponds to the behaviour of \(w(t)\) against \(t\) when \(\gamma=0\) and \(\zeta=2,4,6,8\), whereas plot (b) depicts the behaviour of \(w(t)\) against \(t\) when \(\zeta=1\) and \(\gamma=2,4,6,8\).

Since Model A evolves in the phantom region, some of the energy conditions are necessarily violated. The energy conditions for the symmetric model are plotted in Figure 13 (a). \(\rho\) is positive for all values of cosmic time \(t\), and there is no singularity near the bouncing epoch (\(t=0\) is the bouncing point) for the symmetric bounce; the curves are symmetric in nature. In the symmetric bouncing scenario, near the bouncing point, \(\rho+p\) and \(\rho+3p\) take negative values, which shows that the model evolves in the phantom region. The energy conditions for the matter bounce are plotted in Figure 13 (b). In the present matter bouncing scenario, near the bouncing point, \(\rho+p\) and \(\rho+3p\) again show negative ranges, which indicates that this model also lies in the phantom region. Figure 14 shows the energy conditions of the super bounce scenario. It can be observed from Figure 14 (c) that the weak and strong energy conditions are not satisfied; this violation implies that the super bounce model likewise evolves in the phantom era. The energy conditions of the oscillating bouncing model are plotted in Figure 14 (d), where it can be seen that the null and dominant energy conditions are satisfied but the strong energy condition is violated; this leads us to a non-phantom phase.
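These statements can also be checked numerically along any of the bouncing solutions. The sketch below is a minimal illustration for the matter bounce (Model B), with arbitrarily chosen values \(M_{p}=1\), \(\alpha=a_{0}=1\) and \(\gamma=0.1\) (assumptions of ours, not the figure values); it confirms that \(\rho\geq 0\) everywhere while \(\rho+p\) and \(\rho+3p\) turn negative near the bounce.

```python
import numpy as np

# Illustrative values for the matter bounce (Model B), with M_p = 1.
Mp, alpha, a0, gam = 1.0, 1.0, 1.0, 0.1

t = np.linspace(-5, 5, 1001)
H = alpha**2*t/(a0 + alpha**2*t**2)                     # Eq. (25)
Hdot = (alpha**2*a0 - alpha**4*t**2)/(a0 + alpha**2*t**2)**2
rho = 3*Mp**2*H**2 + 6*gam*H**4                         # Eq. (14)
p = -rho - 2*Hdot*(4*gam*H**2 + Mp**2)                  # Eq. (15)

near = np.abs(t) < 0.5                  # neighbourhood of the bounce
print(np.all(rho >= 0))                 # True everywhere
print(np.all((rho + p)[near] <= 0))     # True: NEC violated near the bounce
print(np.all((rho + 3*p)[near] <= 0))   # True: SEC violated near the bounce
```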
## V Cosmography of bouncing models
The need of the hour is a model-independent approach to characterize the dark energy behaviour, because of the degeneracy among cosmological models. This approach relies only on observational assumptions about cosmological quantities. The foundation of the standard cosmographic technique is the Taylor series expansion of observables; these observables can be compared to data, and the results of this technique are independent of the EoS parameter. Therefore, cosmography is a very powerful tool to break the degeneracy between cosmological models and a widely adopted technique to recognize the dynamics of the universe. Alam et al. [73] introduced the new cosmological tool \((r,s)\), known as the statefinder. The geometrical interpretation of this pair permits us to specify the properties of dark energy in a model-independent approach. \(r\) and \(s\) are dimensionless quantities assembled only from the scale factor and its time derivatives. In the background of FRW cosmology, the deceleration, jerk and snap parameters have been calculated, and the results are compatible with observational data [74].
Figure 13: The plot (a) corresponds to the energy conditions of Model A, while plot (b) represents the energy conditions of Model B.

Figure 14: The plot (c) corresponds to the energy conditions of Model C, while plot (d) represents the energy conditions of Model D.
The Taylor series of the scale factor around the present era is
\[a(t)=a(t_{0})+\frac{1}{1!}\frac{\partial a(t_{0})}{\partial t}(t-t_{0})+\frac{1}{2!}\frac{\partial^{2}a(t_{0})}{\partial t^{2}}(t-t_{0})^{2}+\frac{1}{3!}\frac{\partial^{3}a(t_{0})}{\partial t^{3}}(t-t_{0})^{3}+\cdots, \tag{43}\]
where \(t_{0}\) denotes the present cosmic time. The coefficients in the above expansion are known as cosmographic coefficients; they offer a better geometrical understanding of the universe. The set of these coefficients, involving derivatives of the scale factor \(a(t)\) at any cosmic time \(t\), is expressed as [75]
\[H(t)=\frac{\dot{a}}{a},\quad q(t)=-\frac{\ddot{a}}{aH^{2}},\quad j(t)=\frac{\dddot{a}}{aH^{3}},\quad s(t)=\frac{\ddddot{a}}{aH^{4}}. \tag{44}\]
where \(j(t)\) and \(s(t)\) are the jerk and snap parameters, respectively. \(j\) and \(s\) are also known as the statefinder pair, denoted \((j,s)\) or \((r,s)\). This pair is considered to be a better tool for the geometrical understanding of models. Different combinations of the statefinders represent various dark energy models, as presented in Table 1.
At very high redshift \(z\) there is uncertainty in the observational data, so the values of these statefinder pairs are not definite. In Table 2, we have tested the bouncing models against these parameters: their values have been calculated at different bouncing points from the analytical expressions. On account of high-redshift supernova observations and other data sets, the present-day values of the deceleration, jerk and snap parameters are \(q_{0}=-0.81\pm 0.14\), \(j_{0}=2.16^{+0.81}_{-0.75}\) and \(s_{0}=-0.22^{+0.18}_{-0.18}\), respectively [76; 77; 78]. All values of the deceleration parameter are negative, indicating an accelerated expansion phase. The cosmographic constraints in Table 2 are motivated by refs. [79; 80]. Summarizing the behaviour of these parameters: for the symmetric bounce, the jerk parameter evolves from huge positive values towards the initial epoch, whereas the snap parameter evolves from \(-1\) towards \(0\) at past times. The jerk parameter for the matter bounce evolves from large negative values to \(-1\), while the snap parameter evolves between \(0\) and \(1\). Further, the jerk parameter for the super bounce evolves from large positive values to \(0.5\), while the snap parameter evolves around \(-1\). Moreover, the oscillatory jerk parameter shows decreasing behaviour while the snap parameter evolves from \(0\) to \(1.5\). For larger values of cosmic time \(t\), Model A (symmetric bounce) shows behaviour similar to \(\Lambda\)CDM \((1,0)\), while the statefinder values for Models B and C are \((0,\frac{2}{3})\) and \((\frac{3}{8},\frac{1}{6})\), respectively. For Model D the statefinder is \((0,\frac{1}{3})\) as cosmic time vanishes. We have thus analyzed the dynamics of the cosmographic parameters for all values of cosmic time.
In the remaining part of this section we study the cosmological parameters in terms of redshift \(z\) for a clearer picture. Here we have expressed the Hubble parameter \(H(z)\) of each model in terms of redshift by using the relation \(z+1=\frac{1}{a}\) in equations (17), (24), (30) and (36) [81].
\[H(z) = 2\sqrt{\lambda}\sqrt{\log\left(\frac{1}{1+z}\right)}, \tag{45}\]
\[H(z) = \alpha(z+1)\sqrt{1-a_{0}(z+1)^{2}}, \tag{46}\]
\[H(z) = 2n(z+1)^{\frac{1}{2n}}\beta^{\frac{1}{2n}}\left(1-a_{0}(z+1)\right)^{\frac{2n-1}{2n}}, \tag{47}\]
\[H(z) = 2\zeta\cot\left(\sin^{-1}\left(\frac{1}{1+z}\right)\right). \tag{48}\]
To understand the dynamics of the universe, we can obtain the deceleration parameter \(q(z)\), the jerk parameter \(j(z)\), the snap parameter
\begin{table}
\begin{tabular}{||c|c|c||} \hline Dark Energy Models & j & s \\ \hline \(\Lambda\)CDM & 1 & 0 \\ \hline SCDM & 1 & 1 \\ \hline HDE & 1 & \(\frac{2}{3}\) \\ \hline CG & \(>\)1 & \(<\)0 \\ \hline Quintessence & \(<\)1 & \(>\)0 \\ \hline Matter dominated & 1 & 0.5 \\ \hline \end{tabular}
\end{table}
Table 1: Statefinder pairs for different dark Energy models
\(s(z)\) and the lerk parameter \(l(z)\) from the expressions below [82]
\[q(z) = -1+\frac{dH(z)}{dz}\frac{(1+z)}{H(z)}, \tag{49}\]
\[j(z) = -q(z)+2q(z)^{2}+(1+z)\frac{dq(z)}{dz}, \tag{50}\]
\[s(z) = -2j(z)-3q(z)j(z)-(1+z)\frac{dj(z)}{dz}, \tag{51}\]
\[l(z) = -3s(z)-4q(z)s(z)-(1+z)\frac{ds(z)}{dz}. \tag{52}\]
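As a sketch of how the hierarchy in Eqs. (49)-(52) can be evaluated, the snippet below differentiates the matter-bounce Hubble parameter of Eq. (46) symbolically; the parameter values \(\alpha=1\), \(a_{0}=0.1\) mirror those used in Figure 16.

```python
import sympy as sp

z, alpha, a0 = sp.symbols('z alpha a_0', positive=True)

# Matter-bounce Hubble parameter, Eq. (46)
H = alpha * (z + 1) * sp.sqrt(1 - a0 * (z + 1)**2)

# Cosmographic hierarchy, Eqs. (49)-(52)
q = -1 + sp.diff(H, z) * (1 + z) / H
j = -q + 2*q**2 + (1 + z) * sp.diff(q, z)
s = -2*j - 3*q*j - (1 + z) * sp.diff(j, z)
l = -3*s - 4*q*s - (1 + z) * sp.diff(s, z)

# Numerical evaluation for the parameter choice of Figure 16
vals = {alpha: 1, a0: sp.Rational(1, 10)}
q_num = sp.lambdify(z, q.subs(vals), 'numpy')
print(q_num(0.5))   # q at z = 0.5 (negative, i.e. accelerating)
```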
The deceleration parameter \(q(z)\) for each model represents an accelerating universe. \(q(z)\) shows singular behaviour at \(z=0\) except for the matter bounce. Also, for the symmetric and oscillatory bounces a singularity appears at \(z=0\) in the jerk parameter \(j(z)\). A positive regime can be observed for the snap parameter of each model. The lerk parameter shows negative values in the case of the matter bounce, while a positive range is observed for all other models. These parameters also inform us about the past, present and future state of the universe [82]. To evaluate the nature of the dark energy models, the statefinder pairs \((j,s)\) have been plotted for each bouncing model in Figures 19 (a), (b) and 20 (c), (d). It can be seen that at large values of redshift \(z\), \((j,s)\rightarrow(1,1)\) for Model A and \((j,s)\rightarrow(3,-15)\) for Model B.
## VI Stability analysis
In this section, the stability of the bouncing models in 4D EGB gravity is discussed via the squared speed of sound method. The squared speed of sound is denoted by \(C_{s}^{2}\) and defined as \(C_{s}^{2}=\frac{dp}{dz}\times\frac{dz}{d\rho}\). In a mechanically and thermodynamically stable system, the squared speed of sound should be non-negative; the above bouncing models are therefore called stable for positive values of \(C_{s}^{2}\). For mechanical stability, \(C_{s}^{2}\) should remain between zero and one.
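A minimal numerical sketch of this criterion, assuming only that \(p\) and \(\rho\) are available on a common redshift grid (the toy profiles below are placeholders, not one of the four models):

```python
import numpy as np

def squared_speed_of_sound(z, p, rho):
    # C_s^2 = (dp/dz) * (dz/drho), evaluated by finite differences
    return np.gradient(p, z) / np.gradient(rho, z)

# Placeholder profiles for illustration only
z = np.linspace(0.01, 2.5, 200)
rho = 1.0 + z**2
p = -rho + 0.3 * z

cs2 = squared_speed_of_sound(z, p, rho)
stable = (cs2 >= 0) & (cs2 <= 1)   # mechanical-stability window
print(stable.any(), stable.all())
```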
\begin{table}
\begin{tabular}{||c|c|c|c|c||} \hline CC’s & Symmetric Bounce & Matter Bounce & Super Bounce & Oscillatory Bounce \\ \hline & -1.5 & -1 & -1.5 & -0.47945 \\ \(q(t)\) & -1.25 & -0.25 & -1.125 & -0.41062 \\ & -1.1666 & -0.1111 & -1 & -0.2659 \\ & -1.25 & -0.0625 & -0.9375 & -0.1452 \\ \hline & 2.5 & -3 & 1.5 & -0.04109 \\ \(j(t)\) & 1.75 & -0.75 & 0.84375 & -0.1787 \\ & 1.5 & -0.3333 & 0.6666 & -0.4680 \\ & 1.375 & -0.1875 & 0.5859 & -1.0601 \\ \hline & -0.25 & 0.8888 & -0.8333 & 0.35431 \\ \(s(t)\) & -0.14285 & 0.7777 & -0.38333 & 0.43148 \\ & -0.1 & 0.7272 & -0.2222 & 0.638853 \\ & -0.0769 & 0.7037 & -0.1369 & 1.46135 \\ \hline \end{tabular}
\end{table}
Table 2: Variation of \(q(t),j(t)\) and \(s(t)\) for different values of bouncing parameter.
Figure 16: Evolution of \(H(z)\), \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\) for bouncing model B against \(z\) when \(a_{0}=0.1\) and \(\alpha=1\).
Figure 17: Evolution of \(H(z)\), \(q(z)\), \(j(z)\), \(s(z)\) and \(l(z)\) for bouncing model C against \(z\) when \(\beta=1\), \(n=2\) and \(a_{0}=0.1\).
Figure 19: The left plot shows the evolution of \((j,s)\) for Model A when \(\lambda=1\), while the right plot shows the evolution of \((j,s)\) for Model B when \(\alpha=1\).
Figure 20: The left plot shows the evolution of \((j,s)\) for Model C when \(n=2\), \(a_{0}=0.1\) and \(\beta=1\), while the right plot shows the evolution of \((j,s)\) for Model D when \(\zeta=1\).
The expressions of squared speed of sound \(C_{s}^{2}\) in terms of redshift for Model A, B, C and D are as follows
\[C_{s}^{2}(z) = \frac{192\gamma\lambda^{2}\log(\frac{1}{1+z})}{1+z}-\frac{1+z}{12M_ {p}^{2}\lambda}\bigg{[}\frac{12M_{p}^{2}\lambda}{1+z}-\frac{32\gamma\lambda^{3/2 }}{(1+z)^{2}\sqrt{\log(\frac{1}{1+z})}} \tag{53}\] \[+ \frac{192\gamma\alpha^{2}\log(\frac{1}{1+z})}{1+z}+\frac{\sqrt{ \alpha}(M_{p}^{2}+16\gamma\alpha\log\frac{1}{1+z})}{(1+z)^{2}\log(\frac{1}{1+z })^{3/2}}-\frac{2\sqrt{\alpha}(M_{p}^{2}+16\gamma\alpha\log(\frac{1}{1+z}))}{( 1+z)^{2}\sqrt{\log(\frac{1}{1+z})}}\bigg{]},\] \[C_{s}^{2}(z) = 6M_{p}^{2}(1+z)\alpha^{2}(1-(1+z)^{2}a_{0})-24(1+z)^{5}\gamma \alpha^{4}a_{0}(1-(1+z)^{2}a_{0})\] (54) \[+ 24(1+z)^{3}\gamma\alpha^{4}(-1+(1+z)^{2}a_{0})^{2}+\frac{1}{3}M_ {p}^{2}(1+z)^{4}\alpha^{3}a_{0}\bigg{[}\frac{1}{(1-(1+z)^{2}a_{0})^{3/2}}\] \[\times -3M_{p}^{2}(1+z)^{2}\alpha a_{0}+3M_{p}^{2}\alpha(1-(1+z)^{2}a_{0 })-12(1+z)^{4}\gamma\alpha^{3}a_{0}(1\] \[- (1+z)^{2}a_{0})+12(1+z)^{2}\gamma\alpha^{3}(-1+(1+z)^{2}a_{0})^{2 }+\frac{(8\gamma(\alpha-2(1+z)^{2}\alpha a_{0})^{2})}{\sqrt{1-(1-z)^{2}a_{0}}}\] \[- (a_{0}(-3+2(1+z)^{2}a_{0})(-M_{p}^{2}-4(1+z)^{2}\gamma\alpha^{2}+ 4(1+z)^{4}\gamma\alpha^{2}a_{0}))\bigg{]},\] \[C_{s}^{2}(z) = -\frac{1}{6n}\bigg{[}72M_{p}^{2}n^{2}(1+z)^{-1+1/n}\beta^{1/n}(1- (1+z)a_{0})^{2-1/n}+16n^{3}(1+z)^{-3+5/2n}\] (55) \[\times \gamma\beta^{5/2n}(1-(1+z)a_{0})^{3}-\frac{5}{2n}(M_{p}-2M_{p}n( 1+z)a_{0})^{2}+6(-1+2n)\] \[\times (1+z)^{-2+1/2n}\beta^{1/2n}(1-(1+z)a_{0})^{-1-3/2n}(-M_{p}^{2}(1- (1+z)a_{0})^{1/n})\] \[- 16n^{2}(1+z)^{1/n}\gamma\beta^{1/n}(-1+(1+z)a_{0})^{2}\bigg{]},\] \[C_{s}^{2}(z) = \frac{1}{6\zeta M_{p}^{2}(z+1)^{3}\left(\frac{z(z+2)}{(z+1)^{2}} \right)^{3/2}}\bigg{[}-144\zeta^{3}M_{p}^{4}z(z+2)\sqrt{\frac{z(z+2)}{(z+1)^{ 2}}}-M_{p}^{2}(z+1)\] (56) \[\times \bigg{(}768\gamma\zeta^{4}z^{3}+2304\gamma\zeta^{4}z^{2}+1536 \gamma\zeta^{4}z-1\bigg{)}+16\gamma\zeta^{2}z\left(z^{2}+3z+2\right)\bigg{]}.\]
Figures 21 (a) and (b) show the stability analysis for Model A (symmetric bounce) and Model B (matter bounce). It can be seen from Figure 21 (a) that, for different values of the bouncing parameter, the squared speed of sound takes negative values, which predicts unstable behaviour of Model A. The stable behaviour of the matter bounce model can be observed from Figure 21 (b): as the bouncing parameter \(\alpha\) increases, the squared speed of sound \(C_{s}^{2}\) gives positive values, whereas the opposite result is observed for the matter bounce scenario in \(f(Q,T)\) gravity [81]. Models C and D do not satisfy the stability conditions. The GB coupling parameter \(\gamma\) does not contribute much to the dynamics of the stability analysis; its contribution shows up only when it is taken to be very large.
## VII Observational constraints of bouncing models
In this section, we find the best-fit values of the bouncing model parameters (\(a_{0}\), \(\lambda\), \(n\)). For this purpose, we constrain the parameters with observational data sets; the parameter values have been estimated using the least-squares method [83]. The calculated values of the bouncing parameters for Models B, C and D are displayed in Tables 3-5, respectively, and Figures 23-25 show the corresponding error-bar plots. It is not possible to produce an error-bar plot for Model A, because its Hubble parameter is \(H(z)=2\sqrt{\lambda}\sqrt{\log(\frac{1}{1+z})}\), and the expression \(\sqrt{\log(\frac{1}{1+z})}\) gives real values only for negative redshift \(z\), whereas the redshift values of the Hubble data sets range over \(0<z<2.5\). The model curves have been compared with the observational data sets and with the \(\Lambda\)CDM model, defined as \(H(z)=H_{0}\left[\omega_{m}(1+z)^{3}+\omega_{k}(1+z)^{2}+\omega_{r}(1+z)^{4}+\omega_{\Lambda}\right]^{1/2}\), where \(\sum\omega_{i}=1\) and the \(\omega_{i}\) are free parameters of the model [84]. At given values of redshift \(z\), the Hubble parameter is measured by two methods.
\(\bullet\) Extraction of H(z) from differential ages of galaxies (DA method)
\(\bullet\) Estimations of H(z) from line of sight baryon acoustic oscillations (BAO).
The Hubble data points have been taken from the previous literature [85; 86; 87; 88; 89; 90; 91; 92], as presented in Table 6.
To quantify this analysis, we have employed the reduced chi-squared method on the \(H(z)\) data set. It is defined as a weighted sum of squared deviations and is mathematically expressed as
\[\mathcal{X}_{HD}^{2}(p_{i})=\sum_{k=1}^{N_{H}}\frac{\left[H_{th}(p_{i},z_{k})- H_{obs}(z_{k})\right]^{2}}{\sigma_{H_{k}}^{2}}. \tag{57}\]
where \(H_{th}(p_{i},z_{k})\) and \(H_{obs}(z_{k})\) denote the theoretical and observed values of the Hubble parameter, and \(p_{i}\) denotes the model parameters (Model B has two parameters, \(\alpha\) and \(a_{0}\); Model C has three parameters, \(\beta\), \(a_{0}\) and \(n\); Model D has a single parameter, \(\zeta\)). \(\sigma_{H_{k}}^{2}\) is the uncertainty in the observed value of the Hubble parameter at redshift \(z_{k}\), and \(N_{H}\) is the number of points in the Hubble data set. \(HD\) stands for the Hubble data sets from the DA and BAO methods. Tables 3-5 show the reduced chi-squared values and model parameters for Models B, C and D.
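For concreteness, the fit can be reproduced along the following lines; this is a sketch, not the authors' exact pipeline, and the five data points below are placeholders standing in for the full \(H(z)\) compilation of Table 6 (only the \(z=1.965\) entry is taken from the table).

```python
import numpy as np
from scipy.optimize import minimize

def H_model_B(z, alpha, a0):
    # Matter-bounce Hubble parameter, Eq. (46)
    return alpha * (1 + z) * np.sqrt(1 - a0 * (1 + z)**2)

def chi2(params, z, H_obs, sigma_H):
    # Weighted sum of squared deviations, Eq. (57)
    H_th = H_model_B(z, *params)
    return np.sum(((H_th - H_obs) / sigma_H)**2)

# Placeholder data; replace with the full DA/BAO compilation of Table 6
z_obs   = np.array([0.07, 0.40, 0.90, 1.30, 1.965])
H_obs   = np.array([69.0, 82.0, 110.0, 168.0, 186.5])
sigma_H = np.array([19.6, 8.8, 12.0, 17.0, 50.4])

res = minimize(chi2, x0=[60.0, -0.03], args=(z_obs, H_obs, sigma_H),
               method='Nelder-Mead')
alpha_fit, a0_fit = res.x
chi2_red = res.fun / (len(z_obs) - len(res.x))   # chi^2 / (N - k)
print(alpha_fit, a0_fit, chi2_red)
```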
## VIII Results and summary
The dynamics of the bouncing models in 4D EGB gravity are summarized as follows.
\(\bullet\) The four non-singular bouncing models have been investigated in the framework of 4D EGB gravity, leading us
Figure 21: The plot (a) corresponds to the stability analysis of Model A for different values of \(\lambda\) with \(\gamma=2\), whereas plot (b) represents the stability analysis of Model B for different values of \(\alpha\) with \(\gamma=2\) and \(a_{0}=0.1\).
Figure 22: The left plot (c) shows the stability analysis of Model C for different values of \(\beta\) with \(\gamma=2\), \(n=1\) and \(a_{0}=0.1\), whereas the stability analysis of Model D for different values of the bouncing parameter with \(\gamma=2\), \(n=1\) and \(a_{0}=0.1\) is presented in plot (d).
to late time cosmic acceleration.
\(\bullet\) The evolution of the scale factor, Hubble parameter, deceleration parameter, energy density, pressure and EoS parameter has been studied in detail. The kinematics of these parameters are greatly affected by the bouncing parameters, while the GB coupling parameter contributes little to the dynamics of the EoS parameter. Only by choosing \(\gamma=1\times 10^{10}\), \(\gamma=1\times 10^{120}\) or higher can we observe prominent contributions of 4D EGB gravity. The behaviour near the bounce is primarily dependent on the bouncing parameter.
\(\bullet\) The bouncing scale factors specify that during classical times, the cosmos is in a contracting period followed by a bounce, and then it is in an accelerating period at late times.
\(\bullet\) All deceleration parameter values exhibit a negative range and indicate an accelerated expansion era, except for the oscillatory scale factor, which fails to do so and gives rise to a decelerating phase of the universe.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Hubble data sets (HD) & \(\mathcal{X}_{min}^{2}\) & parameters \\ \hline DA & 4.13944 & \(\zeta=42.4589\) \\ \hline BAO & 7.56656 & \(\zeta=39.6013\) \\ \hline DA+BAO & 5.37551 & \(\zeta=39.7871\) \\ \hline \end{tabular}
\end{table}
Table 5: Summary of statistical analysis of model D.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Hubble data sets (HD) & \(\mathcal{X}_{min}^{2}\) & parameters \\ \hline DA & 0.639637 & \(\beta=71.2983\), \(a_{0}=-69.4779\),\(n=0.4362\) \\ \hline BAO & 0.88322 & \(\beta=52.6446\), \(a_{0}=0.2918\), \(n=0.4850\) \\ \hline DA+BAO & 0.75192 & \(\beta=53.1023\), \(a_{0}=0.2923\), \(n=0.4856\) \\ \hline \end{tabular}
\end{table}
Table 4: Summary of statistical analysis of model C.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Hubble data sets (HD) & \(\mathcal{X}_{min}^{2}\) & parameters \\ \hline DA & 0.60873 & \(\alpha=61.3453\), \(a_{0}=-0.0122\) \\ \hline BAO & 0.84247 & \(\alpha=57.4612\), \(a_{0}=-0.0325\) \\ \hline DA+BAO & 0.73011 & \(\alpha=57.7235\), \(a_{0}=-0.0312\) \\ \hline \end{tabular}
\end{table}
Table 3: Summary of statistical analysis of model B.
Figure 23: Error bar plots of \(H(z)\) datasets for matter bounce (model B).
\(\bullet\) The EoS parameter for all models represents the phantom phase (\(w<-1\)), while the oscillatory bouncing scale factor predicts the non-phantom regime (\(w>-1\)).
\(\bullet\) The violation of the null energy condition (\(\rho+p\)) and the strong energy condition (\(\rho+3p\)) near the bounce region has been plotted. This is the most appropriate state for achieving a non-singular bounce; moreover, the violation of the energy conditions is a clear sign that the EoS parameter evolves in the phantom region \(w<-1\).
\(\bullet\) The bouncing models have been validated through certain cosmographic tests. The jerk, snap and lerk parameters have been derived in terms of cosmic time and redshift; the dynamics of these parameters have been presented in tabular form against cosmic time and plotted against redshift. The statefinder diagnostic shows that the symmetric bounce exhibits \(\Lambda\)CDM behaviour at large values of time, while the statefinder values of the matter bounce, super bounce and oscillatory bounce are \(\left(0,\dfrac{2}{3}\right)\), \(\left(\dfrac{3}{8},\dfrac{1}{6}\right)\) as \(t\rightarrow\infty\) and \(\left(0,\dfrac{1}{3}\right)\) as \(t\to 0\), respectively. It has also been found that the higher derivatives of \(H(z)\) represent an accelerating cosmos and drive late-time cosmic acceleration. In terms of redshift, the statefinder pairs are \((1,1)\) and \((3,-15)\) for Model A and Model B at large redshift, respectively, while for Models C and D \((j,s)\to 0\).
\(\bullet\) The stability of the bouncing models has been checked by applying the squared speed of sound method. The most stable model is found to be the matter bounce model.
\(\bullet\) To find the best-fit values, the bouncing models have been constrained with the DA and BAO Hubble data sets. We have calculated the parameter values by applying the least-squares fitting method [83]. To quantify this analysis, we have employed the reduced chi-squared method on the \(H(z)\) data sets for Models B, C and D, and the statistical results have been
Figure 24: Error bar plots of \(H(z)\) datasets for power law bounce (model C).
Figure 25: Error bar plots of \(H(z)\) datasets for oscillatory bounce (model D).
presented in Tables 3-5, respectively. These curves have also been plotted together with the observational data sets and the \(\Lambda\)CDM model for comparison.
## IX Data Availability Statement
No data are associated with this manuscript.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(H(z)\) & \(z\) & \(\sigma_{H}\) & \(References\) \\ \hline
186.5 & 1.965 & 50.4 & [88] \\ \hline \end{tabular}
\end{table}
Table 6: Data Set of Hubble parameter \(H(z)\) with the standard error \(\pm\sigma_{H}\) from DA method.
## X Conflict of interest statement
The authors have no competing interests to declare that are relevant to the content of this article.
|
2301.13360 | Skeleton-based Human Action Recognition via Convolutional Neural
Networks (CNN) | Recently, there has been a remarkable increase in the interest towards
skeleton-based action recognition within the research community, owing to its
various advantageous features, including computational efficiency,
representative features, and illumination invariance. Despite this, researchers
continue to explore and investigate the most optimal way to represent human
actions through skeleton representation and the extracted features. As a
result, the growth and availability of human action recognition datasets have
risen substantially. In addition, deep learning-based algorithms have gained
widespread popularity due to the remarkable advancements in various computer
vision tasks. Most state-of-the-art contributions in skeleton-based action
recognition incorporate a Graph Neural Network (GCN) architecture for
representing the human body and extracting features. Our research demonstrates
that Convolutional Neural Networks (CNNs) can attain comparable results to GCN,
provided that the proper training techniques, augmentations, and optimizers are
applied. Our approach has been rigorously validated, and we have achieved a
score of 95% on the NTU-60 dataset | Ayman Ali, Ekkasit Pinyoanuntapong, Pu Wang, Mohsen Dorodchi | 2023-01-31T01:26:17Z | http://arxiv.org/abs/2301.13360v1 | # Skeleton-based Human Action Recognition via Convolutional Neural Networks (CNN)
###### Abstract
Recently, there has been a remarkable increase in the interest towards skeleton-based action recognition within the research community, owing to its various advantageous features, including computational efficiency, representative features, and illumination invariance. Despite this, researchers continue to explore and investigate the most optimal way to represent human actions through skeleton representation and the extracted features. As a result, the growth and availability of human action recognition datasets have risen substantially. In addition, deep learning-based algorithms have gained widespread popularity due to the remarkable advancements in various computer vision tasks. Most state-of-the-art contributions in skeleton-based action recognition incorporate a Graph Neural Network (GCN) architecture for representing the human body and extracting features. Our research demonstrates that Convolutional Neural Networks (CNNs) can attain comparable results to GCN, provided that the proper training techniques, augmentations, and optimizers are applied. Our approach has been rigorously validated, and we have achieved a score of \(95\%\) on the NTU-60 dataset.
## I Introduction
Recently, the research community has been witnessing a surging interest in skeleton-based action recognition, owing to its many advantageous attributes such as computational efficiency, informative features, and immunity to changes in lighting conditions. The field continues to explore the optimal way of representing human actions through skeleton representation and feature extraction. This has resulted in a marked increase in the number of human action recognition datasets. On the other hand, deep learning-based algorithms have gained immense popularity as a result of their effectiveness in various computer vision tasks. Most cutting-edge contributions in the realm of skeleton-based action recognition adopt Graph Neural Network (GCN) architecture for representing the human body's articulated structure and extracting features. However, our findings demonstrate that Convolutional Neural Networks (CNNs) can deliver comparable results to GCN if proper training methods, data augmentation techniques, and the appropriate optimizer are utilized.
The prospect of equipping machines with human-like visual capabilities has garnered significant attention among researchers, inspiring the development of various technologies, algorithms, and techniques to facilitate this task. To date, numerous computer vision tasks have been successfully addressed, including image classification [1] and object detection [2]. Among the active research areas in computer vision is human activity analysis in videos, including human action detection, recognition, and prediction.
The human gesture, which is characterized by a shorter duration and less complex movements performed by a limited number of body parts, represents the first category of human action. Examples of human gestures include hand waving and head nodding. On the other hand, human action encompasses longer-duration movements involving more body parts. A sequence comprising multiple actions is referred to as human activity. Finally, human interaction covers interactions with the surrounding environment, including both human-to-human and human-to-object interactions. The presence of multiple humans or interactions with various objects increases the complexity of motion analysis, which can be further complicated by online or offline human behavior analysis.
In the past, capturing human actions through human-performance systems often necessitated the application of markers on the subjects' bodies and the use of distinctive attire. Despite overcoming these limitations, the high cost of cameras remained a hindrance to widespread adoption [3]. However, recent advancements have led to the development of cost-effective contactless sensing devices, such as Microsoft Kinect, Intel Realsense, and Doppler radar. Historically, the RGB modality in human action capture has been challenged by various factors, including illumination, occlusions, background clutter, frame rate, viewpoint, and biometric variations. By contrast, RGB-D sensors have mitigated some of these difficulties, particularly with regard to illumination, and provide a crucial advantage by enabling the generation of 3D structural information of the scene. As a result, estimating 3D human skeleton joints has become a relatively straightforward process in the RGB-D modality.
For many years, machine learning algorithms have been the primary technologies incorporated to address various computer vision problems, including action recognition. The conventional approach to human motion analysis involves capturing and representing spatiotemporal information, which enhances the accuracy and robustness of the analysis. Typically, features are extracted manually and incorporated into classical machine learning algorithms to perform different tasks. The research community has explored various feature representations, including joint coordinates [4], the center of gravity [5], the angle between skeleton joints [6], motion velocity [7], and
co-occurrence features [8]. The selection of an appropriate algorithm also plays a vital role in the machine learning era. Many algorithms have been utilized, including Support Vector Machine (SVM) [9], Linear Discriminant Analysis (LDA) [10], Naive Bayes Nearest Neighbor [11], Logistic Regression [12], and KNN [13]. However, the generalization of machine learning algorithms is challenging and requires significant effort in feature engineering.
Recent advancements in deep learning-based techniques have demonstrated superior performance in various computer vision problems, such as image classification [1], object detection [2], action recognition [14], and action detection [15]. The increasing interest in skeleton-based representation has been driven by the abundant and discriminative features that can be derived from skeleton joints. For instance, features such as Skeleton Map [16], Joint Trajectory Map [17], Joint Distance Map [18], and many others can be extracted from the skeleton joint data alone. Consequently, the utilization of deep-learning algorithms has emerged as a prevalent approach for skeleton-based human action recognition. Our contributions are summarized as follows:
* Construct an easy-to-integrate and modular Convolutional Neural Network (CNN) for the action recognition task, which attains results comparable to the State-of-the-Art (SOTA) methods.
* Despite the prevalence of graph neural network-based methods in the SOTA contributions to the action recognition task, we demonstrate that CNNs can attain comparable results by implementing various training techniques.
* Our results indicate that incorporating a diverse set of augmentation techniques leads to an improvement in the generalization and robustness of the model.
* Our findings reveal that utilizing a margin-based cosine loss function instead of the conventional cross-entropy loss leads to a significant enhancement in performance.
## II **Related Work**
Deep-learning approaches have demonstrated superiority over conventional machine learning algorithms in various computer vision tasks, including image classification as demonstrated in [1] on the ImageNet dataset, object detection as introduced by Ren et al. [2] with their Faster R-CNN framework, action recognition as recently revisited by Duan et al. [14], and action detection as proposed by Wang et al. [15]. In recent years, skeleton-based representations have garnered increasing attention due to the wealth of rich and discriminative features that can be obtained from the skeleton joints. Examples of such representations include the Skeleton Map [16], Joint Trajectory Map [17], Joint Distance Map [18], among others, all of which are based solely on features extracted from the skeleton joints data.
#### Ii-1 **Convolution Neural Network - CNN Approaches**
Wang _et al._[17] proposed the Joint Trajectory Map (JTM), a representation that captures the spatiotemporal information of a sequence in the form of 2D images. This work emphasizes human motion magnitude and speed as the core features, represented by the saturation value in the HSV (Hue-Saturation-Value) color space, with higher motion magnitude and speed yielding a higher saturation value. Du _et al._[16] introduced a method for encoding the spatiotemporal features of an action sequence into an image matrix. This representation is constructed by vertically encoding the skeleton's joint coordinates into the RGB channels and horizontally encoding the sequence frames. The encoded information is then quantified, normalized, and transformed into an image for use with a CNN classification network. In addition to the skeleton map representation proposed by Du _et al._, Li _et al._[19] proposed a two-stream CNN-based network that leverages the skeleton map. Li _et al._[18] extracted discriminative features from the pair-wise distances between skeleton joints in a human action sequence to construct a Joint Distance Map (JDM), which maps the skeleton sequence into images. Bilinear interpolation was applied to address the issue of variable sequence duration. Li _et al._[8] embarked on the problem of joint co-occurrence by proposing an end-to-end hierarchical framework that facilitates better feature learning, gradually aggregating point-level features into global co-occurrence features. Ke _et al._[20] proposed a translation, scale, and rotation invariant body part vector-based representation, using geometrical features of the skeleton to generate two sets of features: cosine distance and normalized magnitude. The human skeleton joints are grouped into five parts: trunk, right arm, left arm, right leg, and left leg. Both cosine distance and normalized magnitude features are computed within the body and skeleton, yielding ten feature vectors fed into a CNN for further feature extraction and training.
#### Ii-2 **Recurrent Neural Network - RNN Approaches**
The recognition of human actions is a problem of spatiotemporal representation. To capture the discriminative features, researchers have proposed utilizing conventional RNN networks. Yet, the vanishing gradient problem has necessitated adopting memory gating techniques, such as long short-term memory (LSTM) and gated recurrent units (GRU). To address this issue, Du _et al._[21] proposed a hierarchical bidirectional recurrent neural network (BRNN) that models the long-term temporal sequence through an end-to-end approach. To facilitate better feature representation, the human body is divided into five parts and processed through five different BRNN subnets. The outputs are fused and classified in higher layers before being fed into the final BRNN. To further improve the system's robustness, Du _et al._[22] implemented random rotation transformation during preprocessing and scale transformation to account for varying human sizes. Salient motion patterns have been leveraged as essential features in various human action-related tasks. However, LSTM struggles to capture the spatiotemporal dynamics. To resolve this, Veeriah _et al._[23] proposed a differential gating for LSTM-RNN networks that quantifies the salient motion between frames through the derivative of state (DoS). Skeleton-based action recognition presents several challenges, including the intra-frame joint spatiotemporal information of all frames in a sequence. To
address these challenges, Song _et al._[24] proposed an end-to-end spatiotemporal attention model that adaptively learns intra-frame and inter-frame dependencies through RNN and LSTM. Human skeleton data contains discriminative features such as acceleration, angles, spatial coordinates, orientation, and velocity. Zhang _et al._[25] evaluated eight geometric features on a 3-layer LSTM network and found that computing the distance between joints and selected lines outperformed the other features.
#### Ii-B3 **Graph Convolution Network - GCN Approaches**
Yan _et al._[26] introduced the initial spatiotemporal graph convolutional network (ST-GCN), which established edges between joints in both the intra-frame and inter-frame dimensions. Typically, in graph-based techniques, the convolution operation is performed over the neighboring joints within the receptive field. However, conducting a convolution over non-Euclidean data necessitates a judicious partition strategy to attain a label map. Although ST-GCN [26] achieved substantial results in the human action recognition task, it faced several limitations. The graph representation in ST-GCN was created based on the human body's physical structure, which may not be ideal for characterizing human action. For example, in an act like clapping, the relationship between the hands is crucial in classifying the action. In this case, ST-GCN fails to capture such a relationship since the two hands are not connected and are distant from each other in the kinematic tree. Additionally, certain geometric features, such as bone orientation and length, are challenging to represent in graph-based algorithms. To overcome these limitations, Shi _et al._[27] proposed an end-to-end data-driven multi-stream attention-enhanced adaptive graph convolutional network (MS-AAGCN), where the graph topology is learned and generated adaptively from the input data. In practical scenarios, occlusion is a prevalent challenge that is often unavoidable. Song _et al._[28] proposed a GCN-based model that is trained on incomplete spatiotemporal skeleton data, with intra-frame joints and inter-frame frames masked to imitate occlusion effects. Moreover, Yoon _et al._[29] incorporated noise into the skeleton data to enhance the model's robustness.
## III **Data Preprocessing**
Typically, skeleton data is represented in a camera coordinate system, leading to a diverse representation of captured sequences from different viewpoints, as illustrated in Figure 1 (A), (B), and (C). To mitigate the impact of view variation, a common approach is to transform the skeleton data into a unified coordinate system [30, 31, 32, 33]. This is typically accomplished through a sequence of geometric translation, rotation, and normalization operations, as depicted in Figure 1 (D), (E), and (F). There are various transformation strategies presented in the literature. In frame-based approaches, the transformation operation is applied to each frame in the sequence, which, however, results in the loss of some relative motions. For example, when applied to the walking action, this technique produces an effect as if walking on a treadmill. On the other hand, in sequence-based strategies, the first frame in the sequence is designated as the reference frame, and all transformations applied to the subsequent frames are relative to this reference frame. This approach provides a more realistic representation of human skeleton motions.
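A minimal sketch of the sequence-based strategy, assuming joints are stored as a (T, J, 3) array with joint 0 as the body center and joint 1 on the spine (these index conventions are our assumptions, not those of a specific dataset):

```python
import numpy as np

def sequence_based_transform(seq, center=0, spine=1):
    """Align a whole sequence to its first frame (sequence-based strategy),
    preserving the relative motion between frames."""
    out = seq - seq[0, center]            # translate: first-frame center at origin

    v = out[0, spine] - out[0, center]    # reference-frame spine direction
    v = v / (np.linalg.norm(v) + 1e-8)
    y = np.array([0.0, 1.0, 0.0])         # target "up" direction

    axis = np.cross(v, y)
    s, c = np.linalg.norm(axis), float(np.dot(v, y))
    if s > 1e-8:                          # rotate v onto y (Rodrigues' formula)
        k = axis / s
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + s * K + (1 - c) * (K @ K)
        out = out @ R.T
    return out
```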
### _Encoding Skeleton to Image_
Consider a skeleton sequence \(S\), where the \(j^{th}\) joint in the \(t^{th}\) frame is represented as \(s_{t,j}=[x_{t,j},y_{t,j},z_{t,j}]\), with \(j\in(1,...,J)\) indexing the joints in a frame and \(t\in(1,...,T)\) indexing the frames in the sequence \(S\).
Inspired by the work of Du et al. [16], we transform the raw skeleton data into a skeleton map image that preserves spatiotemporal information, as depicted in Fig. 2. Given an RGB image of dimensions \([H,W,C]\), where \(H\) represents height, \(W\) represents width, and \(C\) represents the conventional RGB image channels, we map the action sequence \(S\), represented in the dimension of \(T\times N\times 3\), to an image of dimensions \([H,W,C]\). Here, \(T\) represents the number of frames and maps to the height of the image, \(N\) represents the number of joints and maps to the width of the image, and the last dimension represents the 3D joint coordinates of the joints in a frame, which are mapped to the three channels of the image.
Since raw skeleton data may have a different value range than an image, pixel normalization is necessary to ensure that the mapped values lie in the range \(0-255\). This is accomplished by:
\[P_{t,j}=floor(255\times\frac{s_{t,j}-C_{min}}{C_{max}-C_{min}}) \tag{1}\]
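A sketch of this mapping (Eq. (1)) in NumPy; note that taking \(C_{min}\) and \(C_{max}\) per channel rather than globally is our assumption here:

```python
import numpy as np

def skeleton_to_image(seq):
    """Encode a skeleton sequence of shape (T, J, 3) as an RGB image.

    Frames map to image height, joints to width, and the x/y/z joint
    coordinates to the three color channels, each normalized to 0-255
    per Eq. (1)."""
    img = np.empty_like(seq, dtype=np.uint8)
    for c in range(3):                   # one channel per coordinate axis
        ch = seq[..., c]
        c_min, c_max = ch.min(), ch.max()
        # epsilon guards against a zero range (constant coordinate)
        img[..., c] = np.floor(255 * (ch - c_min) / (c_max - c_min + 1e-8))
    return img                           # shape (T, J, 3), dtype uint8
```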
## IV **Data Augmentation**
Data augmentation is a well-established technique utilized to enhance the performance of machine learning algorithms by diversifying the training data. This approach involves synthesizing new samples from the existing training data by employing transformations such as scaling, rotation, translation, and other deformations. Adding these transformed samples to the training set can improve the generalization performance of the machine learning algorithm, making it more resilient
Fig. 1: Action representation from the NTU-D 60 dataset: A) -45\({}^{\circ}\) skeleton visualization, B) 0\({}^{\circ}\) skeleton visualization, C) 45\({}^{\circ}\) skeleton visualization. (D, E, F) are the transformed skeletons corresponding to (A, B, C).
to variations in the input data. Data augmentation is widely applied across several deep-learning tasks, including image classification, object detection, and natural language processing.
On the other hand, skeleton-based data augmentation is used explicitly for tasks involving 3D pose information, such as 3D human pose estimation. This approach involves transforming the 3D joint positions of a skeleton to generate new pose samples. Image-based data augmentation is utilized for tasks where the input data consists of images, including image classification and object detection.
In this research, we investigated various image-based and skeleton-based techniques for data augmentation, as outlined in TABLE. Our approach is inspired by RandAugmentation [34], which randomly injects the training pipeline with predefined augmentations based on the number and magnitude of augmentations to be applied.
The **Flipping Sequence** augmentation technique, as described in [35], involves the generation of synthetic pose sequences by horizontally flipping the input pose sequences. This is achieved through the reflection of the 3D joint positions across a horizontal plane when applied to skeleton data and by flipping in both the horizontal and vertical planes when applied to image data. As a result, a new pose sequence with actions performed in the opposite direction is produced.
The **Geometric Rotation** augmentation, as discussed in [34], generates new synthetic samples by applying random rotations to the input data. This is achieved by rotating the input images or 3D objects around a fixed point, utilizing a specified angle and center of rotation.
The **Cutout** augmentation, as presented in [36], is a regularization technique utilized in deep learning. This technique involves randomly masking a portion of an image during the training process, which necessitates the model to learn to ignore these regions and focus on the remaining parts. This can enhance the model's generalization to new data, thus improving its performance. Cutout augmentation is frequently adopted in image classification tasks, where it can aid the model in recognizing objects in images, despite variations in their appearance. It can be easily implemented in most deep learning frameworks as a simple and effective regularization method for deep learning models.
The **Zoom** augmentation, as described in [34], involves rescaling the input images or 3D objects using a specified zoom factor, which determines the extent of the zooming. This augmentation method is appropriate for both image and skeleton data. Additionally, the **Shear** augmentation, as discussed in Cubuk et al. (2020) [34], is achieved through the utilization of a linear transformation that maps the x-axis or y-axis coordinates of the input data to new positions. The **Translate** augmentation, also presented in Cubuk et al. (2020) [34], involves the shifting of the x-axis, y-axis, or both coordinates of the input data by a specified amount.
The introduction of various types of noise to image data during training has been demonstrated to enhance the generalizability and robustness of machine learning models [37]. **Salt-and-Pepper** noise [38] is a form of noise that mimics the effect of corrupted or missing pixels in the input data. This is achieved by randomly setting a specified proportion of pixels to either the minimum or maximum intensity value, creating the appearance of salt and pepper grains on the image. The remaining pixels remain unaltered.
Another noise-based augmentation strategy is **Localvars** noise [37]. This method generates random noise samples from a specified distribution. **Speckle** noise [37] generates random noise samples from a Poisson distribution with a specified mean. In contrast, **Gaussian** noise [37] generates random noise samples from a Gaussian distribution with a specified mean and standard deviation.
In the context of skeleton data, **Bone Shuffling** augmentation is applied. This is achieved by randomly permuting the 3D joint positions of the skeleton, resulting in a new pose sequence in which a different body configuration performs the same actions. Additionally, **Bone Masking** augmentation is implemented by setting the 3D joint positions of the skeleton to zero for a randomly selected subset of bones. A similar technique can also be applied on the frame level rather than the bone level.
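As a sketch of how such skeleton-level augmentations can be injected at random in a RandAugment-like fashion (the function names and magnitudes below are ours, for illustration):

```python
import random
import numpy as np

def flip_sequence(seq):
    # Mirror x-coordinates; a full version would also swap left/right joints
    out = seq.copy()
    out[..., 0] *= -1.0
    return out

def bone_masking(seq, n_joints=3):
    # Zero out the 3-D positions of a random subset of joints
    out = seq.copy()
    idx = np.random.choice(seq.shape[1], n_joints, replace=False)
    out[:, idx, :] = 0.0
    return out

def gaussian_noise(seq, sigma=0.01):
    return seq + np.random.normal(0.0, sigma, seq.shape)

AUGMENTATIONS = [flip_sequence, bone_masking, gaussian_noise]

def rand_augment(seq, n_ops=2):
    # RandAugment-style: apply n_ops randomly chosen augmentations
    for op in random.sample(AUGMENTATIONS, n_ops):
        seq = op(seq)
    return seq
```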
Fig. 3: Various augmentation implementation
Fig. 2: The pipeline of generating the skeleton map image
## V Loss function
In the realm of Deep Learning, a loss function serves as an evaluation metric of a model's ability to predict the desired outcome. The objective of training a model is to find the optimal set of parameters that minimize the loss function, leading to the model's improved accuracy in making predictions on unseen data. The use of classification loss is prevalent in classifying outputs into discrete class labels. The Cross-Entropy loss is a popular choice for action recognition classification problems [39]. Although the Cross-Entropy loss often proves effective in training models, it has certain limitations, including sensitivity to the relative magnitude of predicted and true values, rendering it challenging to use in specific scenarios.
One critical observation when utilizing the Cross-Entropy loss function is that the learned features of samples from different classes may lie close together, leaving only small inter-class margins. As a result, a classifier trained with Cross-Entropy is susceptible to making incorrect predictions. An intuitive alternative to the Cross-Entropy loss is the implementation of metric learning techniques, which involve learning a distance metric from the data. Unlike conventional Machine Learning, where the aim is to learn a mapping function between inputs and outputs, the goal of metric learning is to learn a metric that can be utilized to measure the distance between data points [39]. This distance metric can then be leveraged for making predictions or other tasks, maximizing the inter-class distance and minimizing the intra-class distance. The Additive Angular Margin Loss (AAML) [40] is a widely used loss function in Deep Learning for face recognition. It is designed to learn a discriminative feature representation for face images by minimizing the angular distance between features of the same identity while enforcing an angular margin between features of different identities. The AAML loss function is based on learning a weight vector and a feature vector for each identity. The angle between the weight and feature vectors is minimized for the same identity and kept large for different identities. The loss function encourages the weight and feature vectors to lie on the surface of a hypersphere. The large margin between the vectors of the same and different identities enhances the learned feature representation's discriminative power. The AAML loss function has been demonstrated to outperform other loss functions for face recognition on several benchmark datasets, resulting in a 1.5% improvement in classifier accuracy.
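A compact PyTorch sketch in the spirit of AAML [40]; the scale and margin values below are illustrative defaults, not the settings used in our experiments:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAngularMarginLoss(nn.Module):
    """Margin-based cosine loss: features and class weights are
    L2-normalized onto a hypersphere, and a margin m is added to the
    angle of the true class before softmax cross-entropy."""
    def __init__(self, in_dim, n_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, in_dim))
        self.scale, self.margin = scale, margin

    def forward(self, features, labels):
        # Cosine similarities between normalized features and weights
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(labels, cos.size(1)).bool()
        # Penalize the true class by an additive angular margin
        logits = torch.where(one_hot, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * logits, labels)
```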
## VI Deep-Learning Optimizer and Schedulers
### _Training Optimizers_
The purpose of an optimizer in deep learning is to determine the set of parameters that minimize the loss function, thus yielding the model configuration most representative of the data. This is achieved through iterative refinement of the model parameters, guided by various algorithms and techniques, with the optimizer making progressive adjustments to the parameters until an optimal solution is reached. Commonly used optimizers include Stochastic Gradient Descent, Adam [41], and RMSprop [42], each of which can significantly impact the performance of a deep learning model. Thus, the selection of the appropriate optimizer is a critical consideration.
The MadGrad [43] optimizer is a momentumized, adaptive, dual-averaged variant of stochastic gradient descent that aims to improve the generalization performance of deep learning models. Like Adam [41], it adapts the step size per parameter from accumulated gradient statistics, but it does so within a dual-averaging scheme combined with momentum, which stabilizes the updates over the course of training. Results have demonstrated that MadGrad can outperform Adam on various tasks, including image classification and natural language processing. This study validated its effectiveness, resulting in a 1.1% improvement in accuracy.
### _Learning rate schedulers_
The objective of a learning rate scheduler in deep learning is to modulate the learning rate. This crucial hyperparameter determines the magnitude of updates made to the model's parameters by the optimizer. Ineffective regulation of the learning rate can result in either significant, erratic updates that lead to suboptimal convergence or slow, incremental updates that impede the pace of the training process. A learning rate scheduler resolves these issues by dynamically adjusting the learning rate during training, using various techniques such as fixed scheduling, scheduling based on the training progress, or scheduling based on model performance. This contribution demonstrates the efficacy of the combination of the Cosine Annealing scheduler [44] and the ReducedLR scheduler [45] in stabilizing the learning process.
The ReducedLR scheduler [45] is a commonly utilized approach in deep learning for regulating the learning rate during the training process. This scheduler operates by reducing the learning rate by a predetermined factor when the training loss fails to improve over a specified number of epochs, thereby mitigating the risk of getting stuck in a suboptimal local minimum and enhancing the generalization performance of the model. Implementable as a callback function within deep learning frameworks, the ReducedLR scheduler offers the flexibility to specify the reduction factor and the number of epochs for which the learning rate reduction should be triggered.
The CosineAnnealing scheduler [44] is a learning rate scheduler that adjusts the learning rate following a cosine curve, which commences at a high value and gradually decreases to a lower value as training progresses. This technique helps to improve the convergence of the training process and address the challenge of premature saturation, where the learning rate becomes excessively low and the training slows down.
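One way the optimizer and the two schedulers might be wired together is sketched below. We assume the `madgrad` package provides the MadGrad implementation; also note that PyTorch's cosine schedule is recomputed from the base learning rates each step, so the exact interplay with the plateau-based reduction used here is our assumption rather than a documented recipe.

```python
import torch.nn as nn
from torch.optim.lr_scheduler import CosineAnnealingLR, ReduceLROnPlateau
from madgrad import MADGRAD   # assumed third-party package for MadGrad [43]

model = nn.Linear(128, 60)    # stand-in for the CNN backbone
optimizer = MADGRAD(model.parameters(), lr=1e-3, weight_decay=1e-4)
cosine = CosineAnnealingLR(optimizer, T_max=100)
plateau = ReduceLROnPlateau(optimizer, factor=0.5, patience=5)

for epoch in range(100):
    # ... forward/backward passes over the training set go here ...
    val_loss = 1.0 / (epoch + 1)   # placeholder validation metric
    cosine.step()                  # smooth cosine-shaped decay per epoch
    plateau.step(val_loss)         # extra reduction if validation stalls
```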
## VII Regularization
Regularization is a widely employed technique in machine learning for mitigating overfitting, which refers to the scenario in which a model demonstrates superior performance on the
training data yet subpar performance on unseen data. Overfitting transpires when the model has learned specific patterns in the training data that do not generalize to other data, resulting in poor performance on previously unseen data. Regularization helps to address overfitting by incorporating a penalty term in the loss function during the training process. This promotes the model to learn more generalizable parameters, thereby yielding enhanced performance on new data. There exist several types of regularization methods, such as label smoothing regularization, dropout, batch normalization, and early stopping, which can be utilized in a complementary manner to improve the generalizability of machine learning models.
Label Smoothing Regularization [46] is a popular regularization technique utilized in deep learning to enhance the generalization performance of classifiers. The objective of label smoothing is to reduce the confidence of the classifier's predictions, thereby improving its generalization capabilities on new data. This technique is commonly utilized in image classification tasks: instead of using one-hot encoded training labels with a label vector of [0, 0, 1, 0] to indicate that an image belongs to class 3, label smoothing with a smoothing factor of 0.2 would modify the label vector to [0.05, 0.05, 0.85, 0.05], indicating that the image most likely belongs to class 3 while retaining a small probability for the other classes. This regularizes the classifier and reduces the risk of overfitting the training data. Essentially, the smoothed training labels no longer contain zero values at the non-class indices.
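A one-function sketch of this smoothing rule (the smoothing factor \(\epsilon=0.2\) is illustrative): the true class receives \(1-\epsilon+\epsilon/K\) and every class a floor of \(\epsilon/K\), so the vector sums to one.

```python
import numpy as np

def smooth_labels(labels, n_classes, eps=0.2):
    # Soft targets: eps/K everywhere, 1 - eps + eps/K on the true class
    y = np.full((len(labels), n_classes), eps / n_classes)
    y[np.arange(len(labels)), labels] = 1.0 - eps + eps / n_classes
    return y

print(smooth_labels([2], 4))   # [[0.05, 0.05, 0.85, 0.05]]
```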
Early stopping, as described in [47], is a prevalent regularization strategy utilized in deep learning to mitigate overfitting. The technique entails prematurely halting the training process prior to the model's convergence to its optimal performance. This is achieved by continuously monitoring the model's performance on a validation set and interrupting the training when the performance either ceases to improve or begins to decline. The objective of early stopping is to restrict overfitting, which is a situation where a model performs well on the training data but poorly on unseen data. By curtailing the training before the model attains full convergence, early stopping can enhance its ability to generalize to novel data and improve its overall performance. Early stopping is a straightforward and practical approach that can be effortlessly integrated into most deep-learning frameworks.
Dropout, as a regularization technique in deep learning, aims to enhance the generalization performance of neural networks [2]. The approach involves randomly setting a specified fraction of the activations of the neurons in the network to zero during training. This reduction in the co-adaptation of neurons leads to a network that is less sensitive to the specific weights of individual neurons. As a result, training a network with dropout can lead to learning more robust and diverse feature representations, which are less prone to overfitting the training data. Dropout is commonly applied to fully-connected layers in neural networks but can also be implemented in other layer types, such as convolutional and recurrent layers. It is usually deployed with a relatively high dropout rate (e.g., 0.5) during training and is disabled during inference, with the retained activations rescaled accordingly.
Batch normalization, as discussed in [48], is a technique employed in deep learning to improve the performance and stability of neural networks. The strategy entails normalizing the inputs to each layer in the network, which can expedite the training process and enhance the generalization performance of the network. A batch normalization layer is inserted between the input and output of a neural network. It is designed to normalize the activations of the preceding layer using the mean and variance of the current mini-batch of data. This reduction of internal covariate shift, which refers to the phenomenon of the distribution of inputs to each layer evolving over time during training, can enhance the network's convergence speed and stability and make it more resilient to initialization parameter choices.
## VIII Dataset and Performance Measures
### _Dataset_
The design of a data collection environment for capturing actions is dictated by the target task. For instance, the setup required for a gesture recognition task will differ from that required for a human interaction task. In most cases, the captured action occurs at the beginning of the sequence, obviating the need for action detection prior to recognition. Additionally, many datasets use a global capturing length, which simplifies data preprocessing by eliminating the need to normalize sequences to a consistent length. The presence of non-action-related subjects in the scene is another challenge, commonly addressed by capturing data in a controlled environment to ensure that only sequences relevant to the task are captured.
The NTU RGB+D dataset, proposed by Shahroudy _et al._[49], is a substantial multi-view action dataset with 56,000 samples from 40 individuals and 60 action classes, which are classified into three categories: 40 daily activity classes, nine health-related classes, and 11 interaction classes. The Microsoft Kinect device was used for data collection, providing RGB, depth, skeleton, and infrared modalities. Three fixed camera setups were utilized, with capturing angles ranging from -45 to 45 degrees, and the cameras' distance and height were also varied to increase view variations. Liu _et al._[50] further extended the NTU-60 dataset using the same capturing system and modalities, with 106 subjects performing 120 action classes and 114,500 video samples.
### **Performance Measures**
Standardizing evaluation metrics is crucial for fairly comparing various approaches. The choice of metric depends on the task, and some metrics may be more intuitive than others. The choice of evaluation protocol may also play a role in demonstrating the difficulty and complexity of the scenario. The **Cross-views** evaluation protocol, proposed by Shahroudy _et al._[49] and Liu _et al._[51], assumes that samples from the same view cannot be used for both training and testing, for instance, camera 1 and 3 for training and camera 2 for testing only.
The **cross-subject** evaluation protocol, also proposed by the creators of the NTU-D 60 dataset [49, 51], dictates that subjects selected for training cannot be used for testing: 20 of the 40 subjects are selected for training and the remaining 20 for testing. Lastly, Liu _et al._[51] also proposed the **cross-setup** evaluation protocol, which uses different vertical heights, distances, and backgrounds during capture to include natural variations, while keeping the horizontal three-camera arrangement fixed in terms of capturing angle.
## IX Results
We replicated the results of the End-to-End Two-Stream Network for Action Recognition reported by Zhang _et al._[33] and established it as the baseline for our experiments. In that work, Zhang _et al._[33] built upon their previous contribution in [52] by proposing an End-to-End Two-Stream Network for Action Recognition. The network comprises two streams: one similar to that presented in [52], while the other is a CNN-based stream that includes a view-adaptation subnetwork with a similar architecture. This CNN stream leverages the skeleton representation proposed by Du _et al._[16]. To enhance the robustness of the network, random rotation augmentation was incorporated on-the-fly. Finally, late score fusion was applied with different stream weights, favoring the CNN stream. Our results outperform those of the Two-Stream View Adaptive network, demonstrating the potential of our approach with simple training techniques. We incorporated all of the aforementioned strategies and techniques to achieve the results shown in the tables.
## X Conclusion
The proliferation of sensing devices has increased the availability of diverse datasets in multiple modalities. Among these, the skeleton-based modality holds promise due to its computational efficiency and the wealth of information the human skeleton provides. Accurately estimating human pose from video is a prerequisite for extracting 3D skeleton data from various modalities. In this work, we have shown that Convolutional Neural Networks (CNNs) utilizing diverse training techniques can achieve state-of-the-art (SOTA) results comparable to those obtained using Graph Neural Networks (GNNs) for action recognition tasks. Furthermore, our findings indicate that using various data augmentation techniques can improve the generalization and robustness of the model. This results in improved performance on unseen data and reduced sensitivity to variations or distortions in the input. Additionally, we have demonstrated that using MadGrad as the optimizer and implementing a learning rate scheduler can improve the model's accuracy. Furthermore, we have shown that using a margin-based cosine loss function instead of the traditional cross-entropy loss can enhance the model's performance, leading to more accurate predictions and improved overall results. Regularization techniques can further prevent overfitting, improving the model's performance on unseen data.
2309.08604 | Tuning Pythia for Forward Physics Experiments | Event generators like Pythia play an important role in physics studies at the
Large Hadron Collider (LHC). While they make accurate predictions in the
central region, i.e. at pseudorapidities $\eta<5$, a disagreement between
Pythia and measurements in the forward region, $\eta>7$, has been observed. We
introduce a dedicated forward physics tune for the Pythia event generator to be
used for forward physics studies at the LHC, which uses a more flexible
modelling of beam remnant hadronization and is tuned to available particle
spectra measured by LHCf. Furthermore, we provide an uncertainty estimate on
the new tune in a data-driven way which can be used as a means of flux
uncertainty for future forward physics studies. We demonstrate an application
of our tune by showing the updated neutrino and dark photon spectra at the
FASER experiment. | Max Fieg, Felix Kling, Holger Schulz, Torbjörn Sjöstrand | 2023-09-15T17:58:34Z | http://arxiv.org/abs/2309.08604v1 | # Tuning Pythia for Forward Physics Experiments
###### Abstract
Event generators like Pythia play an important role in physics studies at the Large Hadron Collider (LHC). While they make accurate predictions in the central region, i. e. at pseudorapidities \(\eta<5\), a disagreement between Pythia and measurements in the forward region, \(\eta>7\), has been observed. We introduce a dedicated forward physics tune for the Pythia event generator to be used for forward physics studies at the LHC, which uses a more flexible modelling of beam remnant hadronization and is tuned to available particle spectra measured by LHCf. Furthermore, we provide an uncertainty estimate on the new tune in a data-driven way which can be used as a means of flux uncertainty for future forward physics studies. We demonstrate an application of our tune by showing the updated neutrino and dark photon spectra at the FASER experiment.
## I Introduction
The Large Hadron Collider (LHC) has been instrumental in constraining physics both within and beyond the Standard Model. Its main experiments, ATLAS, CMS, LHCb and ALICE, have discovered and measured properties of the Higgs, constrained dark sectors, probed new physics in the flavor sector, and more generally, have furthered our understanding of fundamental particle physics. These experiments benefit greatly from Monte Carlo event generators, which can make accurate predictions of particle distributions in the central region with pseudorapidities \(\eta\lesssim 5\). Much work has been put into improving, validating and tuning these generators for the experiments at the LHC, and often excellent agreement has been reached.
Recently, there has been new interest in particle production in the forward direction at the LHC, corresponding to \(\eta\gtrsim 7\), where much less data has been collected as compared to the central experiments. The implementation of the FASER experiment has already set leading bounds in certain BSM scenarios [1] and led to the first direct observation of neutrinos produced at a collider [2; 3]. Additionally, the Forward Physics Facility (FPF) has been proposed to house a suite of experiments to further study particles produced in the forward direction during the high-luminosity LHC era [4; 5]. The success of these experiments will be greatly enhanced if similar event generators can be used to make accurate predictions.
However, in the context of the LHC, the popular event generator Pythia[6; 7] has only been tuned in the central region, and thus one should not expect reliable predictions in the forward direction. Indeed, the LHCf experiment, which can measure distributions of neutral particles with \(\eta\gtrsim 9\), shows a distinct disagreement with Pythia's predictions obtained using the popular tune relying on data from central experiments -- the so-called _Monash_ tune [8]. Notably, Pythia predicts an excess of mesons but a deficit of baryons when compared to LHCf data [9; 10; 11; 12].
In this paper we provide a forward physics tune for the Pythia event generator by fitting hadronization parameters to LHCf measurements of neutral pion, photon and neutron production. In particular, we will primarily fit parameters that have little impact on central physics, so as to not spoil the success of Pythia in this region.
In addition to our forward tune, we will also provide an uncertainty estimate on these parameters. Currently, existing generators typically only provide one central prediction but no measure of uncertainty. One approach often used in astroparticle physics is to define an uncertainty based on the spread of event generators' predictions. While this definition captures a spread of underlying physics modelling, it is not data-driven and it is not clear if it has any statistical meaning. Here, for the first time, we follow a different approach and provide the uncertainty on a single generator in a data-driven way.
This paper is organized as follows. In Sec. II, we discuss how hadronization is done in Pythia in the forward direction. In Sec. III we discuss our tuning procedure to the LHCf measurements and provide our tune on these kinematic parameters. In Sec. IV, we show how our tune impacts the predictions for forward neutrino and dark photon production at the FASER experiment. In Sec. V, we summarize and conclude.
## II Modeling of Forward Particle Production in Pythia
There are few theory constraints in the modelling of forward physics. While at least some aspects of
central physics are governed by perturbation theory, such as jet production, the forward region is entirely of nonperturbative origin.
An early assumption was so-called Feynman scaling [13], i. e. that the \(x_{\mathrm{E}}\,\mathrm{d}n/\mathrm{d}x_{\mathrm{F}}\) distribution should be collision-energy-independent. Here \(x_{\mathrm{F}}=2p_{z}/E_{\mathrm{CM}}\) and \(x_{\mathrm{E}}=2E/E_{\mathrm{CM}}\) in the rest frame of the event, and \(n\) is the number of produced particles per event. Perfect Feynman scaling would correspond to a collision-energy-independent central rapidity plateau \(\mathrm{d}n/\mathrm{d}y\), while data instead show this distribution to be rising with energy, suggesting that an increasing fraction of the total energy is taken from the forward region to produce more particles in the central one.
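As a simple illustration of these variables, the following Python sketch (the helper name is ours, not from any released code) computes \(x_{\mathrm{F}}\) and \(x_{\mathrm{E}}\) from a particle's longitudinal momentum and energy.

```python
# Minimal sketch: the Feynman-x variables defined above,
# x_F = 2 p_z / E_CM and x_E = 2 E / E_CM, in the rest frame of the event.

def feynman_x(p_z, energy, e_cm):
    """Return (x_F, x_E) for longitudinal momentum p_z and energy `energy`
    (all in GeV) at centre-of-mass energy e_cm."""
    return 2.0 * p_z / e_cm, 2.0 * energy / e_cm

# Example: a 3 TeV forward neutron at the 13 TeV LHC carries x_F ~ 0.46.
x_f, x_e = feynman_x(p_z=3000.0, energy=3000.1, e_cm=13000.0)
print(f"x_F = {x_f:.2f}, x_E = {x_e:.2f}")
```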
Central particle production in Pythia is generated by multiparton interactions (MPIs). That is, since hadrons are composite objects, several parton-parton subcollisions can occur inside a single \(pp\) event. The high-\(p_{\perp}\) tail of these subcollisions corresponds to regular jet production. The bulk of them occur at a few GeV, however, where they are not distinguished individually but are only visible by their collective effect. The rise of the rapidity plateau is mainly driven by an increasing average number of MPIs.
The beam remnants stem from those partons that are _not_ kicked out of the incoming protons by MPIs. The remnants and the MPIs are related to each other by flavor and color. An MPI can take away both valence and sea quarks from the original \(uud\) valence content of a proton, giving rise to varied remnant topologies, e. g. \(ud\) or \(uud\overline{s}\). Each kicked-out (anti)quark also carries away some (anti)color, and each gluon both a color and an anticolor, that have to be compensated in the remnant so as to preserve the overall color state. In the Lund string model [14], each separated color-anticolor pair gives rise to a linear confinement field, a string, that will fragment into hadrons. This would mean that the momentum of a remnant often had to be shared between many string systems, making it difficult to obtain a leading baryon that carries a significant fraction of the incoming proton energy. Also note that the number of MPIs goes up with increasing collision energy, implying softening baryon spectra.
Indeed, the problem in Pythia is to produce a spectrum with a fair amount of high-momentum baryons, and some corrections have to be introduced to the baseline picture, as will be outlined in this section. We do not consider the class of elastic scattering, which obviously is quite separate and not of interest here. We also leave diffraction aside for now but return to it later.
Early on [15] it was realized that a picture of fully independent MPIs does not reproduce collider phenomenology, e. g. the rise of the average transverse momentum of charged particles with increasing multiplicity. Hence the need for color reconnection (CR), the assumption that nature has a tendency to rearrange colors such that the total string length to be drawn out is reduced. Many possible scenarios have been proposed over the years, and a few of them are implemented in Pythia. We will here study two of them.
In the default CR scenario, it is assumed that the partons pulled out from the colliding protons are strongly correlated in color, in a way that the color of one such parton may be the same as the anticolor of another such. In a picture where only gluons are pulled out, the resulting remnant would then be in a color octet state, which conveniently can be subdivided into a triplet single quark and an antitriplet diquark. If in addition one valence quark is kicked out, only a diquark remains. These are the two most common outcomes, but others are possible and modelled. One is that all three valence quarks are kicked out. Then a single gluon is assigned to carry the remaining energy and momentum. Another is that the removal of sea quarks leaves their antipartners behind. Then the remnant is simplified by splitting off a hadron, e. g. \(uud\overline{s}\to ud+u\overline{s}\to ud+K^{+}\).
The other scenario is the QCDCR one [16]. In it, explicit colors are assigned both to quarks and gluons, and reconnections can occur between identical colors if they reduce the total string length. Such a detailed tracing of color is not done in the default scenario. Another distinguishing feature of QCDCR is so-called junction reconnections. In it, two triplet strings can combine into an antitriplet one, according to the familiar color algebra \(\mathbf{3}\otimes\mathbf{3}=\overline{\mathbf{3}}\oplus\mathbf{6}\). This leads to Y-shaped color topologies that carry non-vanishing baryon numbers. Notably, the QCDCR model correctly predicts an increased fraction of charm baryons in \(pp\) relative to \(e^{+}e^{-}\) events, which the default does not [17; 18].
Zooming in on the remnant region, the QCDCR starting point is again to assign explicit colors to each parton pulled out of the incoming protons, with opposite colors in the remnant. This allows a bigger color charge to accumulate in the remnant than assumed in the default scenario, and this requires additional remnant gluons. In a first instance the remnant is only simplified when e. g. the color of one gluon equals the anticolor of another gluon. But again, high remnant color charges are deemed less likely, so an exponential suppression in the size of the remnant multiplet is introduced, whereby more remnant color lines are forced to cancel.
In the following, we will introduce a new forward physics tune that uses the QCDCR scenario with its suggested parameter values [16] as a starting point.
On top of that, some old or new parameters are varied, with a special eye towards consequences in the forward region. An alternative tune that uses the default CR scenario and the Monash tune [8] as starting point, is presented in Appendix A.
Whenever the remnant consists of more than one parton, the remnant energy and (longitudinal) momentum have to be shared between them. To this end, there are assumed shapes for valence and sea quark momentum fractions \(x\), as well as for gluons. With each \(x\) first picked at random according to these shapes, and then rescaled to unit sum, each parton energy is now assigned a fraction \(x_{\rm rescaled}\) of the full remnant energy. A diquark receives the sum of the constituent quark \(x\) values, but is in addition allowed a further enhancement factor, by default 2. A remnant hadron receives the sum of its constituent momenta. The bottom line is that, in the two most common cases, either a diquark carries the full remnant momentum, or it carries an average of 80% of it.
It is this diquark that primarily can fragment to produce the leading baryon, e. g. the neutron measured by LHCf. In spite of the steps already taken to make the diquark hard, it still turns out that the default fragmentation results in too soft neutrons. We have therefore sought ways to further harden the leading baryon spectrum. This requires modifications to the fragmentation of a leading diquark, relative to the normal string scenario.
To give some background, consider the normal string fragmentation, as probed most directly in \(e^{+}e^{-}\) annihilation events, \(e^{+}e^{-}\to\gamma^{*}/Z^{0}\to q_{0}\overline{q}_{0}\). There the string between the \(q_{0}\) and \(\overline{q}_{0}\) breaks by the production of new \(q_{i}\overline{q}_{i}\) pairs, to give a sequence \(q_{0}\overline{q}_{1}-q_{1}\overline{q}_{2}-q_{2}\overline{q}_{3}-\cdots-q_{n-1}\overline{q}_{0}\) of \(n\) mesons. Here \(q_{0}\overline{q}_{1}\) is called the first-rank hadron of the \(q_{0}\) jet, \(q_{1}\overline{q}_{2}\) the second-rank one, and so on. The simplest extension to baryon production is to allow also antidiquark-diquark breaks, where the color antitriplet diquark takes the role of an antiquark, and vice versa. Thereby the baryon and antibaryon are nearest neighbors in rank, giving rise both to flavor and momentum correlations. Specifically, since two flavor pairs are shared, one could not produce a \(\Xi-\overline{p}\) combination this way. Studies mainly at LEP have shown that baryon-antibaryon pairs are more decorrelated than this picture allows for.
This is where the popcorn mechanism enters. In it, diquarks are not bound objects, but quarks can drift apart along the string, in such a way that a meson can be produced between the baryon and antibaryon, whereby the latter two only share one \(q_{i}\overline{q}_{i}\) pair. Tunes to LEP data suggest that half the time the baryon and antibaryon are nearest neighbors, and half the time they are separated by a meson in between. Translated to the fragmentation of a leading diquark, this means that the production of a baryon and of a meson as the first-rank particle are about equally likely. But we do not have quite as nice a test bed for diquark fragmentation as \(e^{+}e^{-}\) annihilation offers for quark fragmentation, and also have not spent a corresponding effort at tuning, so this assumption is untested. On the contrary, it is plausible that an initial diquark from an incoming proton sticks together better than assumed for new string breaks. Therefore, we introduce a new parameter, \(d_{\rm pop}\) (see Table 1 for the full name in the code) uniquely for diquarks at the end of strings. If zero, then such a diquark will never break up, while if unity such a split is as likely as inside a string. A second-rank baryon takes less average momentum than a first-rank one does, so a reduced admixture of the former gives a harder baryon spectrum.
For an initial parton in a string aligned along the \(z\) axis, the first-rank hadron takes a fraction \(z_{1}\) of the total lightcone momentum \(E+p_{z}\), the second-rank a fraction \(z_{2}\) of what is left after the first, i. e. a fraction \(z_{2}(1-z_{1})\) of the original amount, and so on. In each step we assume the \(z\) value to be picked at random according to the Lund symmetric fragmentation function (LSFF). In its full generality the LSFF allows for one separate parameter for each quark/diquark flavor species, and quark/diquark mass correction factors for the first-rank hadron. In practice this full generality is seldom used, and then the LSFF simplifies to
\[f(z)\propto\frac{1}{z}\,(1-z)^{a}\,\exp\left(-\frac{bm_{\perp}^{2}}{z}\right). \tag{1}\]
Here \(m_{\perp}^{2}=m^{2}+p_{\perp}^{2}\) is the squared transverse mass of the produced hadron, and \(a\) and \(b\) are free parameters to be tuned. A relevant aspect is that hadrons with a larger mass also take a larger average \(z\) value. Nevertheless, it appears that the forward baryon spectrum needs to be harder than is default. For the purposes of this tune we have therefore allowed \(a\) and \(b\) to be set separately when a diquark jet produces a first-rank baryon; hence the new parameters \(a_{\rm remn}\) and \(b_{\rm remn}\) which can be turned on by setting \(f_{\rm remn}=\) on. In a future, with more data and understanding at hand, alternative modifications could be considered.
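To make the roles of \(a\) and \(b\) concrete, the sketch below evaluates and crudely samples the simplified LSFF of Eq. (1). It is our illustration rather than code from Pythia; it uses the forward-tune values \(a_{\rm remn}=0.36\) and \(b_{\rm remn}=1.69\) from Table 1 together with an illustrative \(m_{\perp}^{2}=1\) GeV\(^{2}\).

```python
import math, random

def lsff(z, a, b, mT2):
    """Simplified Lund symmetric fragmentation function of Eq. (1), up to
    normalization: f(z) ~ (1/z) (1-z)^a exp(-b mT^2 / z)."""
    return (1.0 / z) * (1.0 - z) ** a * math.exp(-b * mT2 / z)

def sample_z(a, b, mT2):
    """Crude accept/reject sampling of z in (0, 1)."""
    grid = [i / 1000.0 for i in range(1, 1000)]
    f_max = max(lsff(z, a, b, mT2) for z in grid)
    while True:
        z = random.uniform(1e-3, 1.0 - 1e-3)
        if random.uniform(0.0, f_max) < lsff(z, a, b, mT2):
            return z

# Lowering a and/or raising b shifts the sampled z upwards, i.e. a harder
# first-rank baryon; this is the handle that a_remn and b_remn provide.
z_vals = [sample_z(a=0.36, b=1.69, mT2=1.0) for _ in range(2000)]
print(sum(z_vals) / len(z_vals))
```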
In addition to the flavor and longitudinal structure of particle production, also the transverse fragmentation must be considered. Here the discussion can be split into the partonic setup and the string fragmentation.
In the first stage, each parton taken out of the incoming proton to become part of an MPI is assumed to have some transverse motion, "primordial \(k_{\perp}\)". This is expected to be of the order of the quark
constituent mass, say a third of a GeV. For hard processes, notably \(Z\)-boson production, empirically a higher scale of order 2 GeV is required. This could be owing to an imperfect modelling of the low-\(p_{\perp}\) behavior of initial-state parton showers, but whatever the reason an interpolation is introduced wherein soft systems receive a lower primordial \(k_{\perp}\) and hard systems a higher one. The full expression for the Gaussian width \(\sigma\) is
\[\sigma=\frac{\sigma_{\rm soft}\,Q_{\rm half}\!+\!\sigma_{\rm hard}\,Q}{Q_{\rm half }+Q}\frac{m}{m\!+\!m_{\rm half}\sqrt{\frac{E}{m}}}. \tag{2}\]
Here the \(Q\), \(m\) and \(E\) are the hard scale, mass and energy of the MPI subsystem, while \(\sigma_{\rm soft}\), \(\sigma_{\rm hard}\), \(Q_{\rm half}\) and \(m_{\rm half}\) are free parameters. The second factor is intended to reduce \(\sigma\) for low-mass systems, especially if these are strongly boosted in the forward direction (\(E\gg m\)).
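For reference, Eq. (2) translates into a one-line computation. The sketch below is our helper (not part of Pythia's API), with the forward-tune values of Table 1 as defaults; it shows both the interpolation between the soft and hard widths and the damping for boosted low-mass systems.

```python
def primordial_kt_width(Q, m, E, sigma_soft=0.58, sigma_hard=1.8,
                        Q_half=10.0, m_half=1.0):
    """Gaussian primordial-kT width of Eq. (2); all quantities in GeV.
    Defaults are the forward-tune values from Table 1."""
    interpolation = (sigma_soft * Q_half + sigma_hard * Q) / (Q_half + Q)
    damping = m / (m + m_half * (E / m) ** 0.5)
    return interpolation * damping

# A soft, strongly forward-boosted system (E >> m) receives a reduced width:
print(primordial_kt_width(Q=2.0, m=5.0, E=500.0))  # ~0.26, well below sigma_soft
```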
Also, the left-behind constituents of the beam remnants, mainly quarks and diquarks, are each assigned a primordial \(k_{\perp}\) with a Gaussian width \(\sigma_{\rm remn}\). Taken together, the MPI initiators and the remnant constituents add to give a net \(p_{\perp}\). An opposite recoil is shared evenly by them all, except that the damping factor for low-mass systems in Eq. (2) is used also here, such that transverse momentum overall is conserved.
With the kinematics of partons fully defined, string fragmentation can be applied. Again consider a string piece aligned along the \(z\) axis. Then, in each string break, the new \(q_{i}\) and \(\overline{q}_{i}\) are assumed to receive opposite and compensating \(p_{\perp}\) kicks, which add vectorially to give the total \(p_{\perp}\) of each \(q_{i}\overline{q}_{i+1}\) hadron. Again a Gaussian distribution is used, with width \(\sigma\). The full \(p_{\perp}\) of a hadron is then obtained after the rotation and boost back to the full \(pp\) collision frame, which notably depends on the primordial \(k_{\perp}\) assigned to the remnant in the previous step.
A final note. So far we have considered nondiffractive events. Diffraction in Pythia is based on the Ingelman-Schlein picture [19], wherein a diffractive system can be modelled as a proton-glueball collision, where the glueball "hadron" is viewed as a representation of a pomeron. Notably the proton end of this system, which is in the forward direction, is next-to identical with the one of a nondiffractive system. The glueball end usually is at more central rapidities, and has negligible impact on the forward region. The picture is slightly modified for low-mass diffraction, but is there assumed dominated by the production of a string with one leading diquark. Therefore, the modifications already introduced for nondiffractive events can be reused, without the introduction of any further ones.
In summary, the two main new modifications of the Pythia code are to allow a reduced probability for a remnant diquark to break up, and to allow a harder fragmentation function for it. In addition, some existing parameters are also modified within the tuning effort.
## III Tuning Kinematics
As described in the previous section, the modeling of forward particle production introduces a number of phenomenological parameters. Their role is to parameterize the inability to make first principle predictions in the absence of perturbative methods. For the simulation to have predictive power, it is imperative that these parameters are set to values ("tuned") in such a way that the simulation reproduces a wide range of measured datasets, in this case from LHCf. In this section, we first discuss the datasets, parameters and methodology before presenting the results in the form of a forward physics tune that is based on the QCDCR scenario. The tuning parameters and their values for both the baseline tune and the forward physics tune are shown in Table 1. Results for an alternative tune that is based on the default CR scenario and the Monash tune are presented for comparison in Appendix A.
### Datasets
We exclusively use data measured by the LHCf experiment for tuning purposes in this study, as it is by far the most relevant source of information on forward particle production. LHCf measured neutral hadron and photon fluxes at forward rapidities \(\eta\gtrsim 8.8\) [20]. It is worth noting that forward photon production is dominated by \(\pi^{0}\to\gamma\gamma\) decay. We reasonably assume that the same mechanisms govern hadronization at \(\sqrt{s}=7\) TeV and 13 TeV collision energies. We therefore use LHCf data from both energies. The following list summarizes the LHCf datasets we use to tune our phenomenological parameters:
* neutron energy spectra at 7 TeV [9]
* neutron energy spectra at 13 TeV [10]
* \(\pi^{0}\) energy spectra at 7 TeV [11]
* photon \(p_{z}\) spectra at 13 TeV [12]
The data are publicly available in the form of histograms of cross-sections that are differential in either \(\eta\) or \(p_{\perp}\).
We note that we use a very recently published LHCf measurement on \(\eta\) mesons [21] for validation of our methodology. We further validate our result by confronting the tuned simulation with more central measurements from CMS and TOTEM in Sec. III.5.
### Tuning Parameters
Our mission is to identify and tune the values of phenomenological parameters relevant to forward physics while at the same time keeping the excellent predictive power of Pythia for central physics intact. In this context, working with parameters related to the modeling of the beam remnants (Table 1) is a natural choice. They predominantly influence forward particle production while, as we will show, their influence on central particle production is limited. In the following, we discuss the effects these parameters have on the predictions of forward particle spectra, how the parameters are tuned to data, and finally, we present a robust uncertainty estimate for the most relevant parameters.
Compared to the experimental data, the default Pythia configuration predicts too many hard pions in the LHCf phase-space. Disabling the popcorn mechanism for meson production from beam remnants (i. e. setting \(d_{\mathrm{pop}}=0\)) leads to the desired reduction of hard pions. We note that we studied the effect of varying \(d_{\mathrm{pop}}\) but found only little sensitivity for small \(d_{\mathrm{pop}}>0\) and hence set this parameter to 0. A side-effect of disabling the popcorn mechanism in beam remnants is an increase in the production of hard neutrons, simply because remnant diquarks can no longer hadronize into mesons. This turns out to be fortuitous, as Pythia's default predicts too few hard neutrons in the most forward direction \(\eta>10.76\).
By adjusting other parameters associated with the beam remnant, we can tune the overall normalization of the forward hadronic flux. In particular, we can modify the initial \(k_{\perp}\) of the partons in the incoming protons: partons with a relatively larger \(k_{\perp}\) will generally pull hadrons towards distributions of smaller \(\eta\). The phenomenology of this effect is governed by the width of the primordial \(k_{\perp}\) distribution for the MPI initiators. The corresponding tuning parameters are \(\sigma_{\mathrm{soft}},\sigma_{\mathrm{hard}},\) and \(Q_{\mathrm{half}}\), and for the beam remnant, \(\sigma_{\mathrm{remn}}\). The net effect is a non-zero \(p_{\perp}\) imparted on hadrons, the manifestation of which can be seen in the forward neutron and pion spectrum.
The overall effects of \(\sigma_{\mathrm{soft}},\sigma_{\mathrm{hard}}\) and \(\sigma_{\mathrm{remn}}\) on Pythia's predictions for LHCf measurements are qualitatively similar, while their sensitivities are not (see our discussion in Sec. II). An increase in any of these parameters makes it more likely that forward hadrons inherit larger transverse momenta and therefore populate more central phase-space regions (i.e. bins with smaller \(\eta\) in the LHCf data). We exploit this freedom the model gives us and take a pragmatic approach. To keep \(\sigma_{\mathrm{hard}}\) at its default value of 1.8 GeV, we reduce its sensitivity by increasing the (poorly constrained) \(Q_{\mathrm{half}}\) to 10 GeV. As can be seen in Eq. (2), this makes the \(k_{\perp}\) distribution more dependent on \(\sigma_{\mathrm{soft}}\). To remove the remaining degeneracy between \(\sigma_{\mathrm{soft}}\) and \(\sigma_{\mathrm{remn}}\), which have default values of 0.9 GeV and 0.4 GeV, we define a parameter \(\sigma\) that relates the two: \(\sigma=\sigma_{\mathrm{soft}}=f\,\sigma_{\mathrm{remn}}\), where \(f\) is a number that fixes the ratio. We studied the effect of tuning \(\sigma\) when choosing different values of \(f\) in the vicinity of \(f=1\). Since we found only marginal improvement, we choose to fix \(f\) at a value of \(f=1\) and keep only \(\sigma\) as a tuning parameter.
Two parameters, \(a_{\mathrm{remn}}\) and \(b_{\mathrm{remn}}\), that govern the baryon fragmentation function complete our set of tuning parameters. They allow us to have an almost exclusive handle on the neutron spectrum, without much impact on the pion spectrum. In our setup, lowering (raising) \(a_{\mathrm{remn}}\) while raising (lowering) \(b_{\mathrm{remn}}\) results in slightly harder (softer) forward neutron spectra. Initially, we studied the effect of treating \(a_{\mathrm{remn}}\) and \(b_{\mathrm{remn}}\) as independent tuning parameters. However, we found that equally good quality of Pythia predictions can be achieved by fixing \(a_{\mathrm{remn}}\) to the base tune's value for the LSFF of 0.36 and tuning only \(b_{\mathrm{remn}}\).
\begin{table}
\begin{tabular}{l|l||c|c|c} \hline \hline Full name & Shorthand & Baseline (QCDCR) & Forward Tune & Uncertainty \\ \hline BeamRemnants:dampPopcorn & \(d_{\mathrm{pop}}\) & 1 & 0 & \\ BeamRemnants:hardRemnantBaryon & \(f_{\mathrm{remn}}\) & off & on & \\ BeamRemnants:aRemnantBaryon & \(a_{\mathrm{remn}}\) & - & 0.36 & \\ BeamRemnants:bRemnantBaryon & \(b_{\mathrm{remn}}\) & - & 1.69 & \\ BeamRemnants:primordialKTsoft & \(\sigma_{\mathrm{soft}}\) & 0.9 & 0.58 & \(0.26\ldots 1.27\) \\ BeamRemnants:primordialKThard & \(\sigma_{\mathrm{hard}}\) & 1.8 & 1.8 & \\ BeamRemnants:halfScaleForKT & \(Q_{\mathrm{half}}\) & 1.5 & 10 & \\ BeamRemnants:halfMassForKT & \(m_{\mathrm{half}}\) & 1 & 1 & \\ BeamRemnants:primordialKTremnant & \(\sigma_{\mathrm{remn}}\) & 0.4 & 0.58 & \(0.26\ldots 1.27\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The main Pythia parameters studied in this article, their default parameters in the QCDCR tune (according to the _Mode 2_ configuration in Ref. [16]), and their values in the Forward Physics Tune obtained in this study. The last column shows the uncertainty range for \(\sigma_{\mathrm{soft}}=\sigma_{\mathrm{remn}}\) as discussed in Sec. III.4.
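For convenience, the settings of Table 1 can be applied through Pythia's Python interface as sketched below. This assumes a Pythia build that includes the new beam-remnant parameters introduced in this work (the first four tune settings), and it omits the QCDCR baseline settings of Ref. [16] (the _Mode 2_ configuration), which must be enabled separately.

```python
import pythia8  # assumes the Pythia 8 Python bindings are available

pythia = pythia8.Pythia()
for setting in [
    "Beams:eCM = 13000.",                       # pp at sqrt(s) = 13 TeV
    "SoftQCD:inelastic = on",                   # nondiffractive + diffractive
    # Forward Physics Tune values from Table 1:
    "BeamRemnants:dampPopcorn = 0",             # d_pop
    "BeamRemnants:hardRemnantBaryon = on",      # f_remn
    "BeamRemnants:aRemnantBaryon = 0.36",       # a_remn
    "BeamRemnants:bRemnantBaryon = 1.69",       # b_remn
    "BeamRemnants:primordialKTsoft = 0.58",     # sigma_soft (GeV)
    "BeamRemnants:primordialKThard = 1.8",      # sigma_hard (GeV)
    "BeamRemnants:halfScaleForKT = 10.",        # Q_half (GeV)
    "BeamRemnants:halfMassForKT = 1.",          # m_half (GeV)
    "BeamRemnants:primordialKTremnant = 0.58",  # sigma_remn (GeV)
]:
    pythia.readString(setting)
pythia.init()
```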
### Tuning Methods
The observations detailed in the previous paragraph lead to a reduction of the dimensionality of the tuning problem. We are left with two free parameters, \(\sigma\) and \(b_{\rm{remn}}\), which we will explore in the ranges of \([0-1.5]\) and \([0.5-5]\), respectively. We use Pythia to simulate 7 million \(pp\) collisions at \(\sqrt{s}=7\) TeV and 5 million collisions at 13 TeV for each point we initially explore in the (\(\sigma\), \(b_{\rm{remn}}\)) space. We analyze the simulated events with Rivet, enabling the analysis routines that correspond to the experimental data listed in Sec. III.1. The result of the analyses is a set of histograms obtained from simulation that can immediately be compared with the corresponding experimentally measured histograms. It should be noted that we obtain one such set of histograms for each explored point in the parameter space.
Equipped with experimentally measured histograms and a method to obtain simulated histograms for any point in the parameter space, we could define a goodness-of-fit measure and numerically find a best-fit point that minimizes the measure. However, the computational cost to do so is prohibitively expensive. Instead, we construct an analytic surrogate model of the simulation response to shifts in the parameter space. The model allows us to predict the simulation outcome at any point in the parameter space at a fraction of the cost of computing the actual simulation. Not only is the model cheap to evaluate but, due to its analytic nature, it is also straightforward to compute first- and second-order derivatives. These qualities make it an ideal fit for numerical minimization. We use the Apprentice toolkit for event generator tuning [22] to facilitate the construction of the surrogate, the definition of a goodness-of-fit measure, and the minimization thereof. We explored different options for the surrogates and found no benefit in going beyond quadratic polynomials. As input to the surrogate, we use the full simulation results at 64 uniformly distributed points in the specified range for \(\sigma\) and \(b_{\rm{remn}}\).
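A minimal stand-in for such a surrogate is sketched below: each histogram bin's simulated content is fit as a degree-2 polynomial in \((\sigma, b_{\rm remn})\) over the 64 anchor points, after which predictions (and derivatives) are cheap to evaluate. This is our simplified illustration of the idea, not the actual Apprentice implementation.

```python
import numpy as np

def fit_quadratic_surrogate(points, values):
    """points: (64, 2) array of (sigma, b_remn) anchors; values: (64,) bin
    contents from full simulation. Returns least-squares polynomial coeffs."""
    s, b = points[:, 0], points[:, 1]
    design = np.column_stack([np.ones_like(s), s, b, s * b, s ** 2, b ** 2])
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coeffs

def eval_surrogate(coeffs, sigma, b_remn):
    basis = np.array([1.0, sigma, b_remn, sigma * b_remn,
                      sigma ** 2, b_remn ** 2])
    return float(coeffs @ basis)
```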
The Apprentice toolkit allows one to bias the goodness-of-fit measure using a weighting mechanism for individual histograms and even bins. In general, one might wish to better reproduce either the neutron spectra, photon spectra, pion spectra, or a subset of certain \(\eta\) bins. We, however, wish to be agnostic and place the neutron, photon, and pion spectra measured at LHCf on equal footing. Since the datasets under consideration have quite different numbers of bins, we decided on a democratic weighting such that each of the four analyses is normalized according to the number of data points in that analysis. For a given particle spectrum and collision energy from Sec. III.1, the weighting can be expressed as \(w=(N_{\rm{bins}})^{-1}\) where \(N_{\rm{bins}}\) is the number of data points across \(\eta\) (or \(p_{\perp}\)) bins in that set.
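In code, the democratic weighting amounts to the following sketch (the data structures are ours): each analysis enters the goodness-of-fit measure with weight \(w=1/N_{\rm bins}\).

```python
import numpy as np

def weighted_gof(analyses):
    """analyses: dict mapping an analysis name to (model, data, err) arrays.
    Each analysis contributes with weight 1/N_bins (democratic weighting)."""
    total = 0.0
    for model, data, err in analyses.values():
        total += np.sum((model - data) ** 2 / err ** 2) / len(data)
    return total
```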
Apprentice is then used to minimize the weighted goodness-of-fit measure. The outputs are a best-fit point \(\sigma_{0},b_{\rm{remn},0}\), and predicted spectra at that point, computed from the surrogate model. These spectra are compared against the actual output of the simulation when run with the parameters of the best-fit point in a necessary effort to validate the method. The best-fit values for \(\sigma\) and \(b_{\rm{remn}}\) for our forward physics tune can be found in Table 1.
### Tuning Uncertainty
In addition to the central tuning prediction, we wish to provide a measure of uncertainty on our best fit. An approach to estimate the uncertainty sometimes used in astroparticle physics is taken to be the spread in different event generators' predictions. While this does capture differences in underlying physics modeling, this definition is not data-driven and the error band lacks statistical meaning.
Naively, one might follow the usual method of looking for \(\Delta\chi^{2}=1\) to obtain a 68% confidence interval. However, due to unknown correlations in experimental data, and imperfections in the physics modeling, the goodness-of-fit measure does not follow a \(\chi^{2}\) distribution. If one were to nonetheless follow that approach with our model, the observed \(\chi^{2}_{\rm{min}}\) results in an unusable underestimate of uncertainties.
In light of this, we take a more practical approach. Our goal is to provide a well-defined range for our tuning parameters that can return a spread of particle fluxes for future studies at the FPF. This range can be obtained by varying the parameters in the vicinity of the best-fit point and testing how much the predictions change. The question remains: how much should one vary the tuning parameters to find the corresponding upper and lower bound? A practical parameter uncertainty range is one that covers the typical deviations of Pythia's best-fit prediction from the experimentally measured data, given the data uncertainties.
We find that our fitting parameters, \(\sigma\) and \(b_{\rm{remn}}\), are not strongly correlated and that deviations about the best-fit point are most sensitive to \(\sigma\). We therefore choose to vary and provide an uncertainty on \(\sigma\). To obtain this uncertainty, we define a prediction band specified by two points, \((f\times\sigma_{0},\sigma_{0}/f)\), where \(f\) is a number that is increased until the band contains 68% of points (for \(f=1\) the band obviously contains zero points). Now, even for extremal values of \(\sigma\) in our range, there are a small number of data points which Pythia has difficulty describing;
the central value of these points lies just outside the prediction range specified by \(\sigma\in[0-1.5]\) and are typically found in the highest or lowest bins of the energy spectrum. Since we do not want those points to drive our estimation of uncertainty, we exclude them when counting the fraction of points inside the band specified by \(f\). Across the four analyses there are 20 of these out of 306 total data points.
The method yields two parameter points \(\sigma_{-},\sigma_{+}\) which define a robust uncertainty band containing 68% of points: \(0.26<\sigma<1.27\).
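Algorithmically, the band construction can be summarized as in the following sketch (our pseudo-implementation, with `predict(sigma)` standing in for the surrogate evaluated at the best-fit \(b_{\rm remn,0}\)): starting from \(f=1\), the factor is increased until the envelope of the predictions at \((f\sigma_{0},\,\sigma_{0}/f)\) contains 68% of the non-excluded data points.

```python
def coverage_band(predict, data, sigma0, excluded, f_step=0.01):
    """Grow f until predictions at (f*sigma0, sigma0/f) bracket 68% of points."""
    f = 1.0 + f_step
    while True:
        lo_pred, hi_pred = predict(sigma0 / f), predict(f * sigma0)
        n_in = n_tot = 0
        for i, y in enumerate(data):
            if i in excluded:              # 20 of the 306 points are excluded
                continue
            n_tot += 1
            low, high = sorted((lo_pred[i], hi_pred[i]))
            if low <= y <= high:
                n_in += 1
        if n_in >= 0.68 * n_tot:
            return sigma0 / f, f * sigma0  # the paper obtains (0.26, 1.27)
        f += f_step
```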
### Discussion of Results
Turning to the tuned LHCf particle spectra, we show our results in Fig. 1. Here, we show the baseline QCDCR prediction (dashed), our obtained forward physics tune result (solid), and its error band (shaded band) against LHCf measurements.
The pion and photon spectra show similar behavior, as most of the photons come from pion decay, so we discuss them together. The pion (photon) spectra can be found in the upper (lower) left panel of Fig. 1. For the pion spectra, two \(p_{\perp}\) bins are excluded for display purposes, but this discussion also applies to them. We see that the default configuration predicts too many particles, with a pronounced excess for the most forward bins at high \(p_{z},E\). Our tune greatly reduces this excess at \(E_{\pi^{0},\gamma}\approx 3\) TeV energies, which can in large part be attributed to the removal of the popcorn mechanism on the beam remnant. At smaller momenta, \(p_{z}\sim\) TeV, the default curves do better for the largest \(\eta\) (smallest \(p_{\perp}\)) pion (photon) bins, but this is a small sacrifice compared to the excesses that are reduced in other bins. For most curves, our uncertainty band envelopes most of the data points, with the exception of some curves which are still in tension (e.g. pions with \(0.8<p_{\perp}[\text{GeV/c}]<1.0\)).
The predicted and measured neutron spectra are shown in the upper (lower) right panels of Fig. 1 for LHC centre-of-mass energies of 7 TeV (13 TeV). For the \(\sqrt{s}~{}=~{}13\) TeV neutrons, we show three of six representative \(\eta\) bins. The most clear deficiency of the default Pythia prediction is an underproduction of neutrons with \(\eta>10.76\), resulting in a spectrum that peaks at lower energies relative to the measured peak. As with the pions and photons, by disabling the popcorn mechanism on the beam remnant our tune can address this deficiency at both LHC energies by producing more hard neutrons.
We show the impact of our tune on the forward \(\eta\) meson distribution as measured by LHCf in the upper left panel of Fig. 2. The default Pythia configuration overpredicts the number of \(\eta\) mesons by almost two orders of magnitude for some bins. While we did not tune to this dataset at all, we see that our tune improves on this by producing fewer \(\eta\) mesons.
In the remaining panels of Fig. 2, we show our tune as compared to the rapidity distribution of charged particles at CMS and TOTEM's T2 telescope [23; 24; 25], measurements of the rapidity gap distribution at CMS [23], and the energy spectrum measured by CMS' CASTOR calorimeter at \(-6.6<\eta<-5.2\) [26]. There is also a similar rapidity gap analysis from ATLAS [27] that we checked but do not show, which in addition to the CMS rapidity gap was used to tune the parameters in Pythia associated with the modelling of the elastic, diffractive and total cross sections [28]. Besides LHCf, these measurements are the most sensitive to the beam remnant, with TOTEM and CASTOR covering \(\eta\sim 5\ldots 7\), respectively. If our tune had an impact on central physics, we would expect to see an effect on the predicted spectra at these experiments, with a sub-leading impact on predictions of the rapidity gap at CMS and ATLAS. In all cases we find a negligible difference between our forward physics tune and the default Pythia prediction, while our uncertainty band produces at most a 5% variation (seen in the CMS and TOTEM measurements of the charged particle pseudorapidity distribution).
## IV Application at FASER
In this section we discuss how our tune can be applied at current and future forward physics experiments. As our tune modifies forward hadron production rates, the decay products of these hadrons will also be affected. Forward hadrons may decay into neutrinos and as a result produce a highly collimated intense neutrino beam along the collision axis of the LHC. Similarly, these hadrons might also decay into yet undiscovered light and weakly interacting particles. As the LHC ring curves away, these neutrinos and BSM particles will travel unimpeded down the collision axis. A new class of experiments has recently begun operating to exploit this particle flux.
One of these experiments is FASER [29], which is located along the collision axis, 480 m downstream of the ATLAS IP, and covers \(\eta\gtrsim 9\). Located at the front of the experiment is the FASER\(\nu\) neutrino detector, which is a \(25~{}\text{cm}\times 25~{}\text{cm}\times 1~{}\text{m}\) tungsten emulsion detector [30; 31]. The FASER detector also includes a long-lived particle detector, which searches for the decay products of BSM particles via a series of trackers and a calorimeter. The SND@LHC experiment is also currently taking data, and is located 480 m from the IP on the opposite side of ATLAS from FASER [32]. SND@LHC collects off-axis neutrinos
from the \(pp\) collisions and covers \(7.2<\eta<8.7\).
To fully utilize the HL-LHC era, upgrades to these experiments have been envisioned, as well as the implementation of further forward physics experiments. These proposed experiments would be located in the FPF [4, 5], which is a dedicated cavern for forward physics, located 620 m from the ATLAS IP with space to host a suite of experiments. This includes three detectors aimed at studying neutrinos as well as FASER2 for long-lived particle decays and the FORMOSA experiment for milli-charged particle detection.
In the following, we apply our tune to make predictions for neutrino fluxes and the dark photon search sensitivity at FASER. These predictions can of course also be applied for other experiments at the FPF.
### Neutrinos
The LHC produces an intense flux of high energy neutrinos. This was first realized in the 1980s [33], but no detection was made until recently. The first candidate events were detected using the FASER\(\nu\) pilot detector in 2021 [2], and neutrinos were further observed by the FASER detector in 2023 [3]. These neutrinos are expected to originate from pion, kaon, and charm meson decays.
The first estimate of the neutrino flux was provided in Ref. [34], which takes into account both the prompt flux from charm meson decay occurring at the IP, and the displaced decays of long-lived hadrons. This estimate uses a variety of MC event generators from cosmic-ray physics (Epos-Lhc [35], Sibyll 2.3d [36], QgsJet 2.04 [37], DpmJet 3.2019.1 [38]) as well as Pythia to model
Figure 1: LHCf measurements of pions (upper left), photons (lower left) and neutrons at \(\sqrt{s}=7\) TeV (upper right) and \(\sqrt{s}=13\) TeV (lower right) as compared to our tune and the default Pythia prediction. The solid curve is the central prediction of our forward tune, and the shaded region defines our uncertainty band. The dashed curve is the default Pythia prediction and the black error bars are the measured data points. The text near the curves indicates the \(\eta\) (or \(p_{\perp}\)) of the curve, as well as a multiplicative shift that we use for display purposes.
the hadron production at the LHC. The average and spread of these generators have then been used to define a first rough neutrino flux estimate and its uncertainty.
Using our improved forward physics tune, we make predictions for the event rate at FASER\(\nu\). For this, we use the dedicated fast simulation introduced in Ref. [34] to model the production and decay of long-lived hadrons when passing through the LHC beam pipe and magnetic fields. We have updated the magnetic field configuration to the one used at the beginning of Run 3, and use the same beam crossing angle of 160 \(\mu\)rad downwards. We then convolute the neutrino flux obtained using Pythia with the interaction cross-sections obtained from Genie [39] to calculate the number of expected events in FASER\(\nu\).
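The final convolution step can be sketched as follows; this is our simplification (the actual calculation is done per neutrino flavor and energy bin with Genie cross sections, and the detector numbers below are illustrative): the expected number of interactions is the number of traversing neutrinos times the per-neutrino interaction probability in one meter of tungsten.

```python
import numpy as np

AVOGADRO = 6.022e23   # nucleons per gram (each nucleon weighs ~1/N_A gram)
RHO_W = 19.3          # tungsten density in g/cm^3
LENGTH = 100.0        # FASERnu depth in cm

def expected_events(n_traversing, sigma_cm2):
    """n_traversing[i]: neutrinos crossing FASERnu in energy bin i;
    sigma_cm2[i]: interaction cross section per nucleon (cm^2) from Genie."""
    column_density = RHO_W * LENGTH * AVOGADRO   # nucleons per cm^2
    return float(np.sum(n_traversing * sigma_cm2 * column_density))
```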
Our results are shown in Fig. 3 for an integrated luminosity of 150 fb\({}^{-1}\). The left and right panels show the electron and muon neutrino spectra, respectively. The red line is the central prediction of our forward tune, and the dashed black line is the spectrum obtained with the default configuration of Pythia. The red shaded region is our uncertainty band as determined in Sec. III.4. For comparison we also show the predictions from the Sibyll event generator. In the bottom panel we show the ratios of the curves to our tuned curve; we see that our band translates into roughly a 20% uncertainty in the neutrino interaction rate.
Also indicated in Fig. 3 is the composition of the neutrinos in terms of their parent mesons, shown as dotted, dash-dotted, and dashed curves for pions, kaons, and charm mesons, respectively. Clearly, the majority of electron neutrinos come from kaon decay, with
Figure 2: In the upper left panel we show the \(\eta\) meson distribution as measured by LHCf [21]. Our tune (solid red) improves on this distribution, as compared to the default configuration (dashed red) [16]. In the remaining panels we compare our tune to more central measurements. In particular we show the CMS and TOTEM charged particle pseudorapidity distribution [23; 24; 25] (upper right), the CMS rapidity gap measurement [23] (lower left), and the CMS energy spectrum from \(-6.6<\eta<-5.2\) [26] (lower right). These measurements are expected to be the most sensitive to our tuning parameters, and we see only a small deviation from the default prediction.
a significant charm component at higher energies. Muon neutrinos, on the other hand, are dominantly produced by pion decay at lower energies, and kaon decay at high energies. While Pythia models charm production, we note that there are ongoing efforts to provide refined predictions of the forward charm production mode using perturbative QCD [40; 41; 42; 43], some of which predict significantly enhanced charm production rates. In the regime where light hadron decays dominate the neutrino composition, the flux uncertainty obtained with our tune roughly agrees with that of Ref. [34].
We note that currently, we only include uncertainties associated with the kinematic distribution. There could be additional sources of uncertainties associated with the flavor composition, especially the kaon to pion production fraction. Indeed, observations from astroparticle physics suggest that forward kaon production might differ from the predictions of existing hadronic interaction models. Over more than two decades, cosmic ray experiments have reported significant discrepancies between the number of observed muons in high-energy cosmic ray air showers and model predictions [44; 45; 46]. This observation is commonly referred to as the muon puzzle. Extensive studies have suggested that an enhanced rate of strangeness production in the forward direction could explain the discrepancy [47; 48; 49]. While forward strange measurements could shed light on this discrepancy, no attempt was made to include this in our tune due to the lack of data.
### Dark Photons
The other main purpose of FASER is the search for light long-lived particles with MeV-GeV masses [50; 51; 52]. These are for example motivated by dark matter and, more generally, dark sectors. One of the primary examples discussed in the literature is dark photons. The dark photon is a gauge field of a broken U(1) in the dark sector. Through kinetic mixing with the SM photon, the dark photon, \(A^{\prime}\), can interact with SM fields. This interaction is suppressed by the kinetic mixing parameter \(\epsilon\) with an interaction Lagrangian, \(\mathcal{L}\supset\epsilon/2\ F^{\prime\mu\nu}F_{\mu\nu}\) where \(F\) (\(F^{\prime}\)) is the field strength of the (dark) photon. For massive dark photons with \(2m_{e}<m_{A^{\prime}}<2m_{\mu}\), the dark photon will primarily decay into \(e^{+}e^{-}\). With sufficiently small \(\epsilon\), the dark photon will travel several hundred meters before decaying and could decay inside FASER which has been designed to detect this signal.
Recently, FASER reported the first results for the search for dark photons [1]. In the probed regime, dark photons mainly come from neutral pion decay with small contributions from eta-meson decay and dark bremsstrahlung. The FASER collaboration has estimated the dark photon flux using Epos-Lhc.
Figure 3: Predicted neutrino energy spectrum at FASER\(\nu\) for \(\nu_{e}+\bar{\nu}_{e}\) (left) and \(\nu_{\mu}+\bar{\nu}_{\mu}\) (right). The solid red curve is the spectrum computed using the neutrino flux from our tune and the shaded region is our uncertainty band. The dotted, dash-dotted and dashed red curves show the composition of the neutrino flux in terms of the parent meson. For comparison we show the interaction spectrum predicted by the default Pythia configuration (dashed black) as well as the Sibyll event generator (dotted blue). In the bottom panel of each figure, we show the ratio of the curves to our tune; our uncertainty analysis gives about a \(20\%-30\%\) uncertainty on the interacting neutrino spectrum.
The signal uncertainty was estimated by comparing with Sibyll and QgsJet.
We use our Pythia forward physics tune to model forward particle production and Foresee [53] to then obtain the expected dark photon event rate at FASER. The left panel of Fig. 4 shows the energy spectrum of dark photons decaying in FASER during Run 3 with 150 fb\({}^{-1}\) integrated luminosity for \(m_{A^{\prime}}=25\) MeV and \(\epsilon=3\times 10^{-5}\). This point lies at the edge of the previously excluded region. The red curve is our main prediction, and the shaded band is the error band. The bottom panel shows the ratio of the curves to our central prediction and shows that our uncertainty is roughly 30%. For comparison, we also show the dark photon spectrum from the default Pythia configuration (dashed black) and the predictions from Sibyll, Epos-Lhc, and QgsJet as dotted, dash-dotted, and dashed blue curves, respectively. We can see that the predictions from these other generators are consistent with our prediction. We note that our uncertainty is slightly larger than the uncertainty obtained by comparing generators at low energy and similar at higher energy.
The right panel shows the FASER sensitivity for Run 3 with 150 fb\({}^{-1}\) in the dark photon parameter space spanned by \(\epsilon\) and \(m_{A^{\prime}}\). The gray shaded areas are excluded by existing experiments (from above by prompt resonance searches, from below by long-lived particle searches in beam dumps) as obtained from DarkCast [54, 55]. The constraints shown in light gray are obtained by recasting experimental analyses, while dark gray bounds were reported directly by the experiments. Using our tune, we draw our expected sensitivity contour in red with our uncertainty as the shaded contour, and compare with the sensitivity contour calculated with Epos-Lhc in dashed blue. We find that the sensitivity calculated with each configuration is comparable. We also note that the overall effect of the flux uncertainty on the sensitivity reach is small. This is due to an exponential (\(\epsilon^{4}\)) suppression of the event rate at large (small) \(\epsilon\).
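The scaling quoted above can be made explicit with a toy estimate (ours, with illustrative geometry): the production rate off \(\pi^{0}\) decays scales as \(\epsilon^{2}\), while the lab-frame decay length scales as \(1/\epsilon^{2}\), so the in-detector decay rate is exponentially suppressed at large \(\epsilon\) and behaves as \(\epsilon^{4}\) at small \(\epsilon\).

```python
import math

def decays_in_detector(eps, d_ref, L=48000.0, delta_L=150.0):
    """Relative number of A' decays inside a detector of depth delta_L (cm)
    starting L (cm) downstream of the IP; d_ref is the decay length (cm)
    at eps = 1e-5 for fixed mass and energy. Geometry values are illustrative."""
    d = d_ref * (1e-5 / eps) ** 2   # decay length ~ 1/eps^2
    production = eps ** 2           # pi0 -> gamma A' branching ~ eps^2
    return production * (math.exp(-L / d) - math.exp(-(L + delta_L) / d))
```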
## V Conclusion
In recent years, a new set of experiments has begun their operation in the forward direction of the LHC, with the purpose of observing and studying collider neutrinos as well as searching for light long-lived particles. This emerging forward neutrino and particle search program requires precise predictions of the anticipated particle fluxes. Currently, forward particle production is often simulated using specialized MC event generators developed for cosmic-ray physics,
Figure 4: Dark photon spectrum at FASER for \(m_{A^{\prime}}=25\) MeV and \(\epsilon=3\times 10^{-5}\) (left) and the discovery reach for FASER using the spectrum predicted by our tune (right). In the left panel, we show the spectra predicted by our tune (solid red) as well as the associated uncertainty that we calculate (shaded red). For comparison we show the spectra predicted by the default Pythia configuration (dashed black), Sibyll (dotted blue), Epos-Lhc (dash-dotted blue) and QgsJet (dashed blue). In the bottom section of the left panel, we show the ratio of the curves to our tune and see that our uncertainty imparts a \(\approx 50\%\) uncertainty on the number of dark photon decays in FASER. In the right panel we show FASER’s sensitivity in dark photon parameter space for our tune (solid red), our associated uncertainty (shaded red) and the sensitivity predicted by Epos-Lhc for comparison (dashed blue). While the uncertainty we calculate has a small impact on FASER’s sensitivity, we see that the uncertainty is most important when FASER is limited by exposure (i.e. at small \(\epsilon\) and large \(m_{A^{\prime}}\)).
such as Epos-Lhc, QgsJet and Sibyll. Additionally, multipurpose event generators like Pythia can also be utilized. However, it has been noticed that the corresponding predicted spectra exhibit some discrepancies when compared to the measured flux obtained from the LHCf experiment.
This paper addresses this issue by introducing a new dedicated forward tune for Pythia, specifically designed for forward physics studies at the LHC. This newly proposed tune is based on the QCDCR scenario introduced in Ref. [16]; it offers a more adaptable approach for modeling beam remnant hadronization and is tuned to match the available forward particle spectra measured by LHCf. A comprehensive list of the relevant parameters and their corresponding values can be found in Table 1. We also explored an alternative tune based on the well-established Monash configuration utilizing the default CR scenario. However, we found that this alternative tune exhibits a poorer agreement with LHCf data compared to the QCDCR-based approach, as discussed in Appendix A.
When fine-tuning event generators, the process currently lacks a well-established method for quantifying and incorporating measures of uncertainty. In addition to our fit, we also provide an uncertainty in a data-driven way for the first time. What has sometimes been done is to take the spread in event generators' predictions to define an uncertainty band on the true particle distribution. In this paper, we instead vary the relevant tuning parameter, \(\sigma\), around the best fit such that 68% of the data points are captured. This band can then be used for further applications to study the impact of flux uncertainties.
To demonstrate an application of our tune, we also show its impact on the predicted neutrino spectra and the dark photon search sensitivity. A precise understanding of the neutrino flux that better agrees with forward physics data is important to study TeV neutrino interactions, and an improved understanding of the dark photon flux will increase experiments' search sensitivity. Our tune also provides a means of understanding the flux uncertainty in each case. For both cases, we find that our tune is consistent with the Sibyll, Epos-Lhc and QgsJet generators, and that our uncertainty band is a bit wider than the spread of these generators' predictions.
In conclusion, our forward tune of Pythia enables enhanced precision in the exploration of forward physics phenomena. Our approach presents a data-guided mechanism for honing the neutrino flux and its associated uncertainty. By gaining better control over the uncertainty in the neutrino flux, it opens the gateway to improved investigations, including a refined modeling of neutrino production through hadron decay [56], exploration of sterile neutrino production, a deeper understanding of neutrino interactions within experiments designed to unveil proton structure [57], and potential avenues toward uncovering new signatures of physics.
## Acknowledgment
We thank Aki Ariga, Tomoko Ariga, Eugenio Berti, Andy Buckley, Anatoli Fedynitch, Jonathan Feng, Hiroaki Menjo, Kenneth Osterberg, Stefan Hoeche, Tanguy Pierog, Christophe Royon for many fruitful discussions. We are grateful to the authors and maintainers of many open-source software packages, including scikit-hep[58]. This work utilized the infrastructure for high-performance and high-throughput computing, research data storage and analysis, and scientific software tool integration built, operated, and updated by the Research Cyberinfrastructure Center (RCIC) at the University of California, Irvine (UCI). The RCIC provides cluster-based systems, application software, and scalable storage to directly support the UCI research community. [https://rcic.uci.edu](https://rcic.uci.edu). The work of M.F. was supported by NSF Grant PHY-2210283 and was also supported by NSF Graduate Research Fellowship Award No. DGE-1839285. F.K. acknowledges support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306. T.S. has been supported by the Swedish Research Council, contract number 2016-05996.
## Appendix A: Alternate Monash Tune
Here we discuss and show the results of the alternate tune, which is based on the well-known Monash tune for central physics and which we provide for comparison purposes. We show our fitting results in Table 2 and our fitted spectra against LHCf data in Fig. 5. While we find tuning parameters for this Monash-based tune comparable to those of our main tune, the QCDCR configuration from Ref. [16] proves to be an important ingredient for our tuning purposes.
While the Monash-based tune shares some of the advantages of our primary tune, there are some clear deficiencies. In particular, the photon spectra show a significant underproduction of forward photons with \(E\lesssim 3\) TeV; a similar effect can be seen for relatively soft pions. A further deficiency appears in the \(\eta>10.76\) neutron spectra, particularly at \(\sqrt{s}=7\) TeV: the Monash-based tune does not reproduce the shape of the neutron spectra as well as our primary tune does.
\begin{table}
\begin{tabular}{l|c||c|c|c} \hline \hline Full name & Shorthand & Baseline (Monash) & Forward Tune & Uncertainty \\ \hline BeamRemnants:dampPopcorn & \(d_{\rm pop}\) & 1 & 0 & \\ BeamRemnants:hardRemnantBaryon & \(f_{\rm remn}\) & off & on & \\ BeamRemnants:aRemnantBaryon & \(a_{\rm remn}\) & - & 0.68 & \\ BeamRemnants:bRemnantBaryon & \(b_{\rm remn}\) & - & 1.22 & \\ BeamRemnants:primordialKTsoft & \(\sigma_{\rm soft}\) & 0.9 & 0.56 & \(0.22\ldots 1.42\) \\ BeamRemnants:primordialKThard & \(\sigma_{\rm hard}\) & 1.8 & 1.8 & \\ BeamRemnants:halfScaleForKT & \(Q_{\rm half}\) & 1.5 & 10 & \\ BeamRemnants:halfMassForKT & \(m_{\rm half}\) & 1 & 1 & \\ BeamRemnants:primordialKTremnant & \(\sigma_{\rm remn}\) & 0.4 & 0.56 & \(0.22\ldots 1.42\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The main Pythia parameters studied in this article, their default parameters in the Monash tune, and their values in the Monash-based tune obtained in this study. The last column shows the uncertainty range for \(\sigma_{\rm soft}=\sigma_{\rm remn}\) as discussed in Sec. III.4.
Figure 5: LHCf spectra compared to the alternate tune based on the Monash tune. LHCf measurements of pions (upper left), photons (lower left) and neutrons at \(\sqrt{s}=7\) TeV (upper right) and \(\sqrt{s}=13\) TeV (lower right) are shown against our alternate, Monash-based tune and the default Pythia prediction. The solid curve is the central prediction of our Monash-based tune, and the shaded region defines our uncertainty band. The dashed curve is the default Pythia prediction and the black error bars are the measured data points. The text near the curves indicates the \(\eta\) (or \(p_{\perp}\)) of the curve, as well as a multiplicative shift that we use for display purposes.
2309.13542 | Integrated Sensing and Communications for IoT: Synergies with Key 6G
Technology Enablers | The Internet of Things (IoT) and wireless generations have been evolving
simultaneously for the past few decades. Built upon wireless communication and
sensing technologies, IoT networks are usually evaluated based on metrics that
measure the device ability to sense information and effectively share it with
the network, which makes Integrated Sensing and Communication (ISAC) a pivotal
candidate for the sixth-generation (6G) IoT standards. This paper reveals
several innovative aspects of ISAC from an IoT perspective in 6G, empowering
various modern IoT use cases and key technology enablers. Moreover, we address
the challenges and future potential of ISAC-enabled IoT, including synergies
with Reconfigurable Intelligent Surfaces (RIS), Artificial Intelligence (AI),
and key updates of ISAC-IoT in 6G standardization. Furthermore, several
evolutionary concepts are introduced to open future research in 6G ISAC-IoT,
including the interplay with Non-Terrestrial Networks (NTN) and Orthogonal
Time-Frequency Space (OTFS) modulation. | Aryan Kaushik, Rohit Singh, Ming Li, Honghao Luo, Shalanika Dayarathna, Rajitha Senanayake, Xueli An, Richard A. Stirling-Gallacher, Wonjae Shin, Marco Di Renzo | 2023-09-24T03:59:08Z | http://arxiv.org/abs/2309.13542v1 | # Integrated Sensing and Communications for IoT: Synergies with Key 6G Technology Enablers
###### Abstract
The Internet of Things (IoT) and wireless generations have been evolving simultaneously for the past few decades. Built upon wireless communication and sensing technologies, IoT networks are usually evaluated based on metrics that measure the device's ability to sense information and effectively share it with the network, which makes Integrated Sensing and Communication (ISAC) a pivotal candidate for the sixth-generation (6G) IoT standards. This paper reveals several innovative aspects of ISAC from an IoT perspective in 6G, empowering various modern IoT use cases and key technology enablers. Moreover, we address the challenges and future potential of ISAC-enabled IoT, including synergies with Reconfigurable Intelligent Surfaces (RIS), Artificial Intelligence (AI), and key updates of ISAC-IoT in 6G standardization. Furthermore, several evolutionary concepts are introduced to open future research in 6G ISAC-IoT, including the interplay with Non-Terrestrial Networks (NTN) and Orthogonal Time-Frequency Space (OTFS) modulation.
Integrated Sensing and Communication (ISAC), 6G Standardization, ISAC-IoT, RIS, NTN, OTFS Modulation.
## I Introduction
The Internet of Things (IoT) has revolutionized the way we interact with our environment and technology. IoT devices rely on advanced communication protocols and networks to share the acquired data in real time. Leveraging the potential of advanced sensing and communication abilities, significant efforts are being made to revolutionize the IoT experience further. For instance, the radio communication division of the International Telecommunication Union (ITU-R) has already drafted new recommendations for International Mobile Telecommunications 2030 (IMT-2030, i.e., 6G) [1]. Integrated Sensing and Communication (ISAC), one of the key 6G usage scenarios, is of particular interest since it enables one to sense and better understand the physical world. ISAC embodies the seamless fusion of two critical IoT components, sensing capabilities and communication infrastructure, either over a single integrated platform or cooperating as individual entities over a single network, and hence complements various modern 6G IoT use cases.
An illustration of the components of ISAC-enabled IoT is given in Fig. 1, which covers almost a complete \(360^{\circ}\) IoT framework, including enhanced positioning, seamless connectivity, smooth monitoring, and several other possibilities. In particular, ISAC offers numerous technological advantages, including [2]: _a) Real-time Data Insights:_ By combining sensing and communication operations, ISAC empowers IoT devices to instantaneously transmit and receive data, making it more convenient for timely decision-making, _b) Predictive Analytics:_ Joint utilization of emerging processing tools and real-time data analysis enables IoT applications to predict trends, anomalies, and potential issues, _c) Personalized Experiences:_ IoT devices with ISAC can tailor experiences as per the individual user's requirements and location.
The coexistence of ISAC, whether on a single platform or as individual sensing and communication units cooperating with each other, can provide most of the required IoT features envisioned in the next generation of wireless services. For instance,
Fig. 1: ISAC enabled 6G IoT framework.
enabling real-time data insights makes it more convenient to support critical IoT applications such as smart cities. All such features pave the way for innovative applications across industries, transforming the way we interact with technology. Given the emergence of 6G-IoT and the novel features of ISAC, this work highlights the benefits of ISAC implementations in 6G IoT, including its key enabling techniques, use cases, challenges, and future prospects.
## II ISAC-enabled IoT: Benefits and Integration
### _Transformative Benefits of ISAC_
_Better Resource Management:_ ISAC enables communication and sensing functionalities on a common platform, providing better spectrum utilization to support ultra-low latency and high bandwidth connections. This is indeed crucial for many recent IoT applications, e.g., industrial automation, autonomous vehicles, and augmented reality.
_Massive Connectivity:_ 6G and Beyond (6GB) networks aim to connect a massive number of IoT devices, where ISAC can play a key role by accommodating diverse deployment needs, ranging from smart cities to healthcare, agriculture, smart transport, environmental monitoring, and public safety applications.
_Contextual Awareness and Cost Efficiency:_ ISAC allows IoT devices to gather and process rich contextual information, especially gathered from their surroundings. An optimal utilization of these functionalities provides several benefits, including better decision-making processes and intelligent responses, as well as, for example, a reduction in frequent high-power communication, which extends the battery life of devices.
_Distributed Intelligence and Edge-Cloud Computing:_ The coexistence enabled by ISAC allows IoT devices to collaborate and share device-to-device information directly with each other. This enables the networks to make decentralized decisions at the device end itself.
_Security and Privacy:_ ISAC can also help in enhanced security through localized threat detection and anomaly recognition. In addition, the integration of communication and sensing can provide enhanced user privacy, e.g., minimizing the transmission of sensitive information.
### _ISAC Support for Critical IoT Applications_
6G is intended to support critical IoT applications mainly characterized by the degree to which they fulfill various metrics, including reliability, resilience, sustainability, responsiveness, accuracy, etc. In the following, we summarize how the joint utilization of communication and sensing fulfills the key aspects of critical IoT applications.
_ISAC As DFRC:_ Under the umbrella of ISAC, Dual Function Radar Communication (DFRC) also proposes sensing and communication functionalities on a single platform [3], where the sensing function involves radar probing and target mapping. DFRC can achieve simultaneous data transmission and localization of a target source in terms of its signal's estimated Angle of Arrival (AoA). As depicted in Fig. 2, DFRC is helpful in millimeter Wave (mmWave) systems, especially in high mobility scenarios, where it can provide seamless beam estimation and prediction to support critical applications such as those in vehicular networks.
_ISAC Meets Intelligent Metasurfaces:_ Another parallel research thrust focuses on the fusion of Reconfigurable Intelligent Surfaces (RIS) with ISAC to support a seamless integration [4]. RIS is known for its ability to control the transmission environment via tuning multi-path phases, which makes it a convenient tool in supporting ISAC applications. RIS-added benefits to ISAC are discussed in more detail in subsequent sections.
## III Intelligent Metasurfaces and AI for ISAC-IoT
### _RIS-aided ISAC-IoT_
Owing to the superior ability to construct an intelligent and controllable electromagnetic environment by manipulating the electromagnetic properties of the metasurface, RIS has been envisioned as a promising technique to revolutionize ISAC-IoT networks. As depicted in Fig. 3, RIS can be deployed in ISAC-IoT networks to enhance the connectivity and coverage for various practical applications, e.g., intelligent transportation, smart cities, and beamforming [5]. The fusion of ISAC-IoT and RIS is more than the combination of two functionalities. ISAC-IoT grants devices not only enhanced communication
Fig. 3: RIS deployments in the ISAC-IoT network.
Fig. 2: DFRC-based ISAC for predictive beamforming.
performance but also active participation in the realm of environmental sensing.
The deployment of RIS introduces a layer of intelligence to ISAC-IoT networks, endowing them with the capacity for real-time signal enhancement, energy-efficient and reliable transmission and sensing, connectivity and coverage expansion, and effective interference management. In addition, with the ability to manipulate electromagnetic waves, intelligent metasurfaces can also act as wireless sensors, capturing environmental data such as temperature, humidity, and even the presence of objects, providing IoT devices with a comprehensive understanding of their surroundings. Therefore, the deployment of RIS offers much more than incremental performance improvements; it also plays a vital role as an active contributor to ISAC-IoT networks.
### _Artificial Intelligence for RIS-aided ISAC-IoT_
As discussed above, the joint utilization of RIS with ISAC can further extend various new capabilities within the realm of IoT. RISs are designed to go beyond passive functionality by incorporating sensors, communication modules, and Artificial Intelligence (AI)-driven processing to enable a wide range of advanced IoT applications. Indeed, RIS can be further fused with AI-driven algorithms, making it convenient to support modern IoT applications. In the following, we cover some of the key aspects of AI-driven RIS-aided ISAC-IoT:
_AI Assisted RIS Training for Massive IoT:_ RIS provides signal enhancement at the receiver but can require high-complexity channel training. Effective utilization of AI models, especially those based on modern Machine Learning (ML) algorithms such as deep neural networks, can capture the relationships between transmitted signals, reflections, and received signals. The model can be trained using the data collected from various IoT nodes to predict channel characteristics, such as path loss, phase shifts, and delays, for different scenarios; a toy sketch of this idea follows this list.
_ML Based Pre-coding and Optimization:_ Similar to channel training, RIS transmission precoding and optimization involve extensive computational complexity due to numerous multi-paths and channel coefficients, thereby scaling drastically with the advent of massive IoT. Nevertheless, ML-based solutions can also be applied to learn complex relationships and patterns through various RIS elements, enabling more effective precoding strategies and optimization.
_AI Added Data Security:_ The evolution of massive IoT is accompanied by another issue in the form of data security, especially in critical IoT use cases requiring sensitive data sharing. Research efforts are being conducted to combat such issues. For instance, AI-augmented data security built on RIS involves utilizing innovative AI techniques for the manipulation of electromagnetic signals for better security.
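As a purely illustrative sketch of the AI-assisted training idea above, the toy example below fits a small neural network to hypothetical logs of RIS phase configurations and measured received power; the synthetic data generator, tensor shapes, and hyperparameters are our own assumptions rather than any published pipeline.

```python
# Toy sketch: learn a mapping from RIS phase configurations to received power,
# standing in for AI-assisted channel training. Everything here is hypothetical.
import torch
import torch.nn as nn

N_ELEMENTS = 64                                         # hypothetical RIS size
phases = torch.rand(5000, N_ELEMENTS) * 2 * torch.pi    # logged phase configurations
w = torch.randn(N_ELEMENTS)                             # hidden "channel" of the toy world
power = torch.cos(phases - w).sum(dim=1, keepdim=True)  # synthetic measured power

model = nn.Sequential(
    nn.Linear(N_ELEMENTS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),                                  # predicted received power
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                    # short full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(phases), power)
    loss.backward()
    opt.step()
```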
### _Demonstrating RIS-aided ISAC-IoT: Results and Future Prospects_
In order to better reveal the potential and advantage of deploying RIS in ISAC-IoT systems, a case study is provided in this subsection. We consider an RIS-assisted ISAC-IoT system, where a colocated multi-antenna Base Station (BS) or Access Point (AP) simultaneously performs multi-user communications and target detection with the assistance of an RIS. In particular, the BS/AP simultaneously transmits data to two single-antenna users and detects one target in the presence of a clutter source. Utilizing the algorithm in [6], we aim to maximize the radar Signal-to-Interference-plus-Noise Ratio (SINR) and satisfy the communication Quality-of-Service (QoS) and constant-modulus transmit power constraints.
We first present the resulting beampattern in Fig. 4. We display the signal power at each location in different colors according to the colorbar shown on the right-hand side in Fig. 4. In the considered system, the channel condition of the direct links is worse than that of reflected links. Therefore, the beams generated by the BS are mainly transmitted toward the RIS, whose passive beams further assist the BS to serve users and detect the target.
We plot the radar signal-to-noise ratio (SNR) gain of different paths versus the number of RIS elements \(N\) in Fig. 5. The scheme without an RIS ("w/o RIS") is also included for
Fig. 4: Beampattern of RIS-assisted ISAC-IoT system (BS: diamond, RIS: square, target: star, clutter: triangle, user: circle).
Fig. 5: Radar SNR gain versus the number of RIS reflection elements.
comparison. It can be easily observed that an RIS with more reflection elements offers higher radar SNR as it provides more spatial Degrees-of-Freedom (DoF). Besides, as the number of RIS elements increases, the BS will generate a stronger beam toward the RIS and a weaker beam toward the target to achieve better sensing performance because the virtual Line-of-Sight (LoS) link established by the RIS is enhanced. In particular, the role of the LoS link between the BS and the target dominates for relatively small \(N\), while the virtual LoS link created by the RIS is important for large \(N\).
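To make this scaling concrete, the toy link-budget calculation below (our illustration, not the simulation model of [6]) shows why the virtual LoS link through the RIS overtakes a weak direct link as \(N\) grows: with perfect per-element phase alignment, the \(N\) reflected amplitudes add coherently, so the RIS-path power scales as \(N^{2}\) while the direct path stays fixed. All gain values are hypothetical.

```python
# Toy illustration of the direct-link vs. RIS-path crossover as N grows.
import numpy as np

g_direct = 1e-8    # direct BS->target power gain (weak/blocked link), hypothetical
g_bs_ris = 1e-5    # per-element BS->RIS power gain, hypothetical
g_ris_tgt = 1e-6   # per-element RIS->target power gain, hypothetical

for N in [16, 64, 256, 1024]:
    # coherent combining: amplitude ~ N * sqrt(g_bs_ris * g_ris_tgt) => power ~ N^2
    g_ris_path = (N ** 2) * g_bs_ris * g_ris_tgt
    dominant = "RIS virtual LoS" if g_ris_path > g_direct else "direct LoS"
    print(f"N={N:5d}: RIS-path gain={g_ris_path:.2e}, dominant link: {dominant}")
```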
In order to stimulate the deployment of RIS in ISAC-IoT systems and capture the opportunities they offer, we shed light on various future directions, such as:
_Multiple RISs:_ Since multiple RISs can offer additional DoFs, their geographical distribution presents opportunities for optimizing signal transmission and mitigating interference. By strategically deploying multiple RISs, we can achieve high-quality communication QoS and accurate sensing performance in hotspots owing to the improved beamforming gains. Moreover, multiple views of a target provided by multiple RISs increase the spatial diversity of radar echo signals and thus improve the target detection and parameter estimation performance.
_Target/Device-Mounted RIS:_ Deploying RIS on the surface of the target or IoT device can effectively assist the cooperative sensing and tracking. This approach becomes particularly valuable when dealing with a target or IoT device that needs to be known and tracked by the ISAC-IoT system. In order to achieve better beam direction or target tracking performance, RIS can be deployed on the surface of a friendly target to enhance the back-scattered signal reflected to the radar receiver, which is especially useful when the target area is small. To achieve this goal in a complicated and dynamic electromagnetic environment, key challenges include mathematical modeling of the system, optimization of RIS reflection coefficients and design of RIS control signaling, etc.
### _Holographic MIMO-Aided ISAC-IoT_
Holographic MIMO (HMIMO) is another groundbreaking technology that represents a software-defined transceiver capable of producing three-dimensional (3D) beams using holographic principles and is considered a promising technique for the upcoming wireless generation. Specifically, HMIMO-aided ISAC refers to a framework that jointly utilizes holographic MIMO systems for enhanced sensing and communication capabilities [7]. Since the IoT ecosystem is built on connectivity and data exchange, the incorporation of HMIMO further adds a layer of dynamic adaptability. Devices equipped with HMIMO can dynamically adjust their beams to ensure seamless communication even in challenging scenarios, such as dense urban areas.
Bringing together holographic MIMO and ISAC empowers communication capabilities to support various modern applications, including smart city, virtual/augmented reality, industrial automation, etc. Besides, beyond communication, HMIMO-assisted IoT is also intended to revolutionize sensing abilities. Via this integration, the same beams employed for communication can be repurposed for precise sensing, opening the doors for seamless IoT deployment for various modern applications, including environment monitoring, traffic flow patterns, etc.
## IV ISAC-IoT Standardization and Use Cases
### _ISAC-IoT: 3GPP Standardization_
In terms of the current standardization within the 3rd Generation Partnership Project (3GPP), the Service and System Aspects working group 1 (SA1) for the Fifth Generation (5G) New Radio (NR) has conducted a Study Item on use cases and requirements for future enhancement of 5G systems [8] to support ISAC sensing services for various applications. This 3GPP SA1 study has identified use cases with requirements for Vehicular-to-X (V2X), Unmanned Aerial Vehicles (UAVs), 3D map reconstruction, smart city, smart home, factories, healthcare, and applications for the maritime sector [9]. Within the indoor factory setting and of particular relevance to ISAC for Industrial IoT (IIoT), the following use cases are considered: detection of Automatic Guided Vehicles (AGV) and measurement of proximity to humans, collision avoidance for Autonomous Mobile Robots (AMR) and AGVs, the combined use of sensing and localization for accurate positioning of AGVs and AMRs, gesture detection and recognition, etc.
This 3GPP Study Item paves the way for future specifications of the Service and System Aspects (SA) and the Radio Access Network (RAN) required to support sensing services in subsequent releases (e.g., Release 19 and beyond) within the constraints of 5G systems. It is, however, expected that 6G will be designed from the outset to natively support ISAC in a more optimized way. Therefore, the expectation is that 6G will support ISAC-IoT in further use cases with more demanding Key Performance Indicators (KPIs).
### _ISAC-IoT Use Cases: Potential for 6G and Beyond_
_High-Accuracy Localization and Tracking:_ In a 6G ISAC-IoT system, the large bandwidth beyond mmWave bands and ultra-massive MIMO technologies can provide superior resolution and excellent multi-path resolving capabilities for high-accuracy localization applications in both device-based (e.g., a UE connected to a 6G network) and device-free (environmental objects) scenarios. In addition, the dense deployment of massive antennas will also enable high-precision direction estimation. With such sensing capabilities, collaborative robots in future smart factories can work safely with humans and also with each other. Some examples of collaboration between moving robots include a drone landing on a moving carrier vehicle to get charged, or a delivery robot filling a smart container (bin/tank) when it is detected as empty. In such proximity use cases, centimeter-level localization accuracy is required to perform the tasks.
_Simultaneous Imaging, Mapping, and Localization:_ 6G ISAC-IoT based sensing capabilities in simultaneous imaging, mapping, and localization enable mutual performance improvements for these functions, which opens up a realm of possibilities in 3D indoor/outdoor non-line-of-sight imaging
and mapping. For instance, the sensors on a single mobile vehicle usually have a restricted view and limited coverage due to the weather, obstacles, and the sensors' power control. That said, nearby moving vehicles and stationary base stations can jointly provide a greater field of view, longer sensing distance, and higher resolution with the help of an ISAC system.
Thus, the vehicles can use the reconstructed map processed by the base stations to determine their next move to achieve higher levels of autonomy. Furthermore, the sensing resolution and accuracy significantly improve due to the fusion of imaging results that are shared globally through the network with cloud-based services. Densely distributed base stations in an urban area, together with ISAC, make environmental reconstruction and 3D localization possible, which in turn can be used to form a virtual urban city. Similar use cases could be applied to indoor factories with many autonomously moving AGVs or robots.
_Augmented Human Sense:_ With the use of much higher frequency bands, an ISAC-IoT system can be implemented in portable devices to augment human senses and enable people to _see_ beyond the limits of human eyes. Such capabilities can be incorporated into portable terminals, e.g., sensing devices such as 6G-enabled mobile phones, wearables, or medical equipment implanted beneath the human skin. This will open the door for numerous applications such as remote surgery, detection of product defects, sink leakage detection, etc., where very high range and cross-range resolutions are required.
_Smart Healthcare:_ Device-free gesture and activity recognition based on the joint capability of sensing and ML is another promising aspect of 6G ISAC-IoT applications to promote contactless user interfaces and camera-free supervision where privacy can be protected. With high classification accuracy, many functionalities, such as gesture recognition, emotion recognition, heartbeat detection, fall detection, respiration detection, sneeze sensing, intrusion detection, etc., can be implemented in a smart hospital in the foreseeable future. As a novel usage scenario, a medical rehabilitation system in a smart hospital enables automatic supervision of patients during their physiotherapy exercises. There will be automatic prompt alerts for incorrect movements or gestures, thus significantly improving the patients' rehabilitation.
### _ISAC IoT Over Non-Terrestrial Networks_
As the number of IoT sensors grows exponentially, achieving massive connectivity based on traditional terrestrial infrastructure becomes challenging. Moreover, IoT sensors in remote areas such as mountains, forests, oceans, deserts, and rural regions are difficult to serve through existing Terrestrial Networks (TN). To overcome these issues, low Earth orbit (LEO) satellite mega constellation networks with low latency and high throughput can be promising solutions. Moreover, the integration of UAVs and high altitude platform stations (HAPS), which are closer to the ground-based IoT sensors, enable the creation of a more flexible Non-Terrestrial Network (NTN) [10].
_ISAC-IoT Meets NTN:_ The interplay of NTN with ISAC-IoT provides an opportunity that enables NTN platforms to serve both communication and sensing functions. As shown in Fig. 6, this integration enables NTN platforms not only to seamlessly communicate with a large number of IoT sensors in remote areas but also to simultaneously perform remote sensing functions such as synthetic aperture radar (SAR) [11]. In order to achieve both massive connectivity and satisfactory sensing performance by leveraging the wide and flexible coverage of NTN, efficient resource allocation and multiple access techniques are vital. As depicted in Fig. 6, satisfying these demands through orthogonal resource allocation methods like time-division and frequency-division ISAC could be challenging due to the inefficient use of the wireless resource. In this regard, overlapped resource allocation methods based on Non-Orthogonal Multiple Access (NOMA) and Rate-Splitting Multiple Access (RSMA) are promising candidates. In this case, joint waveform design and interference control techniques become inevitable to simultaneously meet both communication and sensing performance requirements at a satisfactory level.
_UAV-Mounted RIS for ISAC-IoT:_ Combining the high mobility and flexibility of UAVs with the low cost, light weight, and low power consumption of RIS creates an excellent means for providing reliable coverage in ISAC-IoT systems. Lightweight RIS can be mounted on the UAV to create virtual LoS links to cover shadowed areas. Nevertheless, it is crucial to acknowledge that signals reflected from the UAV may interfere with target detection, and Doppler frequency components generated by the moving UAV may introduce ambiguities in velocity measurements. Therefore, the joint design of RIS reflection coefficients associated with transmit beamforming, received signal processing, and the UAV trajectory becomes an important tool for achieving satisfactory communication and sensing performance.
Fig. 6: NTN and its interplay with ISAC-IoT.
### _ISAC-IoT: Facilitating OTFS Modulation_
Communication in high-frequency bands provides the most efficient method to improve the capacity of IoT networks. As a result, mmWave and terahertz bands are becoming more popular in future IoT devices. This makes future IoT devices more susceptible to the impact of non-ideal power amplifiers and phase noise in high-frequency bands. Similarly, high-mobility IoT applications suffer from high Doppler due to complex time- and frequency-variant channels. Therefore, existing solutions fail in the context of future IoT devices that utilize high-frequency bands and operate in high-mobility networks.
Orthogonal Time Frequency Space (OTFS) modulation is a novel approach that provides an attractive solution for this problem due to its resilience to time- and frequency-varying channels, low Peak-to-Average-Power Ratio (PAPR), robustness to phase noise, and higher spectral efficiency [12]. OTFS modulation-based solutions have been proposed for several IoT application areas, such as future vehicular networks, underwater acoustic communication, and NTN. For example, integrated grant-free non-orthogonal multiple access with OTFS is proposed to mitigate the severe Doppler effect and round trip delay experienced by LEO satellite-based non-terrestrial IoT networks. Given the advantages of using ISAC in IoT networks, it is reasonable to consider OTFS as a potential waveform for IoT-ISAC networks. Existing research on OTFS-ISAC enabled IoT focuses on the following topics:
_High-Mobility Vehicular IoT Networks:_ OTFS-ISAC-enabled vehicular networks allow the Road Side Unit (RSU) to obtain the necessary information to generate a dynamic vehicle network. Considering the precision of delay and Doppler shift estimation in OTFS, OTFS-ISAC signals are utilized in [13] to predict the motion state of moving vehicles combined with wide beams to estimate the position. This approach addresses the issue of vehicles with high speed not being covered by narrow beams in high-mobility vehicular IoT networks.
_RIS-aided OTFS for IoT:_ RIS-aided OTFS is capable of facilitating future IoT networks due to the combined benefits of flexible channel configurations and robustness in high-mobility communication. Given the real-time communication and sensing needed in future IoT networks, a new transmission scheme is proposed in [14] that uses delay and Doppler shifts in cascaded channels for sensing the user, followed by configuration of RIS passive beamforming based on the sensed parameters.
_Low-Complexity Sensing for IoT:_ Despite the popularity of OTFS-based ISAC solutions, existing sensing solutions involve complex algorithms and high computational complexity, which makes them infeasible for IoT devices. Recently, an echo pre-processing approach [15] and new waveform designs that couple OTFS with Frequency Modulated Continuous Waveforms (FMCW) have been proposed for OTFS-ISAC-aided IoT. However, research areas, such as sensing in the presence of timing and frequency offsets due to distributed, non-coherent, and asynchronous transceivers, are yet to be explored.
## V ISAC-IoT Challenges and Future Capabilities
ISAC in IoT systems provides numerous potential benefits, although innovative approaches are required to enable advanced ISAC coexistence. Nevertheless, ISAC opens up several doors for future applications, research, and integration.
### _Challenges and Solutions_
_Network Reliability and Security:_ Wireless transmissions generally encounter impairments including attenuation, interference, noise, etc., and ISAC integration necessitates a robust network implementation. Some useful tools may include error correction codes or adaptive modulation techniques to maintain reliable connections. Another integration issue arises in the form of data security, e.g., ensuring data confidentiality, device authentication, etc.
_Compliance and Regulatory Considerations:_ Another challenge arises in the form of non-uniform industrial regulations, e.g., data protection, electromagnetic compatibility, safety standards, etc. For seamless integration, ensuring the regulatory-approved behavior of IoT devices is necessary without compromising functionality or usability.
_Cost Effectiveness and Power Management:_ An exponential increase in IoT devices will proportionally affect power consumption, especially for ISAC functions. This requires efficient power management strategies, e.g., duty cycling, low-power components, sleep modes, etc., as well as innovative implementation and research efforts, optimizing hardware components and tracing, minimizing power consumption, embedding suitable communication modules, etc.
_Data Volume Handling and Processing:_ ISAC integration and evolving IoT techniques are expected to generate substantial amounts of data from a wide variety of sensors. This, in turn, requires efficient data handling and processing. Some helpful tools in this regard include data compression, filtering, aggregation, etc. In addition, research efforts are currently focused on advanced data processing techniques such as edge analytics, data fusion, etc.
### _Future Capabilities_
_Edge Computing Integration:_ Forthcoming IoT systems will leverage edge computing, enabling data processing and analysis to occur closer to the data source. This involves deploying edge servers or gateways that employ powerful processors, GPUs, and FPGAs to execute complex algorithms, reducing latency and conserving network bandwidth. This integration requires the development of edge applications, containerization techniques, and orchestration frameworks to manage resources effectively.
_Synergy with AI:_ As discussed above in Section III.B, ISAC combined with the potential of AI will lead to significant advancements, creating devices that can make decisions in real-time based on the data they collect. From an industrial perspective, AI algorithms can analyze sensor data to predict when maintenance is required. Similarly, AI-powered analysis of data received from various IoT sensors can enable doctors to detect early signs of diseases and even predict health issues
before they become critical. The utilization of AI will help in numerous IoT applications including smart cities, supply chains, agriculture, security, smart transport, augmented logistics, environmental monitoring, public safety, etc.
_Multi-Modal Sensing Fusion:_ Multi-modal sensor fusion aims to integrate data from different types of sensors/sources to create a more comprehensive and accurate understanding of the environment. Such integration will allow IoT systems to leverage various sensors to make more accurate decisions. For instance, in a smart city application, capturing and processing data from different traffic cameras and sound sensors will provide a more accurate understanding of traffic conditions.
_Swarm Intelligence and Collaborative Utilization:_ Apart from the above technologies, ISAC-assisted IoT can also benefit from swarm intelligence and collective utilization of massive IoT units and sensor data. Inspired by nature, their collective utilization can significantly benefit IoT networks, e.g., achieving better scalability, adaptability, etc. For instance, IoT devices can communicate and adapt themselves via collective information, enabling them to make group decisions towards optimizing an overall objective.
## VI Conclusion
IoT has evolved the way we interact with technology, and the coexistence enabled by ISAC is intended to revolutionize the IoT experience in 6G. This paper discusses several innovative aspects of ISAC-enabled IoT in 6G. Specifically, this work has emphasized the following contributions: _a)_ comprehensively exploring ISAC-enabled IoT, highlighting its benefits, applications, challenges, and future prospects in 6G-IoT, _b)_ describing the progress of ISAC-IoT in 3GPP standardization for 6G standards, vital real-world use cases of ISAC-IoT, and synergies with key technology enablers such as RIS, AI, NTN, and OTFS modulation, _c)_ highlighting future potentials such as edge computing integration and multi-modal sensing fusion for ISAC-enabled 6G IoT.
|
2309.03989 | CDFSL-V: Cross-Domain Few-Shot Learning for Videos | Few-shot video action recognition is an effective approach to recognizing new
categories with only a few labeled examples, thereby reducing the challenges
associated with collecting and annotating large-scale video datasets. Existing
methods in video action recognition rely on large labeled datasets from the
same domain. However, this setup is not realistic as novel categories may come
from different data domains that may have different spatial and temporal
characteristics. This dissimilarity between the source and target domains can
pose a significant challenge, rendering traditional few-shot action recognition
techniques ineffective. To address this issue, in this work, we propose a novel
cross-domain few-shot video action recognition method that leverages
self-supervised learning and curriculum learning to balance the information
from the source and target domains. To be particular, our method employs a
masked autoencoder-based self-supervised training objective to learn from both
source and target data in a self-supervised manner. Then a progressive
curriculum balances learning the discriminative information from the source
dataset with the generic information learned from the target domain. Initially,
our curriculum utilizes supervised learning to learn class discriminative
features from the source data. As the training progresses, we transition to
learning target-domain-specific features. We propose a progressive curriculum
to encourage the emergence of rich features in the target domain based on class
discriminative supervised features in the source domain. We evaluate our method
on several challenging benchmark datasets and demonstrate that our approach
outperforms existing cross-domain few-shot learning techniques. Our code is
available at https://github.com/Sarinda251/CDFSL-V | Sarinda Samarasinghe, Mamshad Nayeem Rizve, Navid Kardan, Mubarak Shah | 2023-09-07T19:44:27Z | http://arxiv.org/abs/2309.03989v2 | # CDFSL-V: Cross-Domain Few-Shot Learning for Videos
###### Abstract
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples, thereby reducing the challenges associated with collecting and annotating large-scale video datasets. Existing methods in video action recognition rely on large labeled datasets from the same domain. However, this setup is not realistic as novel categories may come from different data domains that may have different spatial and temporal characteristics. This dissimilarity between the source and target domains can pose a significant challenge, rendering traditional few-shot action recognition techniques ineffective. To address this issue, in this work, we propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning to balance the information from the source and target domains. To be particular, our method employs a masked autoencoder-based self-supervised training objective to learn from both source and target data in a self-supervised manner. Then a progressive curriculum balances learning the discriminative information from the source dataset with the generic information learned from the target domain. Initially, our curriculum utilizes supervised learning to learn class discriminative features from the source data. As the training progresses, we transition to learning target-domain-specific features. We propose a progressive curriculum to encourage the emergence of rich features in the target domain based on class discriminative supervised features in the source domain. We evaluate our method on several challenging benchmark datasets and demonstrate that our approach outperforms existing cross-domain few-shot learning techniques. Our code is available at [https://github.com/Sarinda251/CDFSL-V](https://github.com/Sarinda251/CDFSL-V)
## 1 Introduction
Even though deep learning is inspired by the biological brain, in sharp contrast to humans, current deep models rely on large reservoirs of data to learn. The few-shot learning problem [41] is introduced to close this gap, where a learning model should generalize solely based on a handful of training data. In traditional few-shot learning [6], the learning model is initially exposed to an annotated _base dataset_, to learn generic features for the domain of interest. Then, this model is fine-tuned on a few labeled examples (support samples) of the test dataset and consequently evaluated on unlabeled test examples (query samples). However, this classic pipeline assumes the base and test datasets are from the same domain, thus closely related [35].
To mitigate this shortcoming, cross-domain few-shot learning (CDFSL) was proposed in [12], where the base dataset is from a different domain than the test data. Interestingly, it is shown in [25] that standard transfer learning, consisting of pre-training on the base dataset and fine-tuning on test data, can significantly outperform few-shot learning methods in the cross-domain few-shot learning problem. Recently, extra unlabeled test examples were incorporated in addition to the base dataset in [28, 16]. Their approaches push forward cross-domain few-shot learning performance. In this paper, we follow this recent adaptation of CDFSL.
While few-shot learning is widely studied in the computer vision community [26], video few-shot learning is less explored [4]. To the best of our knowledge, current methods in cross-domain few-shot learning are solely focused on image data. In this work, for the first time, we study cross-domain few-shot learning in the video domain. A common scheme in video few-shot learning [45] utilizes an implicit assumption about video data, such as: a common mode of variation, similar temporal dynamics, or class distinctive features. However, in cross-domain few-shot learning, the base dataset can be drastically different from the
target data. For instance, the RareAct dataset [22] contains atypical actions which significantly deviate from the common actions present in the standard video datasets in terms of spatio-temporal dynamics, and the Diving48 dataset [21] contains temporally fine-grained actions which have very similar spatial layout. Therefore, it is challenging to apply standard video few-shot learning methods to these datasets.
In the context of cross-domain few-shot learning, _supervised_ pre-training on the source dataset has emerged as a common first step for most techniques [28, 16]. This is because a strong source backbone can significantly contribute to the overall performance of the model [25]. However, simply relying on supervised pre-training may not be sufficient, especially when the target domain is substantially different from the source domain. To address this, in this work we propose to perform _self-supervised_ pre-training on both source and target data to learn generic features. To be particular, we use recently proposed masked auto-encoder based [36] feature learning to learn generic features which are highly scalable and show better generalization performance. Nevertheless, the challenge remains on how to balance the learning of generic features (from source and target domain) and class discriminative features from the source dataset.
To this end, we propose a curriculum learning scheme by designing a progressive curriculum that balances learning the discriminative information from the source dataset with the generic information learned from the target domain. In the initial phase of the training, our curriculum utilizes supervised cross-entropy loss to learn class discriminative features from the source data. As the training progresses, we strive to transition to the target domain through learning discriminative features in the target domain. To achieve this, we devise a schedule that increases the weight of a consistency loss to help with this transition. We conduct extensive experiments to demonstrate the effectiveness of our proposed approach on various benchmark datasets. Our experiments show significant improvements in cross-domain few-shot action recognition performance.
In summary, our work makes the following major contributions,
* We propose a new, challenging, and realistic problem called cross-domain few-shot learning in videos (CDFSL-V).
* We propose a novel solution based on self-supervised feature learning and curriculum learning for this challenging problem, which can address the difficulties associated with CDFSL-V by striking a balance between learning generic and class-discriminative features.
* We conduct extensive experimentation on multiple benchmark datasets. Our proposed method outperforms the existing methods in cross-domain few-shot learning, as well as, strong baselines based on transfer learning.
## 2 Related Work
**Few-Shot Classification** Few-shot Learning methods can be split into two main categories: Meta-Learning and Transfer Learning [27]. The Meta-Learning [32, 31] framework provides a very common technique for few-shot learning algorithms, where the training procedure mimics the evaluation procedure. Just as few-shot evaluation consists of multiple few-shot episodes on the target test set, meta-learning techniques train a model in an episodic fashion on a meta-train set. In meta-learning, this is done to encourage fast adaptation on the meta-test set. The other main approach in few-shot learning is Transfer Learning, where a model is pretrained on the source dataset before being fine-tuned on
Figure 1: On the left, we have the existing benchmark for CDFSL in the image domain. On the right, we present our proposed benchmark for CDFSL in the video domain. Our benchmark includes tasks from diverse target datasets, which require recognizing novel actions from different data distributions (UCF101, HMDB51), strong temporal reasoning (SSV2), atypical action understanding (RareAct), and fine-grained temporal understanding (Diving48).
the target data for few-shot evaluation [35, 19, 40, 6]. Methods that use transfer learning aim to leverage as much information as possible from the source dataset in order to produce easily transferable features to be adapted to the target dataset. Both methods assume some degree of similarity between the source and target datasets, hinging on the idea that features that can discriminate classes in the source domain can also discriminate classes in the target domain. When moving from images to videos, the introduction of temporal information adds to the difficulty of the task. OTAM [4] uses temporal alignment to improve few-shot classification for videos, using a distance metric to compare frames of the queries and the support set. STRM [34] introduces a spatio-temporal enrichment module to look at visual and temporal context at the patch and frame level. HYSRM [39] uses a hybrid relation model to learn relations within and across videos in a given few-shot episode. Our method focuses on training an encoder with generalizable features by leveraging unlabeled target data during training through both self-supervised learning and enforcing a consistency loss moderated by curriculum learning.
**Self-Supervised Learning** Self-supervised learning has been shown to improve performance when combined with supervised learning by creating more transferable features [36]. These more generalizable features are extremely important in the cross-domain few-shot classification task, due to the domain gap and the scarcity of labels. For self-supervised video classification, existing methods use contrastive learning to improve visual representation learning, at the cost of increased data augmentation and batch sizes [37, 23, 43, 1]. Masked auto-encoders [14] mask patches of an image and attempt to reconstruct the missing parts. VideoMAE [36] extends this to video by adding space-time attention via a ViT backbone, providing a data-efficient solution to self-supervised video pretraining. We use VideoMAE as the backbone of our method.
**Curriculum Learning** Curriculum learning involves prioritizing easier samples (or tasks) during training before increasing the weights of the more difficult samples [2]. Typically, training examples are sorted by a difficulty metric, and used to create mini-batches of increasing difficulty for training the model [13]. This method has shown success when applied in computer vision, specifically when used with transfer learning [42].
For our problem setup, we work with two datasets, the labeled source dataset and the unlabeled target data (for which we generate pseudo-labels), simultaneously during training. In our method, we leverage curriculum learning such that we focus on the large labeled source dataset at the beginning of training before eventually shifting towards equal weighting of the source and target losses.
**Cross-Domain Few-Shot Learning** Similar to open-world semi-supervised learning [3, 11, 30, 29] that allows semi-supervised learning methods to perform on loosely related domains, the cross domain few shot learning framework permits base and test data that belong to different domains. BS-CDFSL [12] introduces a benchmark for the Cross-Domain Few-Shot problem for images. It consists of miniImageNet as the source dataset, and four target datasets of increasing difficulty: CropDisease [24], EuroSAT [15], ISIC [7], and ChestX [38]. STARTUP [28] attempts to solve this problem by learning a teacher model on the source dataset that is applied to generate pseudo-labels for the target dataset. Eventually, a new model on both the labeled source set and pseudolabeled target set is trained. Dynamic Distillation [16] improves upon this by updating the teacher model as a moving average of the student's weights. Both of these methods exhibit redundancy in the supervised training across their stages that we strive to eliminate in our approach.
While source-target dataset pairs such as UCF-HMDB51 from the SDAI Action II dataset [9] and the UCF-OlympicSport datasets [17] have been proposed [8], these dataset pairs share classes across domains, which is not representative of the CDFSL problem. We take inspiration from the BS-CDFSL benchmark and use Kinetics-100 [44] as our source, with UCF101 [33], HMDB51 [20], Something-SomethingV2 [10], Diving48 [21], and RareAct [22] as our target datasets. We ensure that we remove any class overlap between the source and target datasets.
## 3 Methodology
This section elaborates on our approach to tackle the CDFSL problem in the video domain. At the core of our method, we learn features from the source and target data in a supervised and self-supervised fashion, respectively. Furthermore, we propose a progressive curriculum to encourage the emergence of rich features in the target domain based on class discriminative supervised features in the source domain. In the following, first, we discuss our problem formulation (Sec. 3.1). After that, we present our approach involving self-supervised feature learning and curriculum learning (Sec. 3.2).
### Problem Formulation
The Cross-Domain Few-Shot Video Classification task requires the classification of an unlabeled query video belonging to the target dataset \(\mathcal{D}_{T}\). A large labeled source dataset \(\mathcal{D}_{S}\) is available during training. \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\) have no shared classes and usually exhibit a significant domain gap. The unlabeled training split of \(\mathcal{D}_{T}\), denoted \(\mathcal{D}_{T_{U}}\), is leveraged during training. For evaluation, multiple few-shot episodes are sampled from the testing split of \(\mathcal{D}_{T}\). These episodes consist of a small labeled support set \(\mathcal{S}\subset\mathcal{D}_{T}\), consisting of a few labeled samples of each target class in the episode, and a disjoint query set \(\mathcal{Q}\subset\mathcal{D}_{T}\) to be classified. In the \(N\)-way \(K\)-shot classification setting, \(\mathcal{Q}\) and \(\mathcal{S}\) share the same \(N\) classes sampled from \(\mathcal{D}_{T}\), with \(\mathcal{S}\) having \(K\) labeled examples for each class.
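As a concrete illustration of this evaluation protocol, the following minimal sketch samples one \(N\)-way \(K\)-shot episode; `videos_by_class` is a hypothetical mapping from target test classes to their clips, and the query-set size is an illustrative choice.

```python
# Sample one N-way K-shot episode from the target test split.
import random

def sample_episode(videos_by_class, n_way=5, k_shot=5, n_query=15):
    classes = random.sample(sorted(videos_by_class), n_way)    # N novel classes
    support, query = [], []
    for label, c in enumerate(classes):
        clips = random.sample(videos_by_class[c], k_shot + n_query)
        support += [(clip, label) for clip in clips[:k_shot]]  # K labeled clips per class
        query += [(clip, label) for clip in clips[k_shot:]]    # disjoint query clips
    return support, query
```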
### Approach
#### 3.2.1 Self-Supervised Feature Learning
A fundamental challenge in solving few-shot problems is learning generalizable representations. Self-supervised learning is a successful representation learning paradigm and has therefore been readily applied to the few-shot learning problem, even though it has yet to be applied to CDFSL. Following the success of VideoMAE, we apply the VideoMAE model in our pretraining phase to extract strong representations from video data. To this end, a rich set of features is extracted from the combination of the source data and the unlabeled target data, \(\mathcal{D}_{S}\bigcup\mathcal{D}_{T_{U}}\). After this step, the encoder from VideoMAE, \(f\), is utilized as our primary feature extractor.
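The following toy sketch illustrates this pretraining phase on the union \(\mathcal{D}_{S}\cup\mathcal{D}_{T_{U}}\): labels are ignored, a high fraction of space-time tubes is masked, and the model is trained to reconstruct the masked content. The random tensors and the two-layer encoder/decoder are stand-ins for real clips and the actual ViT-S VideoMAE architecture (which also drops, rather than zero-masks, the hidden tokens).

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Random stand-ins for tokenized clips, shape (num_clips, tubes, tube_dim); a real
# pipeline would embed 16-frame videos from D_S and D_T_U into space-time tubes.
source_clips = TensorDataset(torch.randn(256, 196, 128))
target_clips = TensorDataset(torch.randn(256, 196, 128))
loader = DataLoader(ConcatDataset([source_clips, target_clips]),  # D_S U D_T_U
                    batch_size=32, shuffle=True)

encoder = nn.Sequential(nn.Linear(128, 64), nn.GELU())  # toy stand-in for the ViT-S encoder
decoder = nn.Linear(64, 128)                            # toy reconstruction head
opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=0.1)

for (tokens,) in loader:
    mask = torch.rand(tokens.shape[:2]) < 0.9    # hide ~90% of tubes (high masking ratio)
    visible = tokens * (~mask).unsqueeze(-1)     # zero-masking as a simplification
    loss = ((decoder(encoder(visible)) - tokens) ** 2)[mask].mean()  # reconstruct hidden tubes
    loss.backward()
    opt.step()
    opt.zero_grad()
```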
#### 3.2.2 Curriculum Learning
Next, in our framework, we further improve the quality of the extracted features with the help of the ground-truth labels of the source data. To this end, we train a classifier \(g\) on top of \(f\), whose output dimension equals the number of classes in the source domain. Training a classifier in such a supervised manner makes the self-supervised representation more compact and class discriminative, particularly in the source domain. Ideally, we want to achieve the same in the target domain. However, doing so is difficult without access to the ground-truth labels in the target domain. To overcome this challenge and to better utilize the target data, we minimize a consistency loss for the unlabeled target samples. This consistency loss is minimized in the output space of the source domain, where pseudo-labels are generated using a teacher network.
**Supervised Representation Learning.** To extract the discriminative features from the source dataset, we first train a student model \(f_{s}\) based on a supervised loss on the labeled source data. We use the commonly used cross-entropy loss as the supervised loss, \(\mathcal{L}_{sup}\), defined in the following,
\[\mathcal{L}_{sup}=\mathcal{L}_{CE}(\mathrm{Softmax}(f_{s}(\mathbf{x}_{i})),\mathbf{y}_{i})=-\sum_{i=1}^{M}\mathbf{y}_{i}\log(\mathrm{Softmax}(f_{s}(\mathbf{x}_{i}))), \tag{1}\]
where, \(\mathbf{x}_{i}\in\mathcal{D}_{S}\), \(M=|\mathcal{D}_{S}|\), and \(\mathbf{y}_{i}\) is the ground-truth label. The learned discriminative features provides us with more generalizable features to the target domain.
**Unsupervised Representation Learning.** For the unlabeled data from the target domain, we apply pseudo-labels to increase the generalizability of the learned features in an unsupervised fashion. To this end, after obtaining the pseudo-labels, we compute a consistency loss. The consistency loss ensures that the representations from the student model match the representations from a teacher network. We create a teacher model \(f_{t}\) by taking an exponential moving average of the student model in the following manner,
\[f_{t}^{(i+1)}=\alpha f_{t}^{(i)}+(1-\alpha)f_{s}^{(i+1)}, \tag{2}\]
where, \(\alpha\) is the exponential decay parameter and \(i\) refers to the \(i\)th iteration.
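A minimal sketch of this update rule (iterating only over `parameters()`; buffer handling is omitted), with \(\alpha=0.9\) as used in Sec. 4:

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, alpha=0.9):
    """Eq. (2): f_t <- alpha * f_t + (1 - alpha) * f_s, applied parameter-wise."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```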
This consistency loss ensures that the \(f_{s}\) predictions for unlabeled target data match with the pseudo-labels generated from \(f_{t}\). Additionally, following the success of DINO [5], we want to extract features that can learn a local-to-global relationship between data. To this end, each batch of unlabeled target data \(\mathbf{X}\in\mathcal{D}_{T_{U}}\) is transformed into two separate sets to make strong and weak augmented copies of the batch: \(\mathbf{X}_{str}\) and \(\mathbf{X}_{weak}\). To be specific, we use temporally consistent \(\mathrm{RandomResizeCrop}\) and \(\mathrm{RandomHorizontalFlip}\) as a set of weak augmentations, while the set of strong augmentations consists of temporally consistent \(\mathrm{RandomColorJitter}\), \(\mathrm{RandomGreyscale}\), and \(\mathrm{RandomGaussianBlur}\) in addition to the set of weak augmentations.
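A sketch of such temporally consistent augmentations built on torchvision's functional API: each transform draws its random parameters once per clip and applies them identically to every frame, so the augmentation does not flicker across time. The parameter ranges and probabilities are illustrative assumptions, not the paper's exact settings.

```python
import random
import torch
import torchvision.transforms.functional as TF

def weak_aug(clip, out_size=112):            # clip: (T, C, H, W) float tensor
    _, _, H, W = clip.shape
    ch, cw = random.randint(int(0.6 * H), H), random.randint(int(0.6 * W), W)
    top, left = random.randint(0, H - ch), random.randint(0, W - cw)
    # one crop geometry for the whole clip
    clip = torch.stack([TF.resized_crop(f, top, left, ch, cw, [out_size, out_size])
                        for f in clip])
    if random.random() < 0.5:                # one flip decision for the whole clip
        clip = torch.flip(clip, dims=[-1])
    return clip

def strong_aug(clip):
    clip = weak_aug(clip)
    b, c, s = (1 + 0.4 * (random.random() - 0.5) for _ in range(3))  # shared jitter factors
    clip = torch.stack([TF.adjust_saturation(TF.adjust_contrast(TF.adjust_brightness(f, b), c), s)
                        for f in clip])
    if random.random() < 0.2:
        clip = torch.stack([TF.rgb_to_grayscale(f, num_output_channels=3) for f in clip])
    if random.random() < 0.5:
        clip = torch.stack([TF.gaussian_blur(f, kernel_size=9) for f in clip])
    return clip
```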
To compute the consistency loss, first the weakly augmented unlabeled target data is passed through the teacher model to get the teacher outputs \(f_{t}(\mathbf{X}_{weak})\). These outputs are then sharpened by a temperature \(\tau\) to form pseudo-labels for the target data after performing the \(\mathrm{Softmax}\) operation. The consistency loss is a cross-entropy loss between the student outputs of the strongly augmented videos \(f_{s}(\mathbf{X}_{str})\) and the sharpened teacher outputs which is defined in the following,
\[\mathcal{L}_{con}=-\sum\hat{\mathbf{Y}}\log(\mathrm{Softmax}(f_{s}(\mathbf{X}_{str}))), \tag{3}\]
where, \(\hat{\mathbf{Y}}=\mathrm{Softmax}(f_{t}(\mathbf{X}_{weak})/\tau)\).
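In code, Eq. (3) amounts to a soft-label cross-entropy between the sharpened teacher outputs and the student predictions. A minimal sketch, using \(\tau=0.1\) as in Sec. 4:

```python
import torch
import torch.nn.functional as F

def consistency_loss(f_s, f_t, x_weak, x_str, tau=0.1):
    with torch.no_grad():                              # stop-gradient on the teacher
        pseudo = F.softmax(f_t(x_weak) / tau, dim=-1)  # sharpened soft pseudo-labels
    log_p_s = F.log_softmax(f_s(x_str), dim=-1)        # student on strong views
    return -(pseudo * log_p_s).sum(dim=-1).mean()      # soft-label cross-entropy
```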
The overall training objective for updating the parameters of the student network is a weighted sum of the supervised and consistency losses, defined in the following,
\[\mathcal{L}_{total}=\mathcal{L}_{sup}+\lambda\mathcal{L}_{con}, \tag{4}\]
where, the consistency loss scaling parameter \(\lambda\) controls the relative contribution of consistency loss to the total loss.
While previous CDFS methods have applied both supervised loss and consistency loss, they applied them in separate stages [16, 28]. One of the unique characteristics of our approach is to combine these losses through curriculum
learning, which not only simplifies the training pipeline but also improves performance.
In our curriculum, we adjust the difficulty of the consistency objective by tuning its scaling parameter \(\lambda\) following a pre-defined schedule. In particular, at the beginning of training, we set the consistency loss scaling parameter, \(\lambda\), to a very low value. This makes the beginning of the training similar to performing supervised training solely on the source dataset. As the training progresses, we emphasize the importance of consistency by increasing \(\lambda\) over the course of the training, which encourages the emergence of local-to-global features that can potentially generalize better in the target domain. Additionally, to facilitate the transition from the source domain to the target domain, we decay the learning rate of the classifier in the student model over the course of training. Initially, this classifier is trained at the same rate as the rest of the student model. This learning rate is decreased over the course of training to emulate freezing the classifier after supervised training on the source data. Once the training is complete, the student model is kept and the classifier is discarded. Using the labeled support set of the target data, a new logistic regression layer \(c^{\prime}\) is learned on top of the student model. The model can now be used for inference on the target query videos. The entire procedure is summarized in Algorithm 1.
```
\(f_{s}\), \(f_{t}\): student and teacher models with parameters \(\theta_{s}\), \(\theta_{t}\)
\(\tau\): teacher temperature
\(\alpha\): momentum rate to update teacher
for (\(\mathbf{x}_{s}\), \(\mathbf{y}_{s}\)), \(\mathbf{x}_{t}\) in loader do
    sample \(\mathbf{x}_{s}\), \(\mathbf{y}_{s}\) from base data; sample \(\mathbf{x}_{t}\) from target data
    \(\mathcal{L}_{sup}=\mathcal{L}_{CE}(f_{s}(\mathbf{x}_{s}),\mathbf{y}_{s})\)
    \(\mathbf{x}_{weak},\mathbf{x}_{str}=\mathrm{WeakAug}(\mathbf{x}_{t}),\mathrm{StrongAug}(\mathbf{x}_{t})\)
    \(out_{t}\), \(p_{s}=f_{t}(\mathbf{x}_{weak}),\mathrm{Softmax}(f_{s}(\mathbf{x}_{str}))\) \(\triangleright\) teacher logits and student predictions
    \(p_{t}=\mathrm{Softmax}(out_{t}/\tau,dim=-1).detach()\) \(\triangleright\) sharpen + stop-grad to form pseudo-labels
    \(\mathcal{L}_{con}=\mathcal{L}_{CE}(p_{s},p_{t})\) \(\triangleright\) consistency loss
    \(\mathcal{L}_{total}=\mathcal{L}_{sup}+\lambda\mathcal{L}_{con}\)
    \(\theta_{s}\leftarrow\theta_{s}-\beta\nabla_{\theta_{s}}\mathcal{L}_{total}\) \(\triangleright\) gradient-descent update of the student
    \(\theta_{t}\leftarrow\alpha\theta_{t}+(1-\alpha)\theta_{s}\) \(\triangleright\) EMA update of the teacher
end for
```
**Algorithm 1** Curriculum Learning for CDFSL-V
## 4 Experiments
In this section we evaluate our proposed method against strong transfer-learning baselines and recent techniques for cross-domain few-shot learning. For a thorough comparison, we utilize a variety of target domains to capture the performance of different methods across a range of cross-domain scenarios. Our main result is that our approach outperforms existing state-of-the-art cross-domain few-shot learning techniques. Finally, we conduct an ablation study and analyse the significance of the different components of our approach.
### Datasets
We use the Kinetics-100 [44] train split as our source dataset. It contains 100 classes of the original dataset, further split into 64, 12, and 24 class subsets for train, validation, and test, respectively, for few-shot action recognition. We also conduct experiments on the larger Kinetics-400 [18] (Table 1). Due to class overlap between Kinetics and two of our target datasets, UCF101 and HMDB51, we remove the overlapping classes from the source dataset. Without this removal, supervised training on classes shared between the source and target datasets would be an unfair representation of the cross-domain few-shot problem setting. The target datasets, in order of increasing difficulty, are: UCF101, RareAct, HMDB51, Something-SomethingV2, and Diving48. UCF101 and HMDB51 are most similar to the Kinetics datasets in terms of domain gap; they even have overlapping classes that needed to be removed to make them appropriate target datasets. However, that is not the case for the other target datasets. For instance, the Something-SomethingV2 dataset has 87 classes, consisting of actions doing 'something' to 'something'. This dataset primarily contains zoomed-in videos focusing on the object instead of the person, which is generally not the case for the actor-centric Kinetics dataset. Diving48, on the other hand, is a dataset for fine-grained action recognition with 48 different dives, each comprising a different sequence of complex sub-actions. RareAct is very different from all other source and target datasets, since it contains unusual actions like 'blend phone' and is generally used for evaluating few/zero-shot action compositionality. For evaluation, we compute the 5-way 5-shot accuracy on the test split of each target dataset.
### Experiment Details
We use the encoder network from VideoMAE with a ViT-S backbone for our feature extraction. For videos we sample 16 frames at a \(112\times 112\) resolution. We train on the combined training data of both the source and target datasets without labels for \(400\) epochs at a batch size of \(32\) using SGD optimizer at a learning rate of \(0.1\). After initializing the student and teacher models using the VideoMAE encoder, the student is trained for \(200\) epochs on the combined supervised and consistency losses. The student is updated directly using SGD with a learning rate of \(0.01\), and the teacher is updated as a moving average of the student weights with a momentum of \(0.9\). The teacher output is sharpened at a temperature of \(0.1\) to be used as pseudo-labels for the student output on the unlabeled target data.
The batch size used for the curriculum learning stage is \(16\). Over the course of training, we set the consistency loss scaling parameter, \(\lambda\) as:
\[\lambda_{cons}=\frac{\arctan(10(x-0.5))}{\pi}+0.5 \tag{5}\]
where \(x\) is the ratio of the current epoch to the total number of training epochs. This keeps the weight of the consistency loss very small at the start of training, while making it comparable to the supervised loss towards the end. Similarly, the learning rate for the student classifier head (the classifier layer following \(f_{s}\)) is decayed according to \(\lambda_{cls}=\frac{\arctan(-10(x-0.5))}{\pi}+0.5\), so that the classifier head learns primarily from the supervised loss early on and is effectively frozen towards the end of training.
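Both schedules are simple to implement; the following plain-Python sketch reproduces Eq. (5) and its mirror image for the classifier learning rate (an illustration, not the authors' code).

```
import math

def lambda_cons(epoch, total_epochs):
    """Consistency-loss weight (Eq. 5): near 0 early, near 1 late."""
    x = epoch / total_epochs
    return math.atan(10.0 * (x - 0.5)) / math.pi + 0.5

def lambda_cls(epoch, total_epochs):
    """Mirror-image schedule for the student classifier learning rate:
    near 1 early (learn from the supervised loss), near 0 late
    (effectively freeze the classifier head)."""
    x = epoch / total_epochs
    return math.atan(-10.0 * (x - 0.5)) / math.pi + 0.5

# Example for a 200-epoch run:
for e in (0, 100, 200):
    print(e, round(lambda_cons(e, 200), 3), round(lambda_cls(e, 200), 3))
# epoch 0   -> lambda_cons ~ 0.063, lambda_cls ~ 0.937
# epoch 100 -> both 0.5
# epoch 200 -> lambda_cons ~ 0.937, lambda_cls ~ 0.063
```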
For few-shot adaptation, the student encoder is retained and the student classifier head is discarded. We then learn a new logistic regression classifier on top of the encoder using a sampled 5-way 5-shot support set from the target testing data. We report the accuracy on the remaining testing data for the selected classes.
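A minimal sketch of this adaptation step is shown below; `encoder` is a hypothetical callable mapping one video to a feature vector (e.g. the retained student backbone with pooled outputs), and all names are illustrative.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_adapt(encoder, support_videos, support_labels, query_videos):
    """Fit a fresh logistic-regression head on the 5-way 5-shot support set,
    then classify the query videos. The student's original classifier head
    has already been discarded."""
    z_support = np.stack([encoder(v) for v in support_videos])  # (25, d)
    z_query = np.stack([encoder(v) for v in query_videos])
    clf = LogisticRegression(max_iter=1000).fit(z_support, support_labels)
    return clf.predict(z_query)
```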
### Kinetics-400 Experiment
Random initialization is used as the baseline for this experiment and entails learning a logistic regression classifier on top of an _untrained_ VideoMAE encoder.
Figure 3: Results with varying size of source data.
Figure 2: Our goal is to solve the cross-domain few-shot learning task for the target dataset, leveraging the labeled base dataset alongside unlabeled target data. Our method has three stages. **1**: Self-supervised pretraining of an autoencoder on both the source and target data without labels is performed. **2**: The encoder is used to initialize a student and teacher model for curriculum learning. We compute a supervised loss on the labeled source data. For the consistency loss, we generate pseudo-labels using the sharpened teacher output for weakly augmented target videos. The pseudo-labels are then used with the student output on strong augmentations of the same videos to calculate the consistency loss. The supervised and consistency losses are both used to directly update the student, while the teacher is updated as a moving average of the student’s weights. **3**: For few-shot evaluation, the student classifier is replaced with a few-shot classifier that is fine-tuned on the labeled target support set. This classifier can then be used to classify the target query videos.
We compare our method to two cross-domain few-shot methods for images, as no other methods exist to solve the CDFSL problem for videos. For this experiment, we include self-supervised pre-training for Dynamic Distillation and STARTUP, denoting them as Dynamic Distillation++ and STARTUP++, respectively. In addition, we compare our method to two few-shot methods for videos: STRM [34] and HYSRM [39]. Our method outperforms the previous state-of-the-art method, Dynamic Distillation [16], across all 5 target datasets while using the Kinetics-400 dataset as the source. Additionally, the absolute improvement in classification performance is consistent with the aforementioned relative difficulty of each of the target datasets, with Diving48 improving the least.
Our main result is that we do better than existing CDFSL methods for images, as well as the few-shot methods for videos. As shown in Table 1, we outperform Dynamic Distillation by 2.2% on UCF101, 5.2% on HMDB51, 5.4% on SSV2, 2.7% on RareAct, and 2.8% on Diving48, averaging to a 3.4% increase. Compared to STARTUP, STRM, and HYSRM, our method outperforms by 6.19%, 15.64%, and 11.8%, respectively. Interestingly, even our modified image baselines (STARTUP++ and Dynamic Distillation++) outperform these video few-shot methods, highlighting the inadequacy of traditional video few-shot approaches for this challenging cross-domain few-shot problem.
### Kinetics-100/200/300 Experiments
We repeat the experiments using Kinetics-100, Kinetics-200, and Kinetics-300 as the source datasets, and compare our method's performance across them. In this experiment, we evaluate how increasing the number of classes in the source dataset impacts performance. As shown in Fig. 3, increasing the size of the source dataset consistently improves performance on all datasets.
### Ablation and Analysis
In this section we analyze the importance of the different components of our approach. In particular, we study the effect of increasing the size of the source dataset, the effect of the sharpening temperature, and the impact of curriculum learning on the performance of the method.
**Increasing the size of the source dataset** In the few-shot learning and transfer learning literature, it is common to utilize a source domain with a significantly larger number of classes than the target domain [12]. A dataset with a larger number of classes can capture a more diverse set of features, which facilitates its application to less diverse datasets. For example, in the BS-CDFSL benchmark for images, miniImageNet, the source dataset, has 100 classes. The target image datasets CropDisease, EuroSAT, ISIC, and ChestX have \(38\), \(10\), \(5\), and \(15\) classes, respectively. In that setup, the source dataset has more than double the number of classes of the largest target dataset. In comparison, the source dataset in our video benchmark has \(61\) classes, which is fewer than two of the target datasets: UCF101 with \(101\) and Something-SomethingV2 with \(87\). In this experiment, we explore the impact of the size of the source dataset in CDFSL. To be more specific, we use the larger Kinetics-400 dataset instead of Kinetics-100 as the source.
Both STARTUP and Dynamic Distillation make use of supervised pretraining on the source dataset. For the experiments on Kinetics-400, we supplement the pretraining stages of both methods with self-supervised pretraining as well, to highlight the effect of our curriculum-based schedule. In Table 1, we observe a drastic improvement in performance when we utilize a more diverse source dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Method & UCF101 & SSV2 & HMDB51 & Diving48 & RareAct & Average \\ \hline \hline Random Initialization & 23.83 & 16.02 & 12.08 & 15.37 & 16.57 & 16.78 \\ STARTUP++ & 60.82 & 39.60 & 44.71 & 14.92 & 45.22 & 41.05 \\ Dynamic Distillation++ & 63.26 & 44.50 & 48.04 & 16.23 & 47.01 & 43.81 \\ STRM & 42.33 & 35.01 & 24.98 & 16.69 & 39.01 & 31.60 \\ HYSRM & 45.65 & 40.09 & 29.81 & 17.57 & 44.27 & 35.49 \\
**Ours** & **65.42** & **49.92** & **53.23** & **17.84** & **49.80** & **47.24** \\ \hline \end{tabular}
\end{table}
Table 1: 5-way 5-shot accuracy using Kinetics-400 as the source dataset. We use STARTUP++ and Dynamic Distillation++ to denote that these methods include self-supervised pretraining, which was not used in their original papers.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Method, Source Dataset: Kinetics-100 & UCF101 & HMDB51 & SSV2 & Diving48 & RareAct & Average \\ \hline \hline Equal Loss Weighting & 32.02 & 27.39 & 15.34 & 16.07 & 33.67 & 24.90 \\ No Temperature Sharpening & 34.01 & 28.18 & 15.21 & 16.77 & 33.80 & 25.59 \\ Self-Supervised Training & 37.54 & 25.09 & 16.21 & 17.14 & 29.58 & 25.11 \\ Supervised Training & 32.06 & 23.86 & 14.40 & 16.16 & 31.15 & 23.53 \\
**Ours** & **36.53** & **29.80** & **17.21** & **16.37** & **33.91** & **26.82** \\ \hline \end{tabular}
\end{table}
Table 2: The effect of removing different components of our proposed method.
Interestingly, we further notice an increase in the performance of our method relative to Dynamic Distillation.
**Temperature Sharpening Analysis** As in STARTUP [28], we leverage the unlabeled target data during training by using a consistency loss. We use the teacher model to create the targets for this loss, dividing the teacher output by the temperature parameter \(\tau\) to sharpen it before using it as pseudo-labels. As in Dynamic Distillation [16], sharpening of the labels is used to encourage low-entropy predictions from the student.
We study the impact of temperature sharpening by setting the temperature parameter to \(1\) (the default value, taken from Dynamic Distillation, is \(0.1\)), making the teacher output completely unsharpened. As shown in the second row of Table 2, removing the temperature sharpening reduces performance on almost all datasets (the exception being Diving48, with a \(0.4\%\) increase), with an average decrease of \(1\%\) compared to our original method. Temperature sharpening therefore has a slight but positive impact in our CDFSL setup for videos.
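The effect of sharpening is easy to verify numerically; the toy example below (with illustrative logits, not values from the paper) shows that lowering \(\tau\) yields peakier, lower-entropy pseudo-labels.

```
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0])
for tau in (1.0, 0.5, 0.1):
    p = F.softmax(logits / tau, dim=-1)
    entropy = -(p * p.log()).sum()
    # Lower tau -> higher max probability and lower entropy,
    # i.e. sharper pseudo-labels.
    print(f"tau={tau}: max prob={p.max().item():.3f}, "
          f"entropy={entropy.item():.3f}")
```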
**Pretraining Baselines** It has been shown that pretraining contributes significantly to few-shot learning performance [25]. To examine how much of our performance is attributable to this, we compare established transfer-learning baselines with multiple pretraining configurations followed by few-shot adaptation. Self-supervised training refers to self-supervised training alone on the combined source and target datasets without labels, and supervised training is simply training on the labeled source dataset. In rows 3 and 4 of Table 2, we can see the contribution of each pretraining technique. For most of the datasets, using only self-supervised pretraining outperforms using supervised pretraining, the exception being RareAct. On average, our method performs 1.7% better than the self-supervised baseline and 3.3% better than the supervised baseline.
**Impact of Curriculum Learning** The motivation behind curriculum learning is to ease the training of the model by focusing more on easier data first. For our problem setup where we leverage unlabeled target data alongside the labeled source, we begin with focusing more on the supervised source loss as it is an easier task than matching target videos to pseudo-labels in the source domain. Once the model has sufficiently learned relationships from the source dataset, the importance of the target consistency loss can increase to help improve the adaptation.
We use \(\lambda_{cons}\) to scale the consistency loss during training, as shown in Eq. 5. To analyze the effect of enforcing the curriculum scaling, we compare against keeping \(\lambda_{cons}\) at \(1\) for the entirety of training, i.e. weighting the supervised and consistency losses equally throughout. Additionally, we train our model at temperatures of \(0.5\), \(1.5\), \(5\), and \(10\), as shown in Figure 4. Weighting both losses equally results in an average drop in performance of \(1.6\%\), showing that the curriculum improves performance.
## 5 Conclusion
In this paper, we addressed the problem of cross-domain few-shot action recognition in videos, which is a challenging and realistic problem with several practical applications in fields such as robotics. We proposed a novel approach based on self-supervised feature learning and curriculum learning to address the challenges associated with this problem. Our approach strikes a balance between learning generic and class-discriminative features, which significantly improves the few-shot action recognition performance. We conducted extensive experiments on various benchmark datasets, where our proposed method outperforms current cross-domain few-shot learning methods in the image domain and few-shot learning methods in the video domain. Our work contributes to the computer vision community by introducing a new problem and providing a novel solution to address it. We hope that this work will inspire further research in this direction and help advance the state-of-the-art in few-shot action recognition.
Figure 4: Temperature parameter experiments. We use Kinetics-100 as the source dataset and vary the sharpening temperature for the teacher pseudo-labels. As the temperature increases (and sharpness decreases) the performance tends to decrease.
## 6 Acknowledgements
This research is based upon work supported in part by the Office of the Director of National Intelligence (IARPA) via 2022-21102100001. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the US Government. The US Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2309.15655 | Rediscussion of eclipsing binaries. Paper XIV. The F-type system V570
Persei | V570 Per is a binary star system containing two F-type stars in a 1.90 d
period circular orbit. It shows shallow partial eclipses that were discovered
from its Hipparcos light curve. We present an analysis of this system based on
two sectors of high-quality photometry from the NASA Transiting Exoplanet
Survey Satellite (TESS) mission, and published spectroscopic light ratio and
radial velocity measurements. We find masses of 1.449 +/- 0.006 and 1.350 +/-
0.006 Msun, and radii of 1.538 +/- 0.035 and 1.349 +/- 0.032 Rsun. The radius
measurements are set by the spectroscopic light ratio and could be improved by
obtaining a more precise light ratio. The eclipses in the TESS data arrived 660
+/- 30 s later than expected, suggesting the presence of a faint third body on
a wider orbit around the eclipsing system. Small trends in the residuals of the
fit to the TESS light curve are attributed to weak starspots. The distance to
the system is close to the Gaia DR3 value, but the Gaia spectroscopic orbit is
in moderate disagreement with the results from the published ground-based data. | John Southworth | 2023-09-27T13:44:32Z | http://arxiv.org/abs/2309.15655v1 | # Rediscussion of Eclipsing Binaries. Paper XIV.
###### Abstract
V570 Per is a binary star system containing two F-type stars in a 1.90 d period circular orbit. It shows shallow partial eclipses that were discovered from its _Hipparcos_ light curve. We present an analysis of this system based on two sectors of high-quality photometry from the NASA Transiting Exoplanet Survey Satellite (TESS) mission, and published spectroscopic light ratio and radial velocity measurements. We find masses of \(1.449\pm 0.006\) and \(1.350\pm 0.006\) M\({}_{\odot}\), and radii of \(1.538\pm 0.035\) and \(1.349\pm 0.032\) R\({}_{\odot}\). The radius measurements are set by the spectroscopic light ratio and could be improved by obtaining a more precise light ratio. The eclipses in the TESS data arrived \(660\pm 30\) s later than expected, suggesting the presence of a faint third body on a wider orbit around the eclipsing system. Small trends in the residuals of the fit to the TESS light curve are attributed to weak starspots. The distance to the system is close to the _Gaia_ DR3 value, but the _Gaia_ spectroscopic orbit is in moderate disagreement with the results from the published ground-based data.
## Introduction
Detached eclipsing binary stars (dEBs) are our main source of measurements of the physical properties of normal stars. The number of dEBs for which precise measurements are available is increasing gradually, as traced by reviews of this subject [1, 2, 3] as well as compiled catalogues [4, 5, 6]. The Detached Eclipsing Binary Catalogue (DEBCat, see footnote 1; ref. [6]) currently lists just over 300 dEBs for which masses and radii are measured to 2% precision or better, helped by the widespread availability of light curves from space telescopes [7].
dEBs are useful in understanding the physical processes that govern the structure and evolution of stars. They have been used to calibrate the amount of convective core overshooting [8, 9, 10] albeit with conflicting results [11], the size of the convective core in massive stars [12], mixing length [13], and the radii of low-mass stars [14, 15]. They are also sources of distance measurements which have been used to calibrate the cosmological distance scale [16, 17].
Footnote 1: [https://www.astro.keele.ac.uk/jkt/debcat/](https://www.astro.keele.ac.uk/jkt/debcat/)
We are currently pursuing a project to increase the number of dEBs with reliable measurements of their masses and radii [18], primarily using new observations from the NASA Transiting Exoplanet Survey Satellite (TESS) mission [19]. TESS has observed thousands of dEBs [20, 21, 22], many of which have available high-quality
radial velocity (RV) measurements. In this context, we present an analysis of the V570 Persei system.
V570 Per (Table 1) is an F-type dEB which was discovered using data from the _Hipparcos_ satellite [23] and given its variable-star name by Kazarovets et al. [24]. It was selected for analysis by Munari et al. [25] in the context of assessing the expected performance of the _Gaia_ satellite in the study of dEBs. These authors used the _Hipparcos_ photometry of V570 Per along with ground-based spectroscopy restricted to the 850-875 nm wavelength range to mimic the expected characteristics of the _Gaia_ observations. They measured the masses of the components of V570 Per to 2.5%, and the radii to low precisions of 10% and 25% due to the large scatter in the _Hipparcos_ data and the shallow eclipses shown by this dEB. Tomasella et al. [26] (hereafter T08) presented a more detailed study of V570 Per based on new ground-based photometry, and the same spectroscopy but this time using the full available 450-948 nm wavelength range. They constrained the model of the light curve using spectroscopically-measured light contributions of the two stars in the \(V\)-band. They determined the atmospheric parameters of the component stars via a \(\chi^{2}\) fit of synthetic spectra to their observed spectra, a process which neglected the systematic errors inherent in this method.
### Observational material
The TESS mission [19] observed V570 Per in sectors 18 (2019/11/02 to 2019/11/27) and 58 (2022/10/29 to 2022/11/26), in both cases in short cadence mode with a 120 s sampling rate. We used the lightkurve package [33] to download these data and reject points flagged as bad. The simple aperture photometry (SAP) and pre-search data conditioning SAP (PDCSAP) data [34] are almost indistinguishable, so we used the SAP data in our analysis for consistency with previous papers in this series.
We converted the data to differential magnitude and subtracted the median
\begin{table}
\begin{tabular}{l l l} _Property_ & _Value_ & _Reference_ \\ Right ascension (J2000) & 03:09:34.94 & [27] \\ Declination (J2000) & +48:38:28.7 & [27] \\ Henry Draper designation & HD 19457 & [28] \\ _Hipparcos_ designation & HIP 1673 & [29] \\ _Gaia_ DR3 designation & 435997252803241856 & [27] \\ _Gaia_ DR3 parallax & \(8.2952\pm 0.0355\) mas & [27] \\ TESS Input Catalog designation & TIC 116991977 & [30] \\ \(B\) magnitude & \(8.55\pm 0.02\) & [31] \\ \(V\) magnitude & \(8.09\pm 0.01\) & [31] \\ \(J\) magnitude & \(7.160\pm 0.026\) & [32] \\ \(H\) magnitude & \(6.948\pm 0.017\) & [32] \\ \(K_{s}\) magnitude & \(6.882\pm 0.020\) & [32] \\ Spectral type & F3 V + F5 V & [26] \\ \end{tabular}
\end{table}
Table 1: _Basic information on V570 Per._
magnitude for further analysis, ending up with 15 256 datapoints from sector 18 and 19 475 from sector 58. On further inspection we found that the first stretches of data from both halves of the sector 18 light curve were affected by instrumental systematics, so we trimmed them by removing data in the intervals [2458790.6,2458792.5] and [2458801.0,2458804.7]. This left a total of 32 719 datapoints over both TESS sectors (Fig. 1).
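For reference, the steps above can be reproduced with a short lightkurve script along the following lines. This is a sketch assuming the default SPOC 120 s products, not the exact pipeline used here; note that TESS timestamps are in BTJD = BJD\({}_{\rm TDB}\) − 2457000.

```
import numpy as np
import lightkurve as lk

# Download the 120 s cadence SAP photometry for both sectors; points with
# bad quality flags are rejected by the default quality bitmask.
lcs = {}
for sector in (18, 58):
    lcs[sector] = lk.search_lightcurve(
        "TIC 116991977", mission="TESS", sector=sector, exptime=120
    ).download(flux_column="sap_flux").remove_nans()

def to_differential_mag(lc):
    """Convert fluxes to magnitudes and subtract the median."""
    mag = -2.5 * np.log10(lc.flux.value)
    return lc.time.value, mag - np.median(mag)

# Trim the systematics-affected stretches at the start of each half of
# sector 18 (BJD intervals quoted in the text, converted to BTJD).
t, dmag = to_differential_mag(lcs[18])
bad = ((t > 2458790.6 - 2457000) & (t < 2458792.5 - 2457000)) | \
      ((t > 2458801.0 - 2457000) & (t < 2458804.7 - 2457000))
t, dmag = t[~bad], dmag[~bad]
```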
We queried the _Gaia_ DR3 database for objects within 2 arcmin of V570 Per. A total of 108 were found, all of which are fainter than V570 Per by at least 7.2 mag in the _Gaia_ \(G\) band. We deduce that the amount of light contaminating the TESS aperture for this dEB is negligible.
Figure 1: TESS short-cadence SAP photometry of V570 Per from sectors 18 (top), and 58 (bottom). The flux measurements have been converted to magnitude units then rectified to zero magnitude by subtraction of the median.
### Light curve analysis
We modelled the light curves from the two sectors both individually and together, using version 43 of the jktebop code‡ [35; 36]. In all cases the parameters of the fit included the fractional radii of the stars (\(r_{\rm A}\) and \(r_{\rm B}\)), expressed as their sum (\(r_{\rm A}+r_{\rm B}\)) and ratio (\(k=r_{\rm B}/r_{\rm A}\)), the orbital inclination (\(i\)), the central surface brightness ratio (\(J\)), the ephemeris (period \(P\) and reference time of primary minimum \(T_{0}\)) and the coefficients of the reflection effect. We define star A to be the one eclipsed at the deeper minimum and star B to be its companion. A circular orbit was assumed based on the appearance of the light curve and of the RVs presented by T08 - when allowing for an eccentric orbit we found a best-fitting eccentricity of \(e=0.0053\) and almost no change in the other parameters. We included a quadratic function versus time for each half-sector to account for slow changes in the brightness of the dEB due to instrumental effects.
Footnote ‡: [http://www.astro.keele.ac.uk/jkt/codes/jktebop.html](http://www.astro.keele.ac.uk/jkt/codes/jktebop.html)
The eclipses are partial and shallow, so the light curve solution suffers from a strong degeneracy between \(k\), \(i\) and \(J\) (e.g. refs.[37] and[38]). This effect was found by T08 when modelling their ground-based photometry, and remains present in the much more extensive and higher-precision TESS data used in the current
Figure 2: Best fit to the TESS sector 18 light curve of V570 Per using jktebop as a function of orbital phase. The residuals are shown on an enlarged scale in the lower panel.
study. We therefore applied a spectroscopic light ratio as a constraint, in the same way as done in our work on V1022 Cas [39] and HD 23642 [40]. The light contributions found by T08 correspond to a light ratio of \(\ell_{\rm B}/\ell_{\rm A}=0.667\pm 0.053\) in the \(V\)-band. We propagated this to the TESS passband using the response function from Ricker et al. [19], theoretical spectra from Allard et al. [41], and the effective temperature (\(T_{\rm eff}\)) values from T08, finding \(\ell_{\rm B}/\ell_{\rm A}=0.703\pm 0.057\).
Limb darkening (LD) was included in the fit [42] using the power-2 law [43] and theoretical LD coefficients [44]. Fitting for the scaling coefficient ("c" in the terminology of Maxted [45]) for both stars yielded determinate values and little change in the other parameters, so was adopted as the default approach.
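For reference, the power-2 law is \(I(\mu)/I(1)=1-c\,(1-\mu^{\alpha})\), where \(\mu\) is the cosine of the angle between the line of sight and the surface normal. A small illustrative sketch evaluating it with the coefficients adopted in Table 2:

```
import numpy as np

def power2_ld(mu, c, alpha):
    """Power-2 limb-darkening law: I(mu)/I(1) = 1 - c * (1 - mu**alpha)."""
    return 1.0 - c * (1.0 - mu**alpha)

mu = np.linspace(0.0, 1.0, 6)
print(power2_ld(mu, c=0.548, alpha=0.498))  # star A (Table 2 values)
print(power2_ld(mu, c=0.516, alpha=0.467))  # star B
# The intensity falls from 1 at disc centre (mu=1) to 1-c at the limb (mu=0).
```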
The amount of third light (\(L_{3}\)) has a significant effect on the best-fitting parameter values. If fitted, it converges to a formally significant but unphysically negative value (\(-0.083\pm 0.018\)) despite the negligible amount of light from nearby stars (see previous section). We therefore fixed it at zero in our default solution, but added contributions to the errorbars based on the change in parameter values by assuming \(L_{3}=2\%\) instead. For information, such an assumption decreases \(r_{\rm A}\) by 1.1% and increases \(r_{\rm B}\) by 0.4%.
The best fits to the light curves from the two sectors are shown in Figs. 2 and 3. These plots show the result of a fit to both sectors simultaneously, but divided
Figure 3: Best fit to the TESS sector 58 light curve of V570 Per using jktebop as a function of orbital phase. The residuals are shown on an enlarged scale in the lower panel.
into individual sectors in the plots. Slow trends in the residuals are apparent in both cases, and are discussed below.
The fitted parameters are given in Table 2. Uncertainties in the parameters were determined using Monte Carlo and residual-permutation simulations [46, 47]. The Monte Carlo errorbars are significantly larger than the residual-permutation alternatives because the latter do not account for the uncertainty in the spectroscopic light ratio. We therefore adopted the Monte Carlo errorbars for all parameters. The dominant source of uncertainty is the spectroscopic light ratio, which could be improved by further observations and analysis.
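The residual-permutation ("prayer-bead") idea can be sketched generically as follows; here `fit_model` is a hypothetical stand-in for a jktebop-style fit returning best-fit parameters and a model curve, and the sketch is an illustration of the principle rather than the code used.

```
import numpy as np

def residual_permutation(t, y, fit_model, n_shifts=200, rng=None):
    """'Prayer-bead' errorbars: cyclically shift the best-fit residuals,
    add them back onto the model, refit, and take the scatter of the
    refitted parameters. This preserves any correlated (red) noise."""
    rng = np.random.default_rng(rng)
    params0, model0 = fit_model(t, y)   # hypothetical fitter: returns
    resid = y - model0                  # (parameter vector, model curve)
    samples = []
    for shift in rng.choice(len(y), size=n_shifts, replace=False):
        y_perm = model0 + np.roll(resid, shift)
        params, _ = fit_model(t, y_perm)
        samples.append(params)
    # Standard deviation of each parameter over the permuted refits.
    return np.std(np.array(samples), axis=0)
```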
### The out-of-eclipse variability
The best fits to the light curves (Figs. 2 and 3) show slow trends in the residuals which differ between the two sectors. Our preferred interpretation of this is small brightness variations present on the surface of one or both stars, with the star(s) rotating synchronously with the orbit in order to produce the consistent phasing in Figs. 2 and 3. This could be caused by starspots, and evolution of the spot configuration is a natural explanation for the differences between the residuals of the fits to the two sectors. The \(T_{\rm eff}\) values of the stars are relatively high for this explanation, but are only slightly higher than those of KIC 5359678, for which spot activity was clearly detected [48, 49]. The lack of increased residuals during eclipse suggests the spots are either a similar temperature to the rest of the photosphere and/or are located on parts of the star(s) that are not eclipsed.
We checked for the possibility of pulsations by calculating a periodogram of the residuals of the fit to the data from sector 58, using the period04 code [50].
\begin{table}
\begin{tabular}{l r} Parameter & Value \\ Fitted parameters: & \\ Time of primary eclipse (BJD\({}_{\rm TDB}\)) & \(2459894.392999\pm 0.000009\) \\ Orbital period (d) & \(1.90093830\pm 0.00000002\) \\ Orbital inclination (\({}^{\circ}\)) & \(77.294\pm 0.048\) \\ Sum of the fractional radii & \(0.31715\pm 0.00057\) \\ Ratio of the radii & \(0.877\pm 0.036\) \\ Central surface brightness ratio & \(0.8767\pm 0.0033\) \\ LD coefficient \(c\) for star A & \(0.548\pm 0.017\) \\ LD coefficient \(c\) for star B & \(0.516\pm 0.020\) \\ LD coefficient \(\alpha\) for star A & \(0.498\) (fixed) \\ LD coefficient \(\alpha\) for star B & \(0.467\) (fixed) \\ Orbital eccentricity & \(0.0\) (fixed) \\ Derived parameters: & \\ Fractional radius of star A & \(0.1690\pm 0.0028\) \\ Fractional radius of star B & \(0.1482\pm 0.0035\) \\ Light ratio \(\ell_{\rm B}/\ell_{\rm A}\) & \(0.683\pm 0.060\) \\ \end{tabular}
\end{table}
Table 2: Adopted parameters of V570 Per measured from the TESS light curves using the jktebop code. The uncertainties are 1\(\sigma\) and were determined using Monte Carlo and residual-permutation simulations.
Significant signals were found at the orbital period and half the orbital period, in agreement with the starspot hypothesis. No evidence for either \(\delta\) Scuti or \(\gamma\) Doradus pulsations was found, despite a significant number of such pulsators now being known in dEBs[51, 52, 53, 54, 55].
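period04 is an interactive tool; one way to reproduce such a check programmatically is with astropy's Lomb-Scargle implementation. The sketch below assumes arrays `t` and `resid` holding the sector 58 times (in days) and fit residuals.

```
import numpy as np
from astropy.timeseries import LombScargle

# t: times in days; resid: residual magnitudes from the sector 58 fit.
freq, power = LombScargle(t, resid).autopower(
    minimum_frequency=0.05, maximum_frequency=25.0)  # periods 0.04-20 d

p_orb = 1.90093830  # orbital period from Table 2
for f_target, label in ((1.0 / p_orb, "P_orb"), (2.0 / p_orb, "P_orb/2")):
    i = np.argmin(np.abs(freq - f_target))
    print(f"power near {label}: {power[i]:.4f}")
```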
### Radial velocities
T08 measured RVs of both stars from each of 31 high-quality echelle spectra obtained using the Asiago 1.8 m telescope. We obtained these from table 2 in T08 and modelled them using jktebop, adopting a circular orbit and separate systemic velocities (\(V_{\gamma}\)) for the two stars. We fitted for velocity amplitudes (\(K_{\rm A}\) and \(K_{\rm B}\)), \(V_{\gamma,{\rm A}}\), \(V_{\gamma,{\rm B}}\) and \(T_{0}\). The period was fixed at the value from Table 2. Uncertainties were calculated from 1000 Monte Carlo simulations[56, 35] after adjusting the sizes of the errorbars to give a reduced \(\chi^{2}\) of unity for the RVs for each star.
We found \(K_{\rm A}=113.94\pm 0.24\) km s\({}^{-1}\), \(K_{\rm B}=122.33\pm 0.22\) km s\({}^{-1}\), \(V_{\gamma,{\rm A}}=23.15\pm 0.16\) km s\({}^{-1}\) and \(V_{\gamma,{\rm B}}=23.09\pm 0.14\) km s\({}^{-1}\), where the uncertainties in the systemic velocities do not include any transformation onto a standard system. The best fits are shown in Fig. 4. We cannot compare the \(K_{\rm A}\) and \(K_{\rm B}\) values directly with the results from T08 because they did not calculate these parameters explicitly.
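For a circular orbit the RV curve of each component reduces to a sinusoid; a minimal scipy sketch for one component is shown below. The arrays `t_A` and `rv_A` are assumed to hold the times and velocities (e.g. from table 2 of T08), and the sign of \(K\) is left free so only \(|K|\) is physically meaningful here.

```
import numpy as np
from scipy.optimize import curve_fit

P = 1.90093830  # d, fixed at the photometric ephemeris value

def rv_circular(t, K, V_gamma, T0):
    """Circular-orbit RV curve; the fitted sign of K simply fixes
    which way the curve runs."""
    return V_gamma + K * np.sin(2.0 * np.pi * (t - T0) / P)

# t_A, rv_A: assumed arrays of HJD and km/s values for star A.
popt, pcov = curve_fit(rv_circular, t_A, rv_A, p0=[110.0, 20.0, t_A[0]])
K_A, Vg_A, T0_A = popt
print(abs(K_A), Vg_A, np.sqrt(np.diag(pcov)))  # values and 1-sigma errors
```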
We found an offset of \(658\pm 29\) s between the \(T_{0}\) from the RV fit and that predicted from the ephemeris in Table 2. Further investigation suggests that this offset is also present in the times of minimum light given by T08 and Hubscher et al.[57]. As the current work is the first by the author that used the lightkurve package to access TESS data, one possibility is that this approach has caused an offset in the timestamps. We checked this by using lightkurve to download TESS light curves for ZZ UMa and ZZ Boo and compared them to those used in refs.[58] and[59]. No offset in the timings was found, suggesting that the timing offset is an astrophysical effect, perhaps caused by a third component on a wider orbit around V570 Per.
V570 Per is present in the _Gaia_ DR3 _Non-single-star orbital models for sources compatible with Double Lined Spectroscopic binary model_ catalogue[8] which reports objects detected as double-lined and with a fitted spectroscopic orbit[60, 61]. The orbital parameters given are \(e=0.0029\pm 0.0019\), \(K_{1}=123.86\pm 0.28\) km s\({}^{-1}\) and \(K_{2}=113.82\pm 0.24\) km s\({}^{-1}\), based on RVs from 24 spectra. The eccentricity is very small and consistent with zero, as expected. We find that \(K_{2}\) is in good agreement with our \(K_{\rm A}\), but that \(K_{1}\) is moderately discrepant with our \(K_{\rm B}\). It is clear that the identities of the stars have been swapped, but the source of the \(K_{1}/K_{\rm B}\) discrepancy is unknown. We chose not to use these results because the spectra and RVs on which they are based are not publicly available so cannot be checked. It is relevant that Tokovinin[62] has found issues with the _Gaia_ DR3 \(K_{1}\) and \(K_{2}\) values in the sense that a significant fraction (14 of 22 in that case) have underestimated values or other problems.
### Physical properties of V570 Per
We determined the physical properties of V570 Per using the jktabsdim code[63]. The input values to this were: the \(r_{\rm A}\), \(r_{\rm B}\), \(i\) and \(P\) from Table 2; the \(K_{\rm A}\) and \(K_{\rm B}\) from the RV analysis; the \(T_{\rm eff}\) values from T08 with the errorbars increased to \(\pm 50\) K to account for the systematic uncertainties of the \(T_{\rm eff}\) scale for F-stars[64; 65; 66]; an interstellar reddening of \(E(B-V)=0.05\pm 0.02\) mag from the stilism online tool[67; 68]; the \(B\) and \(V\) magnitudes from Tycho-2[31], which are averages of 12 measurements at effectively random orbital phases; and the \(JHK_{s}\) magnitudes from 2MASS[32] converted to the Johnson system using the transformations from Carpenter[69]. The 2MASS magnitudes were taken at phase 0.10 so are representative of the average brightness of the system. The results are given in Table 3, where the errorbars have been propagated individually from each input parameter.
Figure 4: RVs of V570 Per from T08 (filled circles for star A and open circles for star B) compared to the best-fitting spectroscopic orbits from our own analysis using jktebop (solid curves). The residuals are given in the lower panels separately for the two components.
The agreement between the measurements in Table 3 and the results from T08 is good, with all quantities within \(1\sigma\). The radii of the stars have been determined to 2.3% precision, which is slightly worse than managed by T08 despite the availability of much better photometry for the current study. This arises because the precision of the radius measurements is limited by the spectroscopic light ratio applied in the photometric analysis, and perhaps from underestimated errorbars in T08. A better spectroscopic light ratio is needed to measure the radii more precisely.
The synchronous rotational velocities are consistent with the \(v\sin i\) values measured by T08. This is in agreement with our assertion that the trends in the residuals of the fit to the light curves are due to starspots rotating synchronously with the orbit.
Inversion of the _Gaia_ DR3 parallax gives a distance to the system of \(d=120.55\pm 0.52\) pc, which is \(1.4\sigma\) longer than that found in our own work via the \(K\)-band surface brightness method [63] and calibrations from Kervella et al. [71]. An increase in \(E(B-V)\) to 0.1 mag would bring our optical (\(BV\)) and infrared (\(JHK_{s}\)) distances into better agreement at the expense of shortening the distance measurement to \(115.8\pm 2.3\) pc; this reddening is significantly more than the \(0.023\pm 0.007\) mag found by T08 from the interstellar sodium and potassium lines. The shorter distance could then be compensated by adopting larger \(T_{\rm eff}\) values for the stars. The _Gaia_ distance is questionable because the renormalised unit weight error (RUWE) of 1.395 for V570 Per is near the maximum value of 1.4 for a reliable astrometric solution [27].
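For illustration, the parallax inversion quoted above, with first-order error propagation, amounts to:

```
parallax_mas, sigma_mas = 8.2952, 0.0355   # Gaia DR3 parallax (Table 1)
d_pc = 1000.0 / parallax_mas               # distance in pc: 120.55
sigma_d = d_pc * sigma_mas / parallax_mas  # first-order propagation: 0.52
print(f"d = {d_pc:.2f} +/- {sigma_d:.2f} pc")
```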
### Summary and conclusions
V570 Per is a dEB containing two F-type stars on a 1.90 d circular orbit. The system shows shallow (0.12 and 0.11 mag) partial eclipses which were discovered using the _Hipparcos_ satellite. We used TESS light curves from two sectors and published RVs from T08 to determine its physical properties. The partial eclipses make a solution of the light curve alone poorly determined, but the addition of
\begin{table}
\begin{tabular}{l c c} _Parameter_ & _Star A_ & _Star B_ \\ Mass ratio \(M_{\rm B}/M_{\rm A}\) & \(0.9314\pm 0.0026\) \\ Semimajor axis of relative orbit (\(\mathcal{R}_{\odot}^{\rm N}\)) & \(9.100\pm 0.013\) \\ Mass (\(\mathcal{M}_{\odot}^{\rm N}\)) & \(1.4489\pm 0.0063\) & \(1.3495\pm 0.0062\) \\ Radius (\(\mathcal{R}_{\odot}^{\rm N}\)) & \(1.538\pm 0.035\) & \(1.349\pm 0.032\) \\ Surface gravity (log[cgs]) & \(4.225\pm 0.020\) & \(4.308\pm 0.021\) \\ Density (\(\rho_{\odot}\)) & \(0.398\pm 0.027\) & \(0.550\pm 0.039\) \\ Synchronous rotational velocity (km s\({}^{-1}\)) & \(40.93\pm 0.92\) & \(35.89\pm 0.85\) \\ Effective temperature (K) & \(6842\pm 50\) & \(6562\pm 50\) \\ Luminosity log(\(L/\mathcal{L}_{\odot}^{\rm N}\)) & \(0.669\pm 0.023\) & \(0.483\pm 0.024\) \\ \(M_{\rm bol}\) (mag) & \(3.068\pm 0.058\) & \(3.533\pm 0.061\) \\ Distance (pc) & \(117.2\pm 2.3\) & \\ \end{tabular}
\end{table}
Table 3: _Physical properties of V570 Per defined using the nominal solar units given by IAU 2015 Resolution B3 (ref. [70])._
a spectroscopic light ratio was sufficient to reach a determinate solution. The resulting radius measurements are relatively imprecise (2.3%) due to this, and in comparison with the mass measurements (0.5%). Our measured distance to the system is in reasonable agreement with that from _Gaia_ DR3.
We compared the masses, radii and \(T_{\rm eff}\)s of the stars to predictions from the parsec stellar evolutionary models[72]. The models provide a match to these properties to within the 1\(\sigma\) errorbars for an age of 800-900 Myr and a slightly supersolar fractional metal abundance of \(Z=0.020\) (where the solar value is \(Z=0.017\)).
We also found the eclipses to arrive 11 min later than expected in the TESS light curves. Checks turned up no evidence for this being due to instrumental or data reduction issues, so it may be an astrophysical effect. The system should be monitored for eclipse timing variations caused by a possible third body. We also found residual systematics in the light curve which we attribute to weak starspots rotating synchronously with the orbit. Twenty-four observations with the _Gaia_ Radial Velocity Spectrograph[73] yielded a double-lined spectroscopic orbit for the system which is in partial agreement with the ground-based results from T08. Future observations with _Gaia_ should allow the addition of more RV measurements to this analysis, plus direct access to the _Gaia_ spectra for checking the discrepancy found for one of the two stars.
### Acknowledgements
We thank the anonymous referee for a quick and positive report. This paper includes data collected by the TESS mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA's Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ (footnote 1), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC; footnote 2). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. The following resources were used in the course of this work: the NASA Astrophysics Data System; the SIMBAD database operated at CDS, Strasbourg, France; and the ar\(\chi\)iv scientific paper preprint service operated by Cornell University.
Footnote 1: [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)
Footnote 2: [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)
|